Data Engineer

at Crewscale

Posted by Vinodh Rajamani
Remote only • 2 - 6 yrs • ₹4L - ₹40L / yr • Full time
Skills
Python
SQL
Amazon Web Services (AWS)
Data Warehouse (DWH)
Informatica
ETL
Data engineering
Crewscale – Toplyne Collaboration:

This is a Data Engineer role for the Crewscale–Toplyne collaboration.
Crewscale is the exclusive partner of Toplyne.

About Crewscale:
Crewscale is a premium technology company focused on helping companies build world-class
scalable products. We are a product-based start-up with a code assessment platform
that is used by top technology disruptors across the world.

Crewscale works with premium product companies (Indian and international) such as Swiggy,
ShareChat, Grab, Capillary, Uber, Workspan, Ovo, and many more. We are responsible for
managing infrastructure for Swiggy as well.
We focus on building only world-class tech products, and our USP is building technology that can
handle scale from 1 million to 1 billion hits.

We invite candidates who have a zeal to develop world class products to come and work with us.

Toplyne

Who are we? 👋

Toplyne is a global SaaS product built to help revenue teams at businesses with a self-service motion and a large user base identify which users to spend time on, when, and for what outcome. Think self-service or freemium-led companies like Figma, Notion, Freshworks, and Slack. We do this by helping companies recognize signals across their product engagement, sales, billing, and marketing data.

Founded in June 2021, Toplyne is backed by marquee investors like Sequoia, Together Fund, and a number of well-known angels. You can read more about us at https://bit.ly/ForbesToplyne and https://bit.ly/YourstoryToplyne.

What will you get to work on? 🏗️

  • Design, develop, and maintain scalable data pipelines and data warehouses to support continuing increases in data volume and complexity.

  • Develop and implement processes and systems to monitor data quality and data mining, ensuring production data is always accurate and available for the key partners and business processes that depend on it.

  • Perform the data analysis required to solve data-related issues and assist in their resolution.

  • Complete ownership - You’ll build highly scalable platforms and services that support rapidly growing data needs in Toplyne. There’s no instruction book, it’s yours to write. You’ll figure it out, ship it, and iterate.

What do we expect from you? 🙌🏻

  • 3-6 years of relevant work experience in a Data Engineering role.

  • Advanced working SQL knowledge and experience with relational databases and query authoring, as well as working familiarity with a variety of databases.

  • Experience building and optimising data pipelines, architectures and data sets.

  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.

  • Strong analytic skills related to working with unstructured datasets.

  • A good understanding of Airflow, Spark, NoSQL databases, and Kafka is nice to have.
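To give a concrete flavour of the pipeline work described above, here is a minimal, purely illustrative extract-transform-load step in plain Python. The data, function names, and quality rule are all hypothetical; a real pipeline at this scale would run on an orchestrator such as Airflow and write to a warehouse rather than return a dict.

```python
# Toy ETL sketch (hypothetical data and names, not Toplyne's actual stack).

def extract():
    # In production this would pull from an API, S3, or a database.
    return [
        {"user_id": 1, "event": "signup", "revenue": 0.0},
        {"user_id": 2, "event": "purchase", "revenue": 49.0},
        {"user_id": 2, "event": "purchase", "revenue": 15.0},
        {"user_id": 3, "event": None, "revenue": 10.0},  # bad record
    ]

def transform(rows):
    # Data-quality gate: drop records with missing fields, in the spirit of
    # "ensuring production data is always accurate" above.
    return [r for r in rows if r["event"] is not None]

def load(rows):
    # Aggregate revenue per user; a real load step would write to a warehouse.
    totals = {}
    for r in rows:
        totals[r["user_id"]] = totals.get(r["user_id"], 0.0) + r["revenue"]
    return totals

if __name__ == "__main__":
    print(load(transform(extract())))  # {1: 0.0, 2: 64.0}
```

The same extract/transform/load split maps directly onto tasks in a scheduled DAG, with the quality gate as its own monitored step.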


About Crewscale

CrewScale is a one-stop solution for end-to-end remote talent acquisition. We'll connect you to pre-vetted, domain-relevant tech talent around the world.
Founded 2018  •  Products & Services  •  Profitable

Similar jobs

Data Engineering/Data Analytics

at Leading StartUp Focused On Employee Growth

Agency job
via Qrata
Data engineering
Data Analytics
Big Data
Apache Spark
Airflow
Data engineer
Cassandra
Amazon Redshift
Apache Sqoop
Amazon EC2
Python
ETL
Amazon Web Services (AWS)
Bengaluru (Bangalore) • 2 - 6 yrs • ₹25L - ₹45L / yr
  • 2+ years of experience in a Data Engineer role.
  • Proficiency in Linux.
  • Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
  • Must have SQL knowledge and experience working with relational databases and query authoring, as well as familiarity with databases including MySQL, Mongo, Cassandra, and Athena.
  • Must have experience with Python/Scala.
  • Must have experience with Big Data technologies like Apache Spark.
  • Must have experience with Apache Airflow.
  • Experience with data pipelines and ETL tools like AWS Glue.
Job posted by Blessy Fernandes

Data Engineer -Lead

at Vahak

Founded 2016  •  Product  •  20-100 employees  •  Raised funding
SQL
Data engineering
Big Data
Python
R Language
Meta-data management
Data Analytics
Tableau
PowerBI
Bengaluru (Bangalore) • 4 - 12 yrs • ₹15L - ₹40L / yr

Who Are We?

 

Vahak (https://www.vahak.in) is India’s largest and most trusted online transport marketplace and directory for road transport businesses and individual commercial vehicle (trucks, trailers, containers, Hyva, LCVs) owners, covering online truck and load booking, transport business branding, and transport business network expansion. Lorry owners can find intercity and intracity loads from all over India and connect with other businesses to find trusted transporters and the best deals in the Indian logistics services market. With the Vahak app, users can book loads and lorries from a live transport marketplace with over 7 Lakh+ transporters and lorry owners across 10,000+ locations for daily transport requirements.

Vahak has raised $5+ million in a Pre-Series A round from RTP Global, with participation from Luxor Capital and Leo Capital. The other marquee angel investors include Kunal Shah, Founder and CEO, CRED; Jitendra Gupta, Founder and CEO, Jupiter; Vidit Aatrey and Sanjeev Barnwal, Co-founders, Meesho; Mohd Farid, Co-founder, Sharechat; Amrish Rau, CEO, Pine Labs; Harsimarbir Singh, Co-founder, Pristyn Care; Rohit and Kunal Bahl, Co-founders, Snapdeal; and Ravish Naresh, Co-founder and CEO, Khatabook.

 

Lead Data Engineer:

We at Vahak are looking for an enthusiastic and passionate Data Engineering lead to join our young and diverse team. You will play a key role in the data science group, working with state-of-the-art big data technologies, building pipelines for various data sources, and developing the organization’s data lake and data warehouse.

Our goal as a group is to drive powerful big data analytics products with scalable results. We love people who are humble and collaborative, with a hunger for excellence.

Responsibilities

  • Act as a technical resource for the Data Science team and be involved in creating and implementing current and future analytics projects, such as data lake and data warehouse design.
  • Analyze and design ETL solutions to store/fetch data from multiple systems like Google Analytics, CleverTap, CRM systems, etc.
  • Develop and maintain data pipelines for real-time analytics as well as batch analytics use cases.
  • Collaborate with data scientists and actively work in the feature engineering and data preparation phases of model building.
  • Collaborate with product development and DevOps teams in implementing data collection and aggregation solutions.
  • Ensure quality and consistency of the data in the data warehouse and follow best data governance practices.
  • Analyze large amounts of information to discover trends and patterns.
  • Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques, and business strategies.

Requirements:

  • Bachelor’s or Master’s in a highly numerate discipline such as Engineering, Science, or Economics
  • 5+ years of proven experience working as a Data Engineer, preferably in an e-commerce/web-based or consumer technologies company
  • Hands-on experience working with big data tools like Hadoop, Spark, Flink, Kafka, and so on
  • Good understanding of the AWS ecosystem for big data analytics
  • Hands-on experience creating data pipelines, either using tools or by independently writing scripts
  • Hands-on experience with scripting languages like Python, Scala, Unix shell scripting, and so on
  • Strong problem-solving skills with an emphasis on product development
  • Experience with business intelligence tools, e.g. Tableau or Power BI, is an added advantage (not mandatory)

 

Job posted by Vahak Talent

Data Engineer - Global Media Agency

at client of Merito

Agency job
via Merito
Python
SQL
Tableau
PowerBI
PHP
snowflake
Data engineering
Mumbai • 3 - 8 yrs • Best in industry

Our client is the world’s largest media investment company and is part of WPP. In fact, they are responsible for one in every three ads you see globally. We are currently looking for a Senior Software Engineer to join us. In this role, you will be responsible for coding and implementing the custom marketing applications that the Tech COE builds for its customers, and for managing a small team of developers.

 

What your day job looks like:

  • Serve as a Subject Matter Expert on data usage – extraction, manipulation, and inputs for analytics
  • Develop data extraction and manipulation code based on business rules
  • Develop automated and manual test cases for the code written
  • Design and construct data store and procedures for their maintenance
  • Perform data extract, transform, and load activities from several data sources.
  • Develop and maintain strong relationships with stakeholders
  • Write high quality code as per prescribed standards.
  • Participate in internal projects as required

 
Minimum qualifications:

  • B.Tech/MCA or equivalent preferred
  • At least 3 years of hands-on experience in Big Data, ETL development, and data processing


What you’ll bring:

  • Strong experience working with Snowflake, SQL, and PHP/Python.
  • Strong experience writing complex SQL queries.
  • Good communication skills.
  • Good experience working with BI tools like Tableau or Power BI.
  • Sqoop, Spark, EMR, and Hadoop/Hive are good to have.

 

 

Job posted by Merito Talent

Data Engineer

at Amagi Media Labs

Founded 2008  •  Product  •  500-1000 employees  •  Profitable
Data engineering
Spark
Scala
Hadoop
Apache Hadoop
Apache Spark
Bengaluru (Bangalore), Noida • 5 - 9 yrs • ₹10L - ₹17L / yr
We are looking for a Data Engineer with:

  • Spark
  • Scala
  • Hadoop

Experience: 5 to 9 years
Notice period: 15 to 30 days
Location: Bangalore / Noida
Job posted by Rajesh C

Data Engineer

at AI-powered cloud-based SaaS solution

Agency job
via wrackle
Data engineering
Big Data
Data Engineer
Big Data Engineer
Hibernate (Java)
Data Structures
Agile/Scrum
SaaS
Cassandra
Spark
Python
NOSQL Databases
Hadoop
HDFS
MapReduce
AWS CloudFormation
EMR
Amazon S3
Apache Kafka
Apache ZooKeeper
Systems Development Life Cycle (SDLC)
Java
YARN
Bengaluru (Bangalore) • 2 - 10 yrs • ₹15L - ₹50L / yr
Responsibilities

  • Contribute to the gathering of functional requirements, developing technical specifications, and project and test planning
  • Demonstrate technical expertise and solve challenging programming and design problems
  • Roughly 80% hands-on coding
  • Generate technical documentation and PowerPoint presentations to communicate architectural and design options, and educate development teams and business users
  • Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
  • Work cross-functionally with various Bidgely teams, including product management, QA/QE, various product lines, and/or business units, to drive forward results

Requirements
  • BS/MS in computer science or equivalent work experience
  • 2-4 years’ experience designing and developing applications in Data Engineering
  • Hands-on experience with the Big Data ecosystem: Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, ZooKeeper
  • Expertise with any of the following object-oriented languages: Java/J2EE, Scala, Python
  • Strong leadership experience: leading meetings, presenting as required
  • Excellent communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences
  • Expertise in the software design/architecture process
  • Expertise with unit testing and Test-Driven Development (TDD)
  • Experience with the cloud, or AWS in particular, is preferable
  • A good understanding of, and the ability to develop, software, prototypes, or proofs of concept (POCs) for various Data Engineering requirements
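As a small, hedged illustration of the unit-testing/TDD practice asked for above: write the test first, then the minimal code that makes it pass. The helper function and its name here are hypothetical, chosen only because deduplication is a common pipeline chore.

```python
# TDD-style sketch (hypothetical helper, not this employer's actual code).
import unittest

def dedupe_keep_order(items):
    # Drop duplicate records while preserving first-seen order.
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

class DedupeTest(unittest.TestCase):
    def test_removes_duplicates_preserving_order(self):
        self.assertEqual(dedupe_keep_order([3, 1, 3, 2, 1]), [3, 1, 2])

    def test_empty_input(self):
        self.assertEqual(dedupe_keep_order([]), [])

if __name__ == "__main__":
    # exit=False so the script keeps running after the test report.
    unittest.main(argv=["tdd_sketch"], exit=False)
```

In a TDD workflow the two test methods would exist (and fail) before `dedupe_keep_order` is implemented.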
Job posted by Naveen Taalanki

Senior Big Data Engineer

at 6sense

Founded 2013  •  Product  •  1000-5000 employees  •  Raised funding
Spark
Hadoop
Big Data
Data engineering
PySpark
Apache Spark
Data Structures
Python
Remote only • 5 - 9 yrs • Best in industry

It’s no surprise that 6sense is named a top workplace year after year — we have industry-leading technology developed and taken to market by a world-class team. 6sense is Top Rated on Glassdoor with a 4.9/5 and our CEO Jason Zintak was recognized as the #1 CEO in the small & medium business category by Glassdoor’s 2021 Top CEO Employees Choice Awards.

In 2021, the company was recognized for having the Best Company for Diversity, Best Company for Women, Best CEO, Best Company Culture, Best Company Perks & Benefits and Happiest Employees from the employee feedback platform Comparably. In addition, 6sense has also won several accolades that demonstrate its reputation as an employer of choice including the Glassdoor Best Place to Work (2022), TrustRadius Tech Cares (2021) and Inc. Best Workplaces (2022, 2021, 2020, 2019).

6sense reinvents the way organizations create, manage, and convert pipeline to revenue. The 6sense Revenue AI captures anonymous buying signals, predicts the right accounts to target at the ideal time, and recommends the channels and messages to boost revenue performance. Removing guesswork, friction and wasted sales effort, 6sense empowers sales, marketing, and customer success teams to significantly improve pipeline quality, accelerate sales velocity, increase conversion rates, and grow revenue predictably.

 

6sense is seeking a Data Engineer to become part of a team designing, developing, and deploying its customer-centric applications.

A Data Engineer at 6sense will have the opportunity to 

  • Create, validate, and maintain optimal data pipelines; assemble large, complex data sets that meet functional and non-functional business requirements.
  • Improve our current data pipelines, i.e. improve their performance, remove redundancy, and figure out a way to test before vs. after rollout.
  • Debug any issues that arise from data pipelines, especially performance issues.
  • Experiment with new tools and new versions of Hive/Presto, etc.

Required qualifications and must have skills 

  • Excellent analytical and problem-solving skills
  • 6+ years work experience showing growth as a Data Engineer.
  • Strong hands-on experience with Big Data Platforms like Hadoop / Hive / Spark / Presto
  • Experience with writing Hive / Presto UDFs in Java
  • Strong experience in writing complex, optimized SQL queries across large data sets
  • Experience with optimizing queries and underlying storage
  • Comfortable with Unix / Linux command line
  • BE/BTech/BS or equivalent 
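For a flavour of the "complex, optimized SQL across large data sets" requirement, here is a hedged sketch of a standard window-function query, run against an in-memory SQLite database purely for self-containment (the role's actual engines are Hive/Presto; the table and data are invented). SQLite supports window functions from version 3.25 onward.

```python
# Illustrative windowed SQL: latest score per account via ROW_NUMBER().
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (account TEXT, day INTEGER, score REAL);
INSERT INTO events VALUES
  ('acme', 1, 0.2), ('acme', 2, 0.7), ('acme', 3, 0.9),
  ('beta', 1, 0.5), ('beta', 2, 0.4);
""")

# Rank rows within each account by recency, then keep only the newest row.
rows = conn.execute("""
SELECT account, score FROM (
  SELECT account, score,
         ROW_NUMBER() OVER (PARTITION BY account ORDER BY day DESC) AS rn
  FROM events
) WHERE rn = 1
ORDER BY account
""").fetchall()
print(rows)  # [('acme', 0.9), ('beta', 0.4)]
```

The same PARTITION BY / ROW_NUMBER pattern is the usual way to deduplicate "latest record per key" in Hive and Presto as well.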

Nice to have Skills 

  • Used key-value stores or NoSQL databases 
  • Good understanding of Docker and container platforms like Mesos and Kubernetes 
  • Security-first architecture approach 
  • Application benchmarking and optimization  
Job posted by Shrutika Dhawan

Data Engineer

at Bungee Tech India

Founded 2018  •  Products & Services  •  20-100 employees  •  Raised funding
Big Data
Hadoop
Apache Hive
Spark
ETL
Data engineering
Databases
Performance tuning
Remote, NCR (Delhi | Gurgaon | Noida), Chennai • 5 - 10 yrs • ₹10L - ₹30L / yr

Company Description

At Bungee Tech, we help retailers and brands meet customers everywhere, on every occasion. We believe that accurate, high-quality data matched with compelling market insights empowers retailers and brands to keep their customers at the center of all the innovation and value they deliver. 

 

We provide a clear and complete omnichannel picture of their competitive landscape to retailers and brands. We collect billions of data points every day and multiple times in a day from publicly available sources. Using high-quality extraction, we uncover detailed information on products or services, which we automatically match, and then proactively track for price, promotion, and availability. Plus, anything we do not match helps to identify a new assortment opportunity.

 

Empowered with this unrivalled intelligence, we unlock compelling analytics and insights that, once blended with verified partner data from trusted sources such as Nielsen, paint a complete, consolidated picture of the competitive landscape.

We are looking for a Big Data Engineer who will work on the collecting, storing, processing, and analyzing of huge sets of data. The primary focus will be on choosing optimal solutions to use for these purposes, then maintaining, implementing, and monitoring them.

You will also be responsible for integrating them with the architecture used in the company.

 

We're working on the future. If you are seeking an environment where you can drive innovation, if you want to apply state-of-the-art software technologies to solve real-world problems, and if you want the satisfaction of providing visible benefit to end users in an iterative, fast-paced environment, this is your opportunity.

 

Responsibilities

As an experienced member of the team, in this role, you will:

 

  • Contribute to evolving the technical direction of analytical systems and play a critical role in their design and development

 

  • You will research, design, code, troubleshoot, and support. What you create is also what you own.

 

  • Develop the next generation of automation tools for monitoring and measuring data quality, with associated user interfaces.

 

  • Be able to broaden your technical skills and work in an environment that thrives on creativity, efficient execution, and product innovation.

 

BASIC QUALIFICATIONS

  • Bachelor’s degree or higher in an analytical area such as Computer Science, Physics, Mathematics, Statistics, Engineering or similar.
  • 5+ years of relevant professional experience in Data Engineering and Business Intelligence
  • 5+ years with advanced SQL (analytical functions), ETL, and data warehousing.
  • Strong knowledge of data warehousing concepts, including data warehouse technical architectures, infrastructure components, ETL/ ELT and reporting/analytic tools and environments, data structures, data modeling and performance tuning.
  • Ability to effectively communicate with both business and technical teams.
  • Excellent coding skills in Java, Python, C++, or equivalent object-oriented programming language
  • Understanding of relational and non-relational databases and basic SQL
  • Proficiency with at least one of these scripting languages: Perl / Python / Ruby / shell script

 

PREFERRED QUALIFICATIONS

 

  • Experience with building data pipelines from application databases.
  • Experience with AWS services - S3, Redshift, Spectrum, EMR, Glue, Athena, ELK etc.
  • Experience working with Data Lakes.
  • Experience providing technical leadership and mentoring other engineers on best practices in the data engineering space
  • Sharp problem-solving skills and the ability to resolve ambiguous requirements
  • Experience working with Big Data
  • Knowledge of and experience working with Hive and the Hadoop ecosystem
  • Knowledge of Spark
  • Experience working with Data Science teams
Job posted by Abigail David

Data Engineer

at NSEIT

Founded 1999  •  Products & Services  •  100-1000 employees  •  Profitable
Data engineering
Big Data
Data Engineer
Amazon Web Services (AWS)
NOSQL Databases
Programming
Remote only • 7 - 12 yrs • ₹20L - ₹40L / yr
  • Design AWS data ingestion frameworks and pipelines based on the specific needs driven by the Product Owners and user stories
  • Experience building a Data Lake using AWS, and hands-on experience with S3, EKS, ECS, AWS Glue, AWS KMS, AWS Firehose, and EMR
  • Experience with Apache Spark programming on Databricks
  • Experience working with NoSQL databases such as Cassandra, HBase, and Elasticsearch
  • Hands-on experience leveraging CI/CD to rapidly build and test application code
  • Expertise in data governance and data quality
  • Experience working with PCI data and working with data scientists is a plus
  • At least 4+ years of experience with the following Big Data frameworks: file formats (Parquet, AVRO, ORC), resource management, distributed processing, and RDBMS
  • 5+ years of experience designing and developing data pipelines for data ingestion or transformation using AWS technologies
Job posted by Vishal Pednekar

Senior Data Engineer

at Bookr Inc

Founded 2019  •  Products & Services  •  20-100 employees  •  Raised funding
Big Data
Hadoop
Spark
Data engineering
Data Warehouse (DWH)
ETL
EMR
Amazon Redshift
PostgreSQL
SQL
Scala
Java
Python
Airflow
Remote, Chennai, Bengaluru (Bangalore) • 4 - 7 yrs • ₹15L - ₹35L / yr

In this role you'll get.

  • Be part of the core team for the data platform: set up the platform foundation while adhering to all required quality standards and design patterns
  • Write efficient, quality code that can scale
  • Adopt Bookr quality standards; recommend process standards and best practices
  • Research, learn, and adapt new technologies to solve problems and improve existing solutions
  • Contribute to the engineering excellence backlog
  • Identify performance issues
  • Conduct effective code and design reviews
  • Improve reliability of the overall production system by proactively identifying patterns of failure
  • Lead and mentor junior engineers by example
  • Take end-to-end ownership of stories (including design, serviceability, performance, and failure handling)
  • Strive hard to provide the best experience to anyone using our products
  • Conceptualise innovative and elegant solutions to challenging big data problems
  • Engage with Product Management and Business to drive the agenda, set your priorities, and deliver awesome products
  • Adhere to company policies, procedures, mission, values, and standards of ethics and integrity

 

On day one we'll expect you to.

  • B.E/B.Tech from a reputed institution
  • Minimum 5 years of software development experience, with at least a year of experience leading/guiding people
  • Expert coding skills in Python/PySpark or Java/Scala
  • Deep understanding of the Big Data ecosystem - Hadoop and Spark
  • Must have project experience with Spark
  • Ability to independently troubleshoot Spark jobs
  • Good understanding of distributed systems
  • Fast learner who quickly adapts to new technologies
  • We prefer individuals with high ownership and commitment
  • Expert hands-on experience with RDBMS
  • Ability to work independently as well as collaboratively in a team

 

Added bonuses you have.

  • Hands-on experience with EMR/Glue/Databricks
  • Hands-on experience with Airflow
  • Hands-on experience with the AWS Big Data ecosystem

 

We are looking for passionate engineers who are always hungry for challenging problems. We believe in creating an opportunistic yet balanced work environment for savvy, entrepreneurial tech individuals. We thrive on remote work, with our team working across multiple time zones.

 

 

  • Flexible hours & remote work - We are a results-focused bunch, so we encourage you to work whenever and wherever you feel most creative and focused.
  • Unlimited PTO - We want you to feel free to recharge your batteries when you need it!
  • Stock options - Opportunity to participate in the company stock plan.
  • Flat hierarchy - Team leaders at your fingertips.
  • BFC (stands for bureaucracy-free company) - We're action-oriented and don't bother with dragged-out meetings or pointless admin exercises; we'd rather get our hands dirty!
  • Working alongside leaders - Being part of the core team gives you the opportunity to work directly with the founding and management team.

 

Job posted by Nimish Mehta

Data Engineer

at Service based company

pandas
PySpark
Big Data
Data engineering
Performance optimization
OO concepts
SQL
Python
Remote only • 3 - 8 yrs • ₹8L - ₹13L / yr
Data pre-processing, data transformation, data analysis, and feature engineering.
The candidate must have expertise in ADF (Azure Data Factory) and be well versed in Python.
Performance optimization of scripts (code) and productionizing of code (SQL, Pandas, Python or PySpark, etc.)

Required skills:

  • Bachelor's in Computer Science, Data Science, Computer Engineering, IT, or equivalent
  • Fluency in Python (Pandas), PySpark, SQL, or similar
  • Azure Data Factory experience (min. 12 months)
  • Able to write efficient code using traditional and OO concepts and modular programming, following the SDLC process
  • Experience in production optimization and end-to-end performance tracing (technical root cause analysis)
  • Ability to work independently, with demonstrated experience in project or program management
  • Azure experience: ability to translate data scientist code in Python and make it efficient (production-ready) for cloud deployment
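As a hedged illustration of the "performance optimization of scripts" skill this posting asks for: one of the most common wins is replacing a linear-scan membership test with a constant-time set lookup. The data here is synthetic; the same idea carries over to making Pandas/PySpark code production-efficient.

```python
# Micro-optimization sketch: list scan vs. set lookup (synthetic data).
import timeit

data = list(range(2000))
lookup_list = list(range(0, 4000, 2))  # even numbers, as a list
lookup_set = set(lookup_list)          # same values, as a set

def slow():
    # O(n*m): each `in` test scans the whole list.
    return [x for x in data if x in lookup_list]

def fast():
    # O(n): set membership is O(1) on average.
    return [x for x in data if x in lookup_set]

assert slow() == fast()  # identical results...
print(timeit.timeit(fast, number=5) <
      timeit.timeit(slow, number=5))  # True: ...but far faster
```

Profiling first and then removing this kind of accidental quadratic behaviour is exactly the "technical root cause analysis" the requirements describe.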
Job posted by Sonali Kamani