Shiprocket

Data Engineer

Posted by Kailuni Lanah
4 - 10 yrs
₹25L - ₹35L / yr
Gurugram
Skills
PySpark
Data engineering
Big Data
Hadoop
Spark
Apache Hive
Amazon Web Services (AWS)
ETL
ETL management

We are seeking an experienced Senior Data Platform Engineer to join our team. The ideal candidate will have extensive experience with PySpark, Airflow, Presto, Hive, Kafka, and Debezium, and a passion for building scalable, reliable data platforms.

Responsibilities:

  • Design, develop, and maintain our data platform architecture using PySpark, Airflow, Presto, Hive, Kafka, and Debezium.
  • Develop and maintain ETL processes to ingest, transform, and load data from various sources into our data platform (see the CDC ingestion sketch after this list).
  • Work closely with data analysts, data scientists, and other stakeholders to understand their requirements and design solutions that meet their needs.
  • Implement and maintain data governance policies and procedures to ensure data quality, privacy, and security.
  • Continuously monitor and optimize the performance of our data platform to ensure scalability, reliability, and cost-effectiveness.
  • Keep up-to-date with the latest trends and technologies in the field of data engineering and share knowledge and best practices with the team.
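
To make the Kafka + Debezium ingestion pattern above concrete, here is a minimal PySpark Structured Streaming sketch that parses Debezium change events from a Kafka topic and lands them on a data lake. The broker address, topic, schema, and S3 paths are hypothetical placeholders, not Shiprocket's actual setup, and the Kafka source assumes the spark-sql-kafka connector is on the classpath.

```python
# Minimal sketch: consume Debezium CDC events from Kafka with PySpark
# Structured Streaming. Broker, topic, schema, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("cdc-ingest").getOrCreate()

# Debezium wraps each row change in an envelope; we model only the
# fields used here ("op" = c/u/d, "after" = row state after the change).
envelope = StructType([
    StructField("op", StringType()),
    StructField("after", StringType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")    # hypothetical broker
    .option("subscribe", "dbserver1.inventory.orders")  # hypothetical topic
    .option("startingOffsets", "earliest")
    .load()
)

changes = (
    raw.select(from_json(col("value").cast("string"), envelope).alias("m"))
       .where(col("m.op").isin("c", "u"))   # keep creates and updates
       .select(col("m.after").alias("row_json"))
)

# Land the parsed change stream on the data lake for downstream Hive/Presto.
query = (
    changes.writeStream
    .format("parquet")
    .option("path", "s3a://datalake/orders/")             # hypothetical path
    .option("checkpointLocation", "s3a://datalake/_chk/orders/")
    .start()
)
query.awaitTermination()
```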

Requirements:

  • Bachelor's degree in Computer Science, Information Technology, or related field.
  • 5+ years of experience in data engineering or related fields.
  • Strong proficiency in PySpark, Airflow, Presto, Hive, data lakes, and Debezium (an Airflow orchestration sketch follows this list).
  • Experience with data warehousing, data modeling, and data governance.
  • Experience working with large-scale distributed systems and cloud platforms (e.g., AWS, GCP, Azure).
  • Strong problem-solving skills and ability to work independently and collaboratively.
  • Excellent communication and interpersonal skills.
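
Since the requirements pair Airflow with PySpark, below is a minimal sketch of how such orchestration typically looks, assuming Airflow 2.x-style imports. The DAG id, schedule, and spark-submit command are illustrative assumptions.

```python
# Minimal sketch: an Airflow DAG orchestrating a daily PySpark ETL job.
# DAG id, schedule, and the spark-submit command are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Submit the Spark job for the logical date ({{ ds }} is templated).
    extract_load = BashOperator(
        task_id="spark_etl",
        bash_command="spark-submit /opt/jobs/orders_etl.py --date {{ ds }}",
    )
    notify = BashOperator(
        task_id="notify",
        bash_command="echo 'ETL for {{ ds }} complete'",
    )
    extract_load >> notify  # run notification only after the ETL succeeds
```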

If you are a self-motivated and driven individual with a passion for data engineering and a strong background in PySpark, Airflow, Presto, Hive, data lakes, and Debezium, we encourage you to apply for this exciting opportunity. We offer competitive compensation, comprehensive benefits, and a collaborative work environment that fosters innovation and growth.


About Shiprocket

Founded: 2012
Stage: Profitable
About
ShipRocket is India's most used eCommerce logistics and shipping software solution. It offers features such as COD (Cash on Delivery), prepaid delivery, automated shipping, multiple couriers, and a rate calculator, and lets you ship products to customers across 26,000+ pin codes in India.
Connect with the team: Jisha Bawa, Sunil Kumar, Pooja Bhatt, Ashish Kataria, Kailuni Lanah

Similar jobs

Optisol Business Solutions Pvt Ltd
Veeralakshmi K
Posted by Veeralakshmi K
Remote, Chennai, Coimbatore, Madurai
4 - 10 yrs
₹10L - ₹15L / yr
Python
SQL
Amazon Redshift
Amazon RDS
AWS Simple Notification Service (SNS)
+5 more

Role Summary


As a Data Engineer, you will be an integral part of our Data Engineering team, supporting an event-driven, serverless data engineering pipeline on the AWS cloud. You will assist in the end-to-end analysis, development, and maintenance of data pipelines and systems (DataOps), and work closely with fellow data engineers and production support to ensure the availability and reliability of data for analytics and business intelligence.


Requirements:


·      Around 4 years of working experience in data warehousing / BI systems.

·      Strong hands-on experience with Snowflake and strong programming skills in Python (see the sketch after this list).

·      Strong hands-on SQL skills.

·      Knowledge of any of the cloud databases, such as Snowflake, Redshift, Google BigQuery, RDS, etc.

·      Knowledge of dbt for cloud databases.

·      AWS services such as SNS, SQS, ECS, Docker, Kinesis & Lambda functions.

·      Solid understanding of ETL processes and data warehousing concepts.

·      Familiarity with version control systems (e.g., Git, Bitbucket) and collaborative development practices in an agile framework.

·      Experience with Scrum methodologies.

·      Infrastructure build tools such as CFT / Terraform are a plus.

·      Knowledge of Denodo, data cataloguing tools & data quality mechanisms is a plus.

·      Strong team player with good communication skills.
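
Since the role pairs Snowflake with Python, here is a minimal sketch using the snowflake-connector-python package to run a transformation step; the account, credentials, warehouse, and table names are all hypothetical placeholders.

```python
# Minimal sketch: run a transformation in Snowflake from Python using
# snowflake-connector-python. All credentials and object names are
# hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",   # hypothetical account identifier
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    # Idempotent load step: rebuild a reporting table from staged data.
    cur.execute("""
        CREATE OR REPLACE TABLE ANALYTICS.MARTS.DAILY_ORDERS AS
        SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
        FROM ANALYTICS.STAGING.ORDERS
        GROUP BY order_date
    """)
    print(cur.sfqid)  # Snowflake query id, useful for DataOps tracing
finally:
    conn.close()
```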


Overview Optisol Business Solutions


OptiSol was named to this year's Best Companies to Work For list by Great Place to Work. We are a team of 500+ Agile employees with a development center in India and global offices in the US, UK, Australia, Ireland, Sweden, and Dubai. Over a 16+ year journey we have built 500+ digital solutions for 200+ happy and satisfied clients across 24 countries.


Benefits, working with Optisol


·      Great Learning & Development program

·      Flextime, Work-at-Home & Hybrid Options

·      A knowledgeable, high-achieving, experienced & fun team.

·      Spot Awards & Recognition.

·      The chance to be a part of the next success story.

·      A competitive base salary.


More than just a job, we offer an opportunity to grow. Are you someone who wants to build your future and your dream? We have the job for you to make that dream come true.

Chennai, Tirunelveli
5 - 7 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+7 more

Greetings!


We are looking for a Data Engineer for one of our premium clients, for their Chennai and Tirunelveli locations.


Required Education/Experience


● Bachelor's degree in Computer Science or a related field

● 5-7 years' experience in the following:

● Snowflake and Databricks management

● Python and AWS Lambda (see the Lambda sketch after this list)

● Scala and/or Java

● Data integration services, SQL, and Extract, Load, Transform (ELT)

● Azure or AWS for development and deployment

● Jira or a similar tool during the SDLC

● Experience managing a codebase using a code repository such as Git/GitHub or Bitbucket

● Experience working with a data warehouse

● Familiarity with structured and semi-structured data formats including JSON, Avro, ORC, Parquet, or XML

● Exposure to working in an agile environment
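
Given the "Python and AWS Lambda" requirement, here is a minimal hedged sketch of an event-driven Lambda handler that reacts to an S3 upload and forwards file metadata to an SQS queue for downstream loading; the queue URL and event wiring are hypothetical.

```python
# Minimal sketch: a Python AWS Lambda handler reacting to an S3 event and
# forwarding file metadata to SQS. Queue URL and bucket layout are
# hypothetical.
import json

import boto3

sqs = boto3.client("sqs")  # created once per container, reused across invocations
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/load-queue"  # hypothetical

def handler(event, context):
    # S3 put notifications arrive as a list of records.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"bucket": bucket, "key": key}),
        )
    return {"processed": len(event.get("Records", []))}
```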


TekClan
Tanu Shree
Posted by Tanu Shree
Chennai
4 - 7 yrs
Best in industry
MS SQL Server
SQL Programming
SQL
ETL
ETL management
+5 more

Company - Tekclan Software Solutions

Position – SQL Developer

Experience – At least 4 years in MS SQL Server, SQL programming, and ETL development.

Location - Chennai


We are seeking a highly skilled SQL Developer with expertise in MS SQL Server, SSRS, SQL programming, writing stored procedures, and proficiency in ETL using SSIS. The ideal candidate will have a strong understanding of database concepts, query optimization, and data modeling.


Responsibilities:

1. Develop, optimize, and maintain SQL queries, stored procedures, and functions for efficient data retrieval and manipulation (see the Python invocation sketch after this list).

2. Design and implement ETL processes using SSIS for data extraction, transformation, and loading from various sources.

3. Collaborate with cross-functional teams to gather business requirements and translate them into technical specifications.

4. Create and maintain data models, ensuring data integrity, normalization, and performance.

5. Generate insightful reports and dashboards using SSRS to facilitate data-driven decision making.

6. Troubleshoot and resolve database performance issues, bottlenecks, and data inconsistencies.

7. Conduct thorough testing and debugging of SQL code to ensure accuracy and reliability.

8. Stay up-to-date with emerging trends and advancements in SQL technologies and provide recommendations for improvement.

9. Should be able to work independently as an individual contributor.
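
Since the responsibilities centre on stored procedures and ETL against SQL Server, the sketch below shows one common way to invoke a stored procedure from Python via pyodbc; the connection string, procedure name, and table are hypothetical.

```python
# Minimal sketch: invoking a SQL Server stored procedure from Python via
# pyodbc. The DSN, procedure name, and parameter are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlhost;DATABASE=Sales;UID=etl;PWD=***"  # hypothetical
)
cur = conn.cursor()

# ODBC call syntax runs the proc with a bound parameter, avoiding
# string concatenation and the injection risk that comes with it.
cur.execute("{CALL dbo.usp_LoadDailySales (?)}", "2024-01-31")
conn.commit()

# Read back a few rows to sanity-check the load.
for row in cur.execute("SELECT TOP 5 * FROM dbo.DailySales ORDER BY SaleDate DESC"):
    print(row)
conn.close()
```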


Requirements:

1. At least 4 years of experience in MS SQL Server, SQL programming, and ETL development.

2. Proven experience as a SQL Developer with a strong focus on MS SQL Server.

3. Proficiency in SQL programming, including writing complex queries, stored procedures, and functions.

4. In-depth knowledge of ETL processes and hands-on experience with SSIS.

5. Strong expertise in creating reports and dashboards using SSRS.

6. Familiarity with database design principles, query optimization, and data modeling.

7. Experience with performance tuning and troubleshooting SQL-related issues.

8. Excellent problem-solving skills and attention to detail.

9. Strong communication and collaboration abilities.

10. Ability to work independently and handle multiple tasks simultaneously.


Preferred Skills:

1. Certification in MS SQL Server or related technologies.

2. Knowledge of other database systems such as Oracle or MySQL.

3. Familiarity with data warehousing concepts and tools.

4. Experience with version control systems.

Persistent Systems
at Persistent Systems
1 video
1 recruiter
Agency job
via Milestone Hr Consultancy by Haina khan
Bengaluru (Bangalore), Hyderabad, Pune
9 - 16 yrs
₹7L - ₹32L / yr
Big Data
skill iconScala
Spark
Hadoop
skill iconPython
+1 more
Greetings!

We have an urgent requirement for the post of Big Data Architect at a reputed MNC.

Location: Pune/Nagpur, Goa, Hyderabad/Bangalore

Job Requirements:

  • 9+ years of total experience, preferably in the big data space.
  • Creating Spark applications using Scala to process data.
  • Experience in scheduling and troubleshooting/debugging Spark jobs run in steps (e.g., as EMR steps).
  • Experience in Spark job performance tuning and optimization (see the tuning sketch after this list).
  • Should have experience in processing data using Kafka/Python.
  • Should have experience and understanding in configuring Kafka topics to optimize performance.
  • Should be proficient in writing SQL queries to process data in a data warehouse.
  • Hands-on experience working with Linux commands to troubleshoot/debug issues and creating shell scripts to automate tasks.
  • Experience with AWS services like EMR.
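
To illustrate the kind of Spark performance tuning the list mentions, here is a small PySpark sketch setting a few common knobs; the values and S3 paths are illustrative assumptions, not recommendations for any specific cluster.

```python
# Minimal sketch: common Spark performance knobs set from PySpark.
# Values are illustrative, not prescriptions for a particular cluster.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tuned-batch-job")
    .config("spark.sql.shuffle.partitions", "400")   # size to data volume
    .config("spark.sql.adaptive.enabled", "true")    # AQE re-plans shuffles
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

events = spark.read.parquet("s3://bucket/events/")   # hypothetical path
orders = spark.read.parquet("s3://bucket/orders/")

# Repartition by the join key before a heavy join to limit skew; cache
# only if the result is reused downstream.
joined = events.repartition("order_id").join(orders, "order_id")
joined.write.mode("overwrite").parquet("s3://bucket/joined/")
```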
Number Theory
at Number Theory
3 recruiters
Nidhi Mishra
Posted by Nidhi Mishra
Gurugram
5 - 12 yrs
₹10L - ₹40L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more
Job Description – Big Data Architect
Number Theory is looking for an experienced software/data engineer who will focus on owning and rearchitecting dynamic pricing engineering systems.

Job Responsibilities:

  • Evaluate and recommend the Big Data technology stack best suited for the NT AI at scale platform and other products.
  • Lead the team in defining a proper Big Data architecture design.
  • Design and implement features on the NT AI at scale platform using Spark and other Hadoop stack components.
  • Drive significant technology initiatives end to end and across multiple layers of architecture.
  • Provide strong technical leadership in adopting and contributing to open-source technologies related to Big Data across multiple engagements.
  • Design/architect complex, highly available, distributed, failsafe compute systems dealing with data at considerable scale.
  • Identify non-functional requirements (performance, scalability, monitoring, etc.) and incorporate them into the solution.

Requirements:

  • 8+ years of experience implementing high-end software products.
  • Provides technical leadership in the Big Data space (Spark and the Hadoop stack: MapReduce, HDFS, Hive, HBase, Flume, Sqoop, etc.; NoSQL stores like Cassandra and HBase) across engagements, and contributes to open-source Big Data technologies.
  • Rich hands-on experience with Spark, having worked with Spark at larger scale.
  • Visualizes and evangelizes next-generation infrastructure in the Big Data space (batch, near-real-time, and real-time technologies).
  • Passionate about continuous learning, experimenting, applying, and contributing to cutting-edge open-source technologies and software paradigms.
  • Expert-level proficiency in Java and Scala.
  • Strong understanding of and experience with distributed computing frameworks, particularly Apache Hadoop 2.0 (YARN, MapReduce, HDFS) and associated technologies: one or more of Hive, Sqoop, Avro, Flume, Oozie, ZooKeeper, etc. Hands-on experience with Apache Spark and its components (Streaming, SQL, MLlib).
  • Operating knowledge of cloud computing platforms (AWS, Azure).

Good to have:

  • Operating knowledge of different enterprise Hadoop distributions.
  • Good knowledge of design patterns.
  • Experience working within a Linux computing environment and using command-line tools, including knowledge of shell/Python scripting for automating common tasks (see the sketch below).
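
As an example of the shell/Python automation mentioned above, here is a small sketch that wraps a routine HDFS disk-usage check in Python; the path and threshold are hypothetical, and the `hdfs` CLI is assumed to be on PATH.

```python
# Minimal sketch: automating a routine operational check with Python
# instead of an ad-hoc shell one-liner. Path and threshold are hypothetical.
import subprocess
import sys

THRESHOLD_BYTES = 5 * 1024**4  # alert above 5 TiB, hypothetical limit

result = subprocess.run(
    ["hdfs", "dfs", "-du", "-s", "/data/landing"],
    capture_output=True, text=True, check=True,
)
# Summary output format: "<size> <disk-space-consumed> <path>"
size = int(result.stdout.split()[0])
if size > THRESHOLD_BYTES:
    sys.exit(f"/data/landing is {size} bytes; time to compact or archive")
print(f"/data/landing OK at {size} bytes")
```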
Mobile Programming LLC
at Mobile Programming LLC
1 video
34 recruiters
keerthi varman
Posted by keerthi varman
Bengaluru (Bangalore)
3 - 8 yrs
₹10L - ₹14L / yr
Oracle SQL Developer
PL/SQL
ETL
Informatica
Data Warehouse (DWH)
+4 more
The role and responsibilities of an Oracle / PL/SQL Developer and Database Administrator:

• Working knowledge of XML, JSON, shell, and other DBMS scripts.
• Hands-on experience with Oracle 11g and 12c; working knowledge of Oracle 18c and 19c.
• Analysis, design, coding, testing, debugging, and documentation, with complete knowledge of the Software Development Life Cycle (SDLC).
• Writing complex queries, stored procedures, functions, and packages (see the sketch after this list).
• Knowledge of REST services, UTL functions, DBMS functions, and data integration is required.
• Good knowledge of table-level partitions and row locks, and experience in OLTP.
• Should be aware of ETL tools, data migration, and data mapping functionalities.
• Understand the business requirement and transform/design it into business solutions; perform data modelling and implement the business rules using Oracle database objects.
• Define source-to-target data mapping and data transformation logic as per the business need.
• Should have worked on materialized view creation and maintenance; experience in performance tuning and impact analysis required.
• Monitoring and optimizing the performance of the database; planning for backup and recovery of database information; maintaining archived data; backing up and restoring databases.
• Hands-on experience with SQL Developer.
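
To make the PL/SQL-from-application angle concrete, below is a minimal sketch using the python-oracledb driver to call a stored procedure and query a partitioned table; the credentials, DSN, and object names are hypothetical.

```python
# Minimal sketch: calling a PL/SQL procedure and reading a partitioned
# table from Python with python-oracledb. Credentials, DSN, and object
# names are hypothetical.
import oracledb

conn = oracledb.connect(user="etl", password="***", dsn="dbhost/ORCLPDB1")
cur = conn.cursor()

# callproc maps directly onto a stored procedure's positional parameters.
cur.callproc("pkg_orders.refresh_daily_mv", ["2024-01-31"])
conn.commit()

# Bind variables keep the SQL shareable in the library cache.
cur.execute(
    "SELECT order_id, amount FROM orders PARTITION (p_2024_01) "
    "WHERE amount > :minamt",
    minamt=1000,
)
for order_id, amount in cur.fetchmany(5):
    print(order_id, amount)
conn.close()
```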
IDfy
at IDfy
6 recruiters
Stuti Srivastava
Posted by Stuti Srivastava
Mumbai
3 - 10 yrs
₹15L - ₹45L / yr
Data Warehouse (DWH)
Informatica
ETL
ETL architecture
Responsive Design
+4 more

Who is IDfy?

 

IDfy is the Fintech ScaleUp of the Year 2021. We build technology products that identify people accurately. This helps businesses prevent fraud and engage with the genuine with the least amount of friction. If you have opened an account with HDFC Bank, ordered from Amazon or Zomato, transacted through Paytm or BharatPe, or played on Dream11 or MPL, you might have already experienced IDfy without even knowing it. Well…that's just how we roll.

Global credit rating giant TransUnion is an investor in IDfy, as are international venture capitalists like MegaDelta Capital, BEENEXT, and Dream Incubator. Blume Ventures is an early investor and continues to place its faith in us.

We have kept our 500 clients safe from fraud while helping the honest get the opportunities they deserve. Our 350-people-strong family works and plays out of our offices in suburban Mumbai. IDfy has run verifications on 100 million people, and in the next 2 years we want to touch a billion users. If you wish to be part of this journey filled with lots of action and learning, we welcome you to be part of the team!

 

What are we looking for?

 

As a senior software engineer in the Data Fabric POD, you will be responsible for producing and implementing functional software solutions. You will work with upper management to define software requirements and take the lead on operational and technical projects. You will work with a data management and science platform that provides Data as a Service (DaaS) and Insight as a Service (IaaS) to internal employees and external stakeholders.

 

You are an eager-to-learn, technology-agnostic engineer who loves working with data and drawing insights from it. You have excellent organization and problem-solving skills and are looking to build the tools of the future. You have exceptional communication and leadership skills and the ability to make quick decisions.

 

YOE: 3 - 10 yrs

Position: Sr. Software Engineer/Module Lead/Technical Lead

 

Responsibilities:

  • Work break-down and orchestrating the development of components for each sprint.
  • Identifying risks and forming contingency plans to mitigate them.
  • Liaising with team members, management, and clients to ensure projects are completed to standard.
  • Inventing new approaches to detecting existing fraud. You will also stay ahead of the game by predicting future fraud techniques and building solutions to prevent them.
  • Developing Zero Defect Software that is secured, instrumented, and resilient.
  • Creating design artifacts before implementation.
  • Developing Test Cases before or in parallel with implementation.
  • Ensuring software developed passes static code analysis, performance, and load test.
  • Developing various kinds of components (such as UI components, APIs, business components, image processing, etc.) that define the IDfy platforms, which drive cutting-edge fraud detection and analytics.
  • Developing software using Agile Methodology and tools that support the same.

 

Requirements:

  • Apache Beam, ClickHouse, Grafana, InfluxDB, Elixir, BigQuery, Logstash.
  • An understanding of product development methodologies.
  • Strong understanding of relational databases, especially SQL, and hands-on experience with OLAP.
  • Experience in the creation of data ingestion and ETL pipelines (Apache Beam or Apache Airflow experience is good to have; see the Beam sketch after this list).
  • Strong design skills in defining API data contracts / OOAD / microservices / data models.
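
Since Apache Beam heads the stack list, here is a minimal batch Beam pipeline in Python that counts events per type, mirroring a simple ingestion/aggregation step; the input file and field layout are assumptions for illustration.

```python
# Minimal sketch: a batch Apache Beam pipeline that counts events per
# type from newline-delimited JSON. Input path and fields are hypothetical.
import json

import apache_beam as beam

with beam.Pipeline() as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("events.jsonl")    # hypothetical input
        | "Parse" >> beam.Map(json.loads)
        | "KeyByType" >> beam.Map(lambda e: (e["event_type"], 1))
        | "Count" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda k, n: f"{k}\t{n}")
        | "Write" >> beam.io.WriteToText("counts")
    )
```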

 

Good to have:

  • Experience with time-series DBs (we use InfluxDB) and alerting / anomaly detection frameworks.
  • Visualization layers: Metabase, Power BI, Tableau.
  • Experience in developing software in the Cloud such as GCP / AWS.
  • A passion to explore new technologies and express yourself through technical blogs.
Nascentvision
at Nascentvision
1 recruiter
Shanu Mohan
Posted by Shanu Mohan
Gurugram, Mumbai, Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹17L / yr
Python
PySpark
Amazon Web Services (AWS)
Spark
Scala
+2 more
  • Hands-on experience with any cloud platform
  • Versed in Spark, Scala/Python, and SQL
  • Microsoft Azure experience
  • Experience working on real-time data processing pipelines
Bengaluru (Bangalore)
3 - 8 yrs
₹13L - ₹22L / yr
ETL
IDQ
  • Participate in planning, implementation of solutions, and transformation programs from a legacy system to a cloud-based system.
  • Work with the team on analysis and high-level and low-level design for solutions using ETL- or ELT-based approaches and DB services in RDS.
  • Work closely with the architect and engineers to design systems that effectively reflect business needs, security requirements, and service-level requirements.
  • Own deliverables related to design and implementation.
  • Own sprint tasks and drive the team towards the goal while understanding the change and release process defined by the organization.
  • Excellent communication skills, particularly when presenting complex findings to various levels of the organization.
  • Ability to integrate research and best practices into problem avoidance and continuous improvement.
  • Must be able to perform as an effective member of a team-oriented environment, maintain a positive attitude, and achieve desired results while working with minimal supervision.


Basic Qualifications:

  • Minimum of 5+ years of technical work experience in the implementation of complex, large-scale, enterprise-wide projects, including analysis, design, core development, and delivery.
  • Minimum of 3+ years of experience with expertise in Informatica ETL, Informatica PowerCenter, and Informatica Data Quality.
  • Experience with the Informatica MDM tool is good to have.
  • Should be able to understand the scope of the work and ask for clarifications.
  • Should have advanced SQL skills, including complex PL/SQL coding skills.
  • Knowledge of Agile is a plus.
  • Well-versed with SOAP web services and REST APIs.
  • Hands-on development using Java would be a plus.

Saama Technologies
at Saama Technologies
6 recruiters
Sandeep Chaudhary
Posted by Sandeep Chaudhary
Pune
2 - 5 yrs
₹1L - ₹18L / yr
Hadoop
Spark
Apache Hive
Apache Flume
skill iconJava
+5 more
Description:

  • Deep experience with and understanding of Apache Hadoop and surrounding technologies required; experience with Spark, Impala, Hive, Flume, Parquet, and MapReduce.
  • Strong understanding of development languages including Java, Python, Scala, and shell scripting.
  • Expertise in Apache Spark 2.x framework principles and usage; should be proficient in developing Spark batch and streaming jobs in Python, Scala, or Java.
  • Proven experience in performance tuning of Spark applications, from both the application-code and configuration perspectives.
  • Should be proficient in Kafka and its integration with Spark.
  • Should be proficient in Spark SQL and data warehousing techniques using Hive (see the sketch below).
  • Very proficient in Unix shell scripting and operating on Linux.
  • Should have knowledge of cloud-based infrastructure.
  • Good experience in tuning Spark applications and driving performance improvements.
  • Strong understanding of data profiling concepts and the ability to operationalize analyses into design and development activities.
  • Experience with software development best practices: version control systems, automated builds, etc.
  • Experienced in, and able to lead, all phases of the Software Development Life Cycle on any project (feasibility planning, analysis, development, integration, test, and implementation).
  • Capable of working within a team or as an individual.
  • Experience creating technical documentation.
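
To ground the Spark SQL and Hive points above, here is a minimal PySpark batch sketch with Hive support enabled, loading an aggregate into a partitioned warehouse table; the database, table, and partition names are hypothetical.

```python
# Minimal sketch: a Spark batch job with Hive support enabled, writing an
# aggregate into a partitioned Hive table. Table names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-warehouse-load")
    .enableHiveSupport()   # lets Spark SQL read/write Hive tables
    .getOrCreate()
)

# Static partition load: the target table is assumed partitioned by dt.
spark.sql("""
    INSERT OVERWRITE TABLE warehouse.daily_clicks PARTITION (dt='2024-01-31')
    SELECT user_id, COUNT(*) AS clicks
    FROM staging.click_events
    WHERE dt = '2024-01-31'
    GROUP BY user_id
""")
```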