Data engineering Jobs in Chennai


Apply to 9+ Data engineering Jobs in Chennai on CutShort.io. Explore the latest Data engineering Job opportunities across top companies like Google, Amazon & Adobe.

Tredence
Posted by Rohit S
Chennai, Pune, Bengaluru (Bangalore), Gurugram
11 - 16 yrs
₹20L - ₹32L / yr
Data Warehouse (DWH)
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Data engineering
Data migration
+1 more
• Engages with the leadership of Tredence's clients to identify critical business problems, define the need for data engineering solutions, and build the strategy and roadmap
• Possesses wide exposure to the complete data lifecycle, from creation to consumption
• Has built repeatable tools/data models to solve specific business problems
• Should have hands-on experience on projects (either as a consultant or within a company) that required them to:
o Provide consultation to senior client personnel
o Implement and enhance data warehouses or data lakes
o Work with business teams, or as part of the team, on process re-engineering driven by data analytics/insights
• Should have a deep appreciation of how data can be used in decision-making
• Should have perspective on newer ways of solving business problems, e.g. external data, innovative techniques, newer technology
• Must have a solution-creation mindset and the ability to design and enhance scalable data platforms to address business needs
• Working experience with data engineering tools on one or more cloud platforms: Snowflake, AWS/Azure/GCP
• Engage with technology teams from Tredence and clients to create last-mile connectivity for the solutions
o Should have experience working with technology teams
• Demonstrated thought leadership: articles, white papers, interviews
Mandatory Skills: Program Management, Data Warehouse, Data Lake, Analytics, Cloud Platform
Tredence
Bengaluru (Bangalore), Pune, Gurugram, Chennai
8 - 12 yrs
₹12L - ₹30L / yr
Snowflake schema
Snowflake
SQL
Data modeling
Data engineering
+1 more

JOB DESCRIPTION: THE IDEAL CANDIDATE WILL:

• Ensure new features and subject areas are modelled to integrate with existing structures and provide a consistent view. Develop and maintain documentation of the data architecture, data flows and data models of the data warehouse, appropriate for various audiences. Provide direction on the adoption of cloud technologies (Snowflake) and industry best practices in the field of data warehouse architecture and modelling.

• Provide technical leadership to large enterprise-scale projects. You will also be responsible for preparing estimates and defining technical solutions for proposals (RFPs). This role requires a broad range of skills and the ability to step into different roles depending on the size and scope of the project.

ELIGIBILITY CRITERIA: Desired Experience/Skills:
• Must have 5+ years total in IT, with 2+ years' experience working as a Snowflake Data Architect and 4+ years in Data Warehouse, ETL and BI projects.
• Must have at least two end-to-end implementations of the Snowflake cloud data warehouse and three end-to-end on-premise data warehouse implementations, preferably on Oracle.

• Expertise in Snowflake: data modelling, ELT using Snowflake SQL, implementing complex stored procedures, and standard DWH and ETL concepts
• Expertise in advanced Snowflake concepts such as setting up resource monitors, RBAC controls, virtual warehouse sizing, query performance tuning, zero-copy clone and time travel, and an understanding of how to use these features (a minimal sketch follows this list)
• Expertise in deploying Snowflake features such as data sharing, events and lake-house patterns
• Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, and big data modelling techniques using Python
• Experience in Data Migration from RDBMS to Snowflake cloud data warehouse
• Deep understanding of relational as well as NoSQL data stores, methods and approaches (star and snowflake, dimensional modelling)
• Experience with data security and data access controls and design
• Experience with AWS or Azure data storage and management technologies such as S3 and ADLS
• Build processes supporting data transformation, data structures, metadata, dependency and workload management
• Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting
• Provide resolution to an extensive range of complicated data pipeline related problems, proactively and as issues surface
• Must have expertise in AWS or Azure Platform as a Service (PAAS)
• Certified Snowflake cloud data warehouse Architect (Desirable)
• Should be able to troubleshoot problems across infrastructure, platform and application domains.
• Must have experience with Agile development methodologies
• Strong written communication skills; effective and persuasive in both written and oral communication
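
For illustration, a minimal sketch of the zero-copy clone and time-travel features named above, run through the snowflake-connector-python package; the account, credentials, warehouse and table names are placeholders, not details from this posting:

```python
import snowflake.connector

# Placeholder connection details; substitute real account credentials.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="PUBLIC",
)
cur = conn.cursor()

# Zero-copy clone: a writable copy that shares storage with the original.
cur.execute("CREATE OR REPLACE TABLE orders_clone CLONE orders")

# Time travel: query the table as it existed one hour (3600 s) ago.
cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")
print(cur.fetchone())

cur.close()
conn.close()
```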

Nice-to-have Skills/Qualifications:
• Bachelor's and/or master's degree in Computer Science, or equivalent experience.
• Strong communication, analytical and problem-solving skills with a high attention to detail.

 

About you:
• You are self-motivated, collaborative, eager to learn, and hands on
• You love trying out new apps, and find yourself coming up with ideas to improve them
• You stay ahead with all the latest trends and technologies
• You are particular about following industry best practices and have high standards regarding quality

DFCS Technologies
Agency job
via DFCS Technologies by SheikDawood Ali
Remote, Chennai, Anywhere India
1 - 5 yrs
₹9L - ₹14L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+5 more
  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

  • Advanced working SQL knowledge and experience with relational databases, including query authoring (SQL) and working familiarity with a variety of databases.
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • We are looking for a candidate with 5+ years of experience in a Data Engineer role, who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:
    • Big data tools: Hadoop, Spark, Kafka, etc.
    • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
    • Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (a minimal Airflow sketch follows this list)
    • Experience with AWS cloud services: EC2, EMR, RDS, Redshift
    • Experience with stream-processing systems: Storm, Spark-Streaming, etc.
    • Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
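
As a hedged illustration of the workflow-management item above, a minimal Airflow DAG sketch; the dag_id, schedule and task bodies are placeholders, not part of the posting:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull raw data from a source system.
    print("extracting...")


def load():
    # Placeholder: write transformed data to the warehouse.
    print("loading...")


with DAG(
    dag_id="example_daily_etl",       # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task         # run extract before load
```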
Mobile Programming India Pvt Ltd

Posted by Pawan Tiwari
Remote, Bengaluru (Bangalore), Chennai, Pune, Gurugram, Mohali, Dehradun
4 - 7 yrs
₹10L - ₹15L / yr
Data engineering
Data Engineer
Django
Python

We are looking for a Data Engineer for our own organization.

Notice Period: 15-30 days
CTC: up to ₹15 LPA

 

Preferred Technical Expertise 

  1. Expertise in Python programming.
  2. Proficiency with the Pandas/NumPy libraries (a minimal sketch follows this list).
  3. Experience with the Django framework and API development.
  4. Proficiency in writing complex SQL queries.
  5. Hands-on experience with Apache Airflow.
  6. Experience with source code versioning tools such as Git, Bitbucket, etc.
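
A minimal Pandas/NumPy sketch of the kind of work items 1 and 2 describe; the file and column names are illustrative placeholders:

```python
import numpy as np
import pandas as pd

df = pd.read_csv("orders.csv")              # placeholder input file

df = df.drop_duplicates()                   # drop exact duplicate rows
df["amount"] = df["amount"].fillna(0.0)     # fill missing amounts
df["log_amount"] = np.log1p(df["amount"])   # derived feature, well-defined at 0

# A simple aggregate ready to expose through an API (e.g. a Django view).
summary = df.groupby("region")["amount"].sum().reset_index()
print(summary.head())
```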

 Good to have Skills:

  1. Experience creating and maintaining optimal data pipeline architecture.
  2. Experience handling large structured data.
  3. Demonstrated ability in solutions covering data ingestion, data cleansing, ETL, data mart creation and exposing data to consumers.
  4. Experience with any cloud platform (GCP is a plus).
  5. Experience with jQuery, HTML, JavaScript, CSS is a plus.
If interested, kindly share your CV.
VIMANA

Posted by Loshy Chandran
Remote, Chennai
2 - 5 yrs
₹10L - ₹20L / yr
Data engineering
Data Engineer
Apache Kafka
Big Data
Java
+4 more

We are looking for passionate, talented and super-smart engineers to join our product development team. If you are someone who innovates, loves solving hard problems, and enjoys end-to-end product development, then this job is for you! You will be working with some of the best developers in the industry in a self-organising, agile environment where talent is valued over job title or years of experience.

 

Responsibilities:

  • You will be involved in end-to-end development of VIMANA technology, adhering to our development practices and expected quality standards.
  • You will be part of a highly collaborative Agile team which passionately follows SAFe Agile practices, including pair-programming, PR reviews, TDD, and Continuous Integration/Delivery (CI/CD).
  • You will work with cutting-edge technologies and tools for stream processing in Java, NodeJS and Python, using frameworks like Spring and RxJS.
  • You will be leveraging big data technologies like Kafka, Elasticsearch and Spark, processing more than 10 Billion events per day to build a maintainable system at scale.
  • You will be building Domain Driven APIs as part of a micro-service architecture.
  • You will be part of a DevOps culture where you will get to work with production systems, including operations, deployment, and maintenance.
  • You will have an opportunity to continuously grow and build your capabilities, learning new technologies, languages, and platforms.

 

Requirements:

  • Undergraduate degree in Computer Science or a related field, or equivalent practical experience.
  • 2 to 5 years of product development experience.
  • Experience building applications using Java, NodeJS, or Python.
  • Deep knowledge in Object-Oriented Design Principles, Data Structures, Dependency Management, and Algorithms.
  • Working knowledge of message queuing, stream processing, and highly scalable Big Data technologies (a minimal Kafka consumer sketch follows this list).
  • Experience in working with Agile software methodologies (XP, Scrum, Kanban), TDD and Continuous Integration (CI/CD).
  • Experience using NoSQL databases like MongoDB or Elasticsearch.
  • Prior experience with container orchestrators like Kubernetes is a plus.
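
As a hedged illustration of the stream-processing requirement, a minimal consumer sketch using the kafka-python package; the posting's stack centres on Java/NodeJS, so this Python version only shows the concept, and the broker address, topic and event fields are placeholders:

```python
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "machine-events",                        # placeholder topic
    bootstrap_servers=["localhost:9092"],    # placeholder broker
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="example-group",                # placeholder consumer group
)

for message in consumer:
    event = message.value
    # Placeholder processing step: filter and surface interesting events.
    if event.get("severity") == "critical":
        print(f"critical event at offset {message.offset}: {event}")
```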
About VIMANA

We build products and platforms for the Industrial Internet of Things. Our technology is being used around the world in mission-critical applications - from improving the performance of manufacturing plants, to making electric vehicles safer and more efficient, to making industrial equipment smarter.

Please visit https://govimana.com/ to learn more about what we do.

Why Explore a Career at VIMANA
  • We recognize that our dedicated team members make us successful and we offer competitive salaries.
  • We are a workplace that values work-life balance and provides flexible working hours and full-time remote work options.
  • You will be part of a team that is highly motivated to learn and work on cutting edge technologies, tools, and development practices.
  • Bon Appetit! Enjoy catered breakfasts, lunches and free snacks!

VIMANA Interview Process
We usually aim to complete all the interviews within a week and provide prompt feedback to the candidate. For now, all interviews are conducted online due to the COVID situation.

1. Telephonic screening (30 min)

A 30-minute telephonic interview to understand and evaluate the candidate's fit with the job role and the company, clarify any queries regarding the job/company, and give an overview of further interview rounds.

2. Technical Rounds

This is a deep technical round to evaluate the candidate's technical capability pertaining to the job role.

3. HR Round

The candidate's team and cultural fit will be evaluated during this round.

We would proceed with releasing the offer if the candidate clears all the above rounds.

Note: In certain cases, we might schedule additional rounds if needed before releasing the offer.
Agilisium
Agency job
via Recruiting India by Moumita Santra
Chennai
10 - 19 yrs
₹12L - ₹40L / yr
Big Data
Apache Spark
Spark
PySpark
ETL
+1 more

Job Sector: IT, Software

Job Type: Permanent

Location: Chennai

Experience: 10 - 20 Years

Salary: 12 – 40 LPA

Education: Any Graduate

Notice Period: Immediate

Key Skills: Python, Spark, AWS, SQL, PySpark

Contact at triple eight two zero nine four two double seven

 

Job Description:

Requirements

  • Minimum 12 years' experience.
  • In-depth understanding of and knowledge about distributed computing with Spark.
  • Deep understanding of Spark architecture and internals.
  • Proven experience in data ingestion, data integration and data analytics with Spark, preferably PySpark (a minimal sketch follows this list).
  • Expertise in ETL processes, data warehousing and data lakes.
  • Hands-on with Python for big data and analytics.
  • Hands-on experience in the Agile Scrum model is an added advantage.
  • Knowledge of CI/CD and orchestration tools is desirable.
  • Knowledge of AWS S3, Redshift and Lambda is preferred.
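
A minimal PySpark sketch of the ingestion-and-analytics pattern the list above asks for; the S3 paths and column names are illustrative placeholders:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Ingest raw CSV data from a placeholder S3 path.
orders = (
    spark.read.option("header", True)
    .option("inferSchema", True)
    .csv("s3://example-bucket/raw/orders/")
)

# Aggregate completed orders into daily revenue.
daily_revenue = (
    orders.filter(F.col("status") == "COMPLETE")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# Land the curated result back in S3 (placeholder path).
daily_revenue.write.mode("overwrite").parquet(
    "s3://example-bucket/curated/daily_revenue/"
)
```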
Thanks
1CH

Posted by Sathish Sukumar
Chennai, Bengaluru (Bangalore), Hyderabad, NCR (Delhi | Gurgaon | Noida), Mumbai, Pune
4 - 15 yrs
₹10L - ₹25L / yr
Data engineering
Data engineer
ETL
SSIS
ADF
+3 more
  • Expertise in designing and implementing enterprise-scale database (OLTP) and data warehouse solutions.
  • Hands-on experience implementing Azure SQL Database, Azure SQL Data Warehouse (Azure Synapse Analytics) and big data processing using Azure Databricks and Azure HDInsight.
  • Expert in T-SQL programming for complex stored procedures, functions, views and query optimization (a minimal connection sketch follows this list).
  • Should be aware of database development for both on-premise and SaaS applications using SQL Server and PostgreSQL.
  • Experience in ETL and ELT implementations using Azure Data Factory V2 and SSIS.
  • Experience and expertise in building machine learning models using logistic and linear regression, decision tree and random forest algorithms.
  • PolyBase queries for exporting and importing data into Azure Data Lake.
  • Building data models, both tabular and multidimensional, using SQL Server Data Tools.
  • Writing data preparation, cleaning and processing steps using Python, Scala and R.
  • Programming experience using the Python libraries NumPy, Pandas and Matplotlib.
  • Implementing NoSQL databases and writing queries using Cypher.
  • Designing end-user visualizations using Power BI, QlikView and Tableau.
  • Experience working with all versions of SQL Server 2005/2008/2008R2/2012/2014/2016/2017/2019.
  • Experience using the expression languages MDX and DAX.
  • Experience migrating on-premise SQL Server databases to Microsoft Azure.
  • Hands-on experience using Azure Blob Storage, Azure Data Lake Storage Gen1 and Azure Data Lake Storage Gen2.
  • Performance tuning complex SQL queries; hands-on experience using SQL Extended Events.
  • Data modeling using Power BI for ad-hoc reporting.
  • Raw data load automation using T-SQL and SSIS.
  • Expert in migrating existing on-premise databases to SQL Azure.
  • Experience using U-SQL for Azure Data Lake Analytics.
  • Hands-on experience generating SSRS reports using MDX.
  • Experience designing predictive models using Python and SQL Server.
  • Developing machine learning models using Azure Databricks and SQL Server.
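
For illustration, a minimal sketch of querying Azure SQL Database with T-SQL from Python via pyodbc and pandas; the server, database, credentials and table are placeholders, not details from this posting:

```python
import pandas as pd
import pyodbc

# Placeholder Azure SQL connection string; substitute real values.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;"
    "DATABASE=SalesDB;"
    "UID=my_user;PWD=my_password;"
    "Encrypt=yes;TrustServerCertificate=no;"
)

# T-SQL with a window function, in the spirit of the query-tuning items above.
query = """
SELECT region,
       order_date,
       SUM(amount) OVER (PARTITION BY region ORDER BY order_date) AS running_total
FROM dbo.Orders;
"""

df = pd.read_sql(query, conn)
print(df.head())
conn.close()
```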
Bungee Tech India
Posted by Abigail David
Remote, NCR (Delhi | Gurgaon | Noida), Chennai
5 - 10 yrs
₹10L - ₹30L / yr
Big Data
Hadoop
Apache Hive
Spark
ETL
+3 more

Company Description

At Bungee Tech, we help retailers and brands meet customers everywhere, on every occasion. We believe that accurate, high-quality data matched with compelling market insights empowers retailers and brands to keep their customers at the center of all the innovation and value they deliver.

 

We provide a clear and complete omnichannel picture of their competitive landscape to retailers and brands. We collect billions of data points every day and multiple times in a day from publicly available sources. Using high-quality extraction, we uncover detailed information on products or services, which we automatically match, and then proactively track for price, promotion, and availability. Plus, anything we do not match helps to identify a new assortment opportunity.

 

Empowered with this unrivalled intelligence, we unlock compelling analytics and insights that, once blended with verified partner data from trusted sources such as Nielsen, paint a complete, consolidated picture of the competitive landscape.

We are looking for a Big Data Engineer who will work on collecting, storing, processing and analyzing huge data sets. The primary focus will be on choosing optimal solutions for these purposes, then maintaining, implementing and monitoring them.

You will also be responsible for integrating them with the architecture used in the company.

 

We're working on the future. If you are seeking an environment where you can drive innovation, if you want to apply state-of-the-art software technologies to solve real-world problems, and if you want the satisfaction of providing visible benefit to end users in an iterative, fast-paced environment, this is your opportunity.

 

Responsibilities

As an experienced member of the team, in this role, you will:

 

  • Contribute to evolving the technical direction of analytical systems and play a critical role in their design and development

 

  • You will research, design and code, troubleshoot and support. What you create is also what you own.

 

  • Develop the next generation of automation tools for monitoring and measuring data quality, with associated user interfaces.

 

  • Be able to broaden your technical skills and work in an environment that thrives on creativity, efficient execution, and product innovation.

 

BASIC QUALIFICATIONS

  • Bachelor’s degree or higher in an analytical area such as Computer Science, Physics, Mathematics, Statistics, Engineering or similar.
  • 5+ years of relevant professional experience in Data Engineering and Business Intelligence
  • 5+ years with advanced SQL (analytical functions), ETL and Data Warehousing.
  • Strong knowledge of data warehousing concepts, including data warehouse technical architectures, infrastructure components, ETL/ ELT and reporting/analytic tools and environments, data structures, data modeling and performance tuning.
  • Ability to effectively communicate with both business and technical teams.
  • Excellent coding skills in Java, Python, C++, or equivalent object-oriented programming language
  • Understanding of relational and non-relational databases and basic SQL
  • Proficiency with at least one of these scripting languages: Perl / Python / Ruby / shell script

 

PREFERRED QUALIFICATIONS

 

  • Experience with building data pipelines from application databases.
  • Experience with AWS services: S3, Redshift, Spectrum, EMR, Glue, Athena, ELK, etc. (a minimal Athena sketch follows this list)
  • Experience working with Data Lakes.
  • Experience providing technical leadership and mentoring other engineers on best practices in the data engineering space
  • Sharp problem-solving skills and the ability to resolve ambiguous requirements
  • Experience working with Big Data
  • Knowledge of and experience working with Hive and the Hadoop ecosystem
  • Knowledge of Spark
  • Experience working with Data Science teams
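
As a hedged illustration of the Athena item above, a minimal boto3 sketch that starts a query and polls for its result; the region, database, table and S3 output location are placeholders:

```python
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")  # placeholder region

# Start an Athena query over a placeholder database and table.
execution = athena.start_query_execution(
    QueryString="SELECT product_id, COUNT(*) AS n FROM prices GROUP BY product_id",
    QueryExecutionContext={"Database": "retail_db"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(rows[:5])
```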
Chennai, Coimbatore, Madurai
5 - 10 yrs
₹12L - ₹19L / yr
Apache Spark
HiveQL
Amazon Web Services (AWS)
Data engineering
JSON
+2 more
  • Must have experience leading teams and driving customer interactions
  • Must have multiple successful deployment user stories
  • Extensive hands-on experience in Apache Spark along with HiveQL (a minimal sketch follows this list)
  • Sound knowledge of Amazon Web Services or any other cloud environment
  • Experienced in data flow orchestration using Apache Airflow
  • Experience with JSON, XML, CSV and Parquet file formats with Snappy compression
  • File movement between HDFS and AWS S3
  • Experience in shell scripting to automate report generation and migration of reports to AWS S3
  • Worked on building data pipelines using Pandas and the Flask framework
  • Good familiarity with Anaconda and Jupyter Notebook
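
A minimal sketch combining several items above: read Snappy-compressed Parquet from HDFS, run a HiveQL-style aggregate through Spark SQL, and write the result to S3; all paths and names are illustrative placeholders:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("example-report")
    .enableHiveSupport()   # lets Spark SQL run HiveQL against the metastore
    .getOrCreate()
)

# Read Parquet from a placeholder HDFS path and register it as a view.
events = spark.read.parquet("hdfs:///data/events/")
events.createOrReplaceTempView("events")

# HiveQL-style aggregate via Spark SQL.
report = spark.sql("""
    SELECT event_type, COUNT(*) AS cnt
    FROM events
    GROUP BY event_type
""")

# Land the Snappy-compressed result in S3 (placeholder bucket).
(report.write
    .mode("overwrite")
    .option("compression", "snappy")
    .parquet("s3a://example-bucket/reports/events/"))
```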