ETL management Jobs in Chennai


Apply to 11+ ETL management Jobs in Chennai on CutShort.io. Explore the latest ETL management Job opportunities across top companies like Google, Amazon & Adobe.

TekClan
Posted by Tanu Shree
Chennai
4 - 7 yrs
Best in industry
MS SQL Server
SQL Programming
SQL
ETL
ETL management

Company - Tekclan Software Solutions

Position – SQL Developer

Experience – Minimum 4 years of experience in MS SQL Server, SQL programming, and ETL development.

Location - Chennai


We are seeking a highly skilled SQL Developer with expertise in MS SQL Server, SSRS, SQL programming, writing stored procedures, and proficiency in ETL using SSIS. The ideal candidate will have a strong understanding of database concepts, query optimization, and data modeling.


Responsibilities:

1. Develop, optimize, and maintain SQL queries, stored procedures, and functions for efficient data retrieval and manipulation.

2. Design and implement ETL processes using SSIS for data extraction, transformation, and loading from various sources.

3. Collaborate with cross-functional teams to gather business requirements and translate them into technical specifications.

4. Create and maintain data models, ensuring data integrity, normalization, and performance.

5. Generate insightful reports and dashboards using SSRS to facilitate data-driven decision making.

6. Troubleshoot and resolve database performance issues, bottlenecks, and data inconsistencies.

7. Conduct thorough testing and debugging of SQL code to ensure accuracy and reliability.

8. Stay up-to-date with emerging trends and advancements in SQL technologies and provide recommendations for improvement.

9. Ability to work independently as an individual contributor.
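Responsibilities 1 and 2 above center on parameterized, set-based queries of the kind a stored procedure would wrap. A minimal sketch of that pattern (table and column names are hypothetical; shown against SQLite purely for portability — a real implementation would be a T-SQL stored procedure on MS SQL Server):

```python
import sqlite3

# Hypothetical schema for illustration only; an actual SQL Server stored
# procedure would wrap a similar set-based query behind a named parameter.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "South", 120.0), (2, "South", 80.0), (3, "North", 50.0)],
)

def total_by_region(conn, region):
    # Set-based aggregation with a bound parameter, as a stored
    # procedure body might do (avoids row-by-row processing).
    row = conn.execute(
        "SELECT SUM(amount) FROM orders WHERE region = ?", (region,)
    ).fetchone()
    return row[0]

print(total_by_region(conn, "South"))  # 200.0
```

The key point for query optimization is the set-based shape: one aggregate query with a parameter, rather than a cursor loop.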


Requirements:

1. Minimum of 4 years of experience in MS SQL Server, SQL programming, and ETL development.

2. Proven experience as a SQL Developer with a strong focus on MS SQL Server.

3. Proficiency in SQL programming, including writing complex queries, stored procedures, and functions.

4. In-depth knowledge of ETL processes and hands-on experience with SSIS.

5. Strong expertise in creating reports and dashboards using SSRS.

6. Familiarity with database design principles, query optimization, and data modeling.

7. Experience with performance tuning and troubleshooting SQL-related issues.

8. Excellent problem-solving skills and attention to detail.

9. Strong communication and collaboration abilities.

10. Ability to work independently and handle multiple tasks simultaneously.


Preferred Skills:

1. Certification in MS SQL Server or related technologies.

2. Knowledge of other database systems such as Oracle or MySQL.

3. Familiarity with data warehousing concepts and tools.

4. Experience with version control systems.

Gurugram, Bengaluru (Bangalore), Chennai
2 - 9 yrs
₹9L - ₹27L / yr
DevOps
Microsoft Windows Azure
gitlab
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Greetings!!

We are looking for a technically driven "MLOps Engineer" for one of our premium clients.

COMPANY DESCRIPTION:
This company is a global management consulting firm and a trusted advisor to the world's leading businesses, governments, and institutions. We work with leading organizations across the private, public and social sectors. Our scale, scope, and knowledge allow us to address problems that no one else can.


Key Skills
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (Kubernetes), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store)
• Practical knowledge of delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or better preferred) and familiarity with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
Agilisium
Agency job
via Recruiting India by Moumita Santra
Chennai
10 - 19 yrs
₹12L - ₹40L / yr
Big Data
Apache Spark
Spark
PySpark
ETL

Job Sector: IT, Software

Job Type: Permanent

Location: Chennai

Experience: 10 - 20 Years

Salary: 12 – 40 LPA

Education: Any Graduate

Notice Period: Immediate

Key Skills: Python, Spark, AWS, SQL, PySpark

Contact at triple eight two zero nine four two double seven

 

Job Description:

Requirements

  • Minimum 12 years' experience
  • In-depth understanding and knowledge of distributed computing with Spark
  • Deep understanding of Spark architecture and internals
  • Proven experience in data ingestion, data integration and data analytics with Spark, preferably PySpark
  • Expertise in ETL processes, data warehousing and data lakes
  • Hands-on with Python for big data and analytics
  • Hands-on experience in the Agile Scrum model is an added advantage
  • Knowledge of CI/CD and orchestration tools is desirable
  • AWS S3, Redshift and Lambda knowledge is preferred
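The ingestion-and-transformation work described above follows the classic extract-transform-load shape. A compact sketch in plain Python (record fields are made up; in PySpark the same steps would be `spark.read` → `filter`/`withColumn` → an aggregating write):

```python
# ETL sketch: extract raw records, transform (clean + type-cast),
# load (aggregate into a target structure). Purely illustrative.
raw = [
    {"user": "a", "amount": "10.5"},
    {"user": "b", "amount": ""},       # dirty row, filtered during transform
    {"user": "a", "amount": "4.5"},
]

def extract(rows):
    # Stand-in for reading from S3 / a source system.
    return iter(rows)

def transform(rows):
    # Drop rows with missing amounts and cast strings to floats.
    for r in rows:
        if r["amount"]:
            yield {"user": r["user"], "amount": float(r["amount"])}

def load(rows):
    # Aggregate per user, standing in for a write to a warehouse table.
    totals = {}
    for r in rows:
        totals[r["user"]] = totals.get(r["user"], 0.0) + r["amount"]
    return totals

print(load(transform(extract(raw))))  # {'a': 15.0}
```

The same three stages map one-to-one onto a PySpark job; the win with Spark is that each stage runs distributed across partitions instead of in a single process.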
AxionConnect Infosolutions Pvt Ltd
Shweta Sharma
Posted by Shweta Sharma
Pune, Bengaluru (Bangalore), Hyderabad, Nagpur, Chennai
5.5 - 7 yrs
₹20L - ₹25L / yr
Django
Flask
Snowflake
Snowflake schema
SQL

Job Location: Hyderabad/Bangalore/ Chennai/Pune/Nagpur

Notice period: Immediate - 15 days

 

1.      Python Developer with Snowflake

 

Job Description :


  1. 5.5+ years of strong Python development experience with Snowflake.
  2. Strong hands-on experience with SQL and the ability to write complex queries.
  3. Strong understanding of how to connect to Snowflake using Python; should be able to handle any type of file.
  4. Development of data analysis and data processing engines using Python.
  5. Good experience in data transformation using Python.
  6. Experience in Snowflake data loads using Python.
  7. Experience in creating user-defined functions in Snowflake.
  8. SnowSQL implementation.
  9. Knowledge of query performance tuning is an added advantage.
  10. Good understanding of data warehouse (DWH) concepts.
  11. Ability to interpret/analyze business requirements and functional specifications.
  12. DBT, Fivetran, and AWS knowledge is good to have.
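Item 6 above (Snowflake data loads from Python) typically means staging a file and issuing `PUT` + `COPY INTO`. A sketch that only builds the SQL text (table and file names are hypothetical; a real job would execute these statements through `snowflake-connector-python`'s `cursor.execute()`):

```python
# Build the two statements a Python loader would issue to load a local CSV
# into a Snowflake table via its table stage (@%table). Names are examples.
def snowflake_load_statements(local_path, table):
    put_stmt = f"PUT file://{local_path} @%{table} AUTO_COMPRESS=TRUE"
    copy_stmt = (
        f"COPY INTO {table} "
        f"FROM @%{table} "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    return [put_stmt, copy_stmt]

for stmt in snowflake_load_statements("/tmp/sales.csv", "SALES"):
    print(stmt)
```

Separating statement construction from execution like this also makes the loader easy to unit-test without a live Snowflake connection.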
Mobile Programming LLC
Sukhdeep Singh
Posted by Sukhdeep Singh
Chennai
4 - 7 yrs
₹13L - ₹15L / yr
Data Analytics
Data Visualization
PowerBI
Tableau
Qlikview

Title: Platform Engineer
Location: Chennai
Work Mode: Hybrid (Remote and Chennai Office)
Experience: 4+ years
Budget: 16 - 18 LPA

Responsibilities:

  • Parse data using Python, create dashboards in Tableau.
  • Utilize Jenkins for Airflow pipeline creation and CI/CD maintenance.
  • Migrate Datastage jobs to Snowflake, optimize performance.
  • Work with HDFS, Hive, Kafka, and basic Spark.
  • Develop Python scripts for data parsing, quality checks, and visualization.
  • Conduct unit testing and web application testing.
  • Implement Apache Airflow and handle production migration.
  • Apply data warehousing techniques for data cleansing and dimension modeling.

Requirements:

  • 4+ years of experience as a Platform Engineer.
  • Strong Python skills, knowledge of Tableau.
  • Experience with Jenkins, Snowflake, HDFS, Hive, and Kafka.
  • Proficient in Unix Shell Scripting and SQL.
  • Familiarity with ETL tools like DataStage and DMExpress.
  • Understanding of Apache Airflow.
  • Strong problem-solving and communication skills.
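The responsibilities above include Python scripts for data parsing and quality checks. A minimal sketch of such a check (column names and rules are hypothetical examples, not the employer's actual pipeline):

```python
# Data-quality sketch: count rows with missing required fields and
# duplicate primary keys, of the kind a pre-load validation step would run.
def quality_report(rows, required=("id", "ts")):
    report = {"rows": len(rows), "missing": 0, "duplicate_ids": 0}
    seen = set()
    for r in rows:
        if any(not r.get(k) for k in required):
            report["missing"] += 1          # empty or absent required field
        if r.get("id") in seen:
            report["duplicate_ids"] += 1    # primary-key collision
        seen.add(r.get("id"))
    return report

rows = [
    {"id": 1, "ts": "2024-01-01"},
    {"id": 1, "ts": "2024-01-02"},   # duplicate id
    {"id": 2, "ts": ""},             # missing timestamp
]
print(quality_report(rows))  # {'rows': 3, 'missing': 1, 'duplicate_ids': 1}
```

In an Airflow setting, a check like this would run as its own task and fail the DAG run when the report exceeds agreed thresholds.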

Note: Only candidates willing to work in Chennai and available for immediate joining will be considered. Budget for this position is 16 - 18 LPA.

Aureus Tech Systems
Naveen Yelleti
Posted by Naveen Yelleti
Kolkata, Hyderabad, Chennai, Bengaluru (Bangalore), Bhubaneswar, Visakhapatnam, Vijayawada, Trichur, Thiruvananthapuram, Mysore, Delhi, Noida, Gurugram, Nagpur
1 - 7 yrs
₹4L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark

Skills and requirements

  • Experience analyzing complex and varied data in a commercial or academic setting.
  • Desire to solve new and complex problems every day.
  • Excellent ability to communicate scientific results to both technical and non-technical team members.


Desirable

  • A degree in a numerically focused discipline such as Maths, Physics, Chemistry, Engineering or Biological Sciences.
  • Hands-on experience with Python, PySpark and SQL.
  • Hands-on experience building end-to-end data pipelines.
  • Hands-on experience with Azure Data Factory, Azure Databricks and Data Lake is an added advantage.
  • Experience with big data tools: Hadoop, Hive, Sqoop, Spark and Spark SQL.
  • Experience with SQL or NoSQL databases for data retrieval and management.
  • Experience in data warehousing and business intelligence tools, techniques and technology, as well as experience diving deep into data analysis or technical issues to come up with effective solutions.
  • BS degree in math, statistics, computer science or an equivalent technical field.
  • Experience in data mining structured and unstructured data (SQL, ETL, data warehouse, machine learning etc.) in a business environment with large-scale, complex data sets.
  • Proven ability to look at solutions in unconventional ways; sees opportunities to innovate and can lead the way.
  • Willingness to learn and work on data science, ML and AI.
Crayon Data
Varnisha Sethupathi
Posted by Varnisha Sethupathi
Chennai
5 - 8 yrs
₹15L - ₹25L / yr
SQL
Python
Analytical Skills
Data modeling
Data Visualization

Role : Senior Customer Scientist 

Experience : 6-8 Years 

Location : Chennai (Hybrid) 
 
 

Who are we? 
 
 

A young, fast-growing AI and big data company, with an ambitious vision to simplify the world’s choices. Our clients are top-tier enterprises in the banking, e-commerce and travel spaces. They use our core AI-based choice engine, maya.ai, to deliver personal digital experiences centered around taste. The maya.ai platform now touches over 125M customers globally. You’ll find Crayon Boxes in Chennai and Singapore. But you’ll find Crayons in every corner of the world. Especially where our client projects are – UAE, India, SE Asia and pretty soon the US. 
 
 

Life in the Crayon Box is a little chaotic, largely dynamic and keeps us on our toes! Crayons are a diverse and passionate bunch. Challenges excite us. Our mission drives us. And good food, caffeine (for the most part) and youthful energy fuel us. Over the last year alone, Crayon has seen a growth rate of 3x, and we believe this is just the start. 
 

 
We’re looking for young and young-at-heart professionals with a relentless drive to help Crayon double its growth. Leaders, doers, innovators, dreamers, implementers and eccentric visionaries, we have a place for you all. 
 

 
 

Can you say “Yes, I have!” to the below? 
 
 

  1. Experience with exploratory analysis, statistical analysis, and model development 
  2. Knowledge of advanced analytics techniques, including predictive modelling (logistic regression), segmentation, forecasting, data mining, and optimizations 
  3. Knowledge of software packages such as SAS, R and RapidMiner for analytical modelling and data management 
  4. Strong experience in SQL/Python/R, working efficiently at scale with large data sets 
  5. Experience in using business intelligence tools such as PowerBI, Tableau and Metabase for business applications 
     

 
 

 

Can you say “Yes, I will!” to the below? 
 
 

  1. Drive clarity and solve ambiguous, challenging business problems using data-driven approaches. Propose and own data analysis (including modelling, coding, analytics) to drive business insight and facilitate decisions. 
  2. Develop creative solutions and build prototypes for business problems using algorithms based on machine learning, statistics, and optimisation, and work with engineering to deploy those algorithms and create impact in production. 
  3. Perform time-series analyses, hypothesis testing, and causal analyses to statistically assess relative impact and extract trends. 
  4. Coordinate individual teams to fulfil client requirements and manage deliverables. 
  5. Communicate and present complex concepts to business audiences. 
  6. Travel to client locations when necessary. 
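The hypothesis-testing work mentioned above often reduces to comparing two groups. A toy sketch of a Welch's t-statistic using only the standard library (the data is invented; real analyses would use scipy/statsmodels and check degrees of freedom and p-values properly):

```python
# Welch's t-statistic for two independent samples, via the stdlib.
# Illustrative only: made-up control/treatment measurements.
from statistics import mean, variance

def welch_t(a, b):
    # Sample variances scaled by group size (unequal variances allowed).
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

control = [10.1, 9.8, 10.3, 10.0, 9.9]
treated = [10.9, 11.2, 10.8, 11.0, 11.1]
print(round(welch_t(treated, control), 2))
```

A large positive statistic here indicates the treated group's mean is well separated from the control's relative to the sampling noise, which is the quantity a significance test then converts into a p-value.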

 

 

Crayon is an equal opportunity employer. Employment is based on a person's merit, qualifications and professional competences. Crayon does not discriminate against any employee or applicant because of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, marital status, pregnancy, or related conditions. 
 
 

More about Crayon: https://www.crayondata.com/ 
 

More about maya.ai: https://maya.ai/ 

 

 

Amazon India
Tanya Thakur
Posted by Tanya Thakur
Chennai
5 - 12 yrs
₹10L - ₹22L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization

BASIC QUALIFICATIONS

 

  • 2+ years' experience in program or project management
  • Project handling experience using Six Sigma/Lean processes
  • Experience interpreting data to make business recommendations
  • Bachelor's degree or higher in Operations, Business, Project Management, or Engineering
  • 5-10 years' experience in project management / customer satisfaction, with a proven success record
  • Understanding of basic and systematic approaches to managing projects/programs
  • Structured problem-solving approach to identify and fix problems
  • Open-minded, creative and proactive thinking
  • A pioneering mindset to invent and make a difference
  • Understanding of customer experience: listening to the customer's voice and working backwards to improve business processes and operations
  • Six Sigma certification

 

PREFERRED QUALIFICATIONS

 

  • Automation skills, with experience in advanced SQL, Python and Tableau
Chennai
5 - 13 yrs
₹9L - ₹28L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
  • Demonstrable experience owning and developing big data solutions, using Hadoop, Hive/HBase, Spark, Databricks and ETL/ELT, for 5+ years
  • 10+ years of information technology experience, preferably with telecom/wireless service providers
  • Experience in designing data solutions following Agile practices (SAFe methodology); designing for testability, deployability and releasability; rapid prototyping, data modeling, and decentralized innovation
  • DataOps mindset: allowing the architecture of a system to evolve continuously over time, while simultaneously supporting the needs of current users
  • Create and maintain the Architectural Runway and non-functional requirements
  • Design for the Continuous Delivery Pipeline (CI/CD data pipeline) and enable built-in quality and security from the start
  • Ability to demonstrate an understanding, and ideally use, of at least one recognised architecture framework or standard, e.g. TOGAF, Zachman Architecture Framework, etc.
  • The ability to apply data, research, and professional judgment and experience to ensure our products are making the biggest difference to consumers
  • Demonstrated ability to work collaboratively
  • Excellent written, verbal and social skills - you will be interacting with all types of people (user experience designers, developers, managers, marketers, etc.)
  • Ability to work in a fast-paced, multiple-project environment on an independent basis and with minimal supervision
  • Technologies: .NET, AWS, Azure; Azure Synapse, NiFi, RDS, Apache Kafka, Azure Databricks, Azure Data Lake Storage, Power BI, Reporting Analytics, QlikView, SQL on-prem data warehouse; BSS, OSS & Enterprise Support Systems

Maveric Systems
Rashmi Poovaiah
Posted by Rashmi Poovaiah
Bengaluru (Bangalore), Chennai, Pune
4 - 10 yrs
₹8L - ₹15L / yr
Big Data
Hadoop
Spark
Apache Kafka
HiveQL

Role Summary/Purpose:

We are looking for Developers/Senior Developers to be part of building an advanced analytical platform leveraging Big Data technologies and transforming the legacy systems. This is an exciting, fast-paced, constantly changing and challenging work environment, and the role will play an important part in resolving and influencing high-level decisions.

 

Requirements:

  • The candidate must be a self-starter who can work under general guidelines in a fast-paced environment.
  • Overall minimum of 4 to 8 years of software development experience, with 2 years of Data Warehousing domain knowledge.
  • Must have 3 years of hands-on working knowledge of Big Data technologies such as Hadoop, Hive, HBase, Spark, Kafka, Spark Streaming, Scala, etc.
  • Excellent knowledge of SQL and Linux shell scripting.
  • Bachelor's/Master's/Engineering degree from a well-reputed university.
  • Strong communication, interpersonal, learning and organizing skills, matched with the ability to manage stress, time and people effectively.
  • Proven experience in coordination of many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment.
  • Ability to manage a diverse and challenging stakeholder community.
  • Diverse knowledge and experience of working on Agile deliveries and Scrum teams.

 

Responsibilities

  • Should work as a senior developer/individual contributor depending on the situation
  • Should be part of Scrum discussions and gather requirements
  • Adhere to the Scrum timeline and deliver accordingly
  • Participate in a team environment for design, development and implementation
  • Should take on L3 activities on a need basis
  • Prepare Unit/SIT/UAT test cases and log the results
  • Coordinate SIT and UAT testing; take feedback and provide necessary remediation/recommendations in time
  • Quality delivery and automation should be a top priority
  • Coordinate change and deployment in time
  • Should create healthy harmony within the team
  • Owns interaction points with members of the core team (e.g. BA team, testing and business teams) and any other relevant stakeholders
Mobile Programming LLC
vandana chauhan
Posted by vandana chauhan
Remote, Chennai
3 - 7 yrs
₹12L - ₹18L / yr
Big Data
Amazon Web Services (AWS)
Hadoop
SQL
Python
Position: Data Engineer  
Location: Chennai- Guindy Industrial Estate
Duration: Full time role
Company: Mobile Programming (https://www.mobileprogramming.com/) 
Client Name: Samsung 


We are looking for a Data Engineer to join our growing team of analytics experts. The hire will be
responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing
data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up.
The Data Engineer will support our software developers, database architects, data analysts and data
scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout
ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple
teams, systems and products.

Responsibilities for Data Engineer
  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

Qualifications for Data Engineer
  • Experience building and optimizing big data ETL pipelines, architectures and data sets.
  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Ability to build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing and highly scalable 'big data' data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.

We are looking for a candidate with 3-6 years of experience in a Data Engineer role, who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:
  • Experience with big data tools: Spark, Kafka, HBase, Hive, etc.
  • Experience with relational SQL and NoSQL databases.
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
  • Experience with stream-processing systems: Storm, Spark Streaming, etc.
  • Experience with object-oriented/functional scripting languages: Python, Java, Scala, etc.

Skills: Big Data, AWS, Hive, Spark, Python, SQL
 