ETL Jobs in Chennai

19+ ETL Jobs in Chennai | ETL Job openings in Chennai

Apply to 19+ ETL Jobs in Chennai on CutShort.io. Explore the latest ETL Job opportunities across top companies like Google, Amazon & Adobe.

VyTCDC
Posted by Gobinath Sundaram
Chennai, Bengaluru (Bangalore), Hyderabad
5 - 10 yrs
₹5L - ₹10L / yr
Google Cloud Platform (GCP)
Teradata
ETL
Data Warehousing

Overview:

We are seeking a talented and experienced GCP Data Engineer with strong expertise in Teradata, ETL, and Data Warehousing to join our team. As a key member of our Data Engineering team, you will play a critical role in developing and maintaining data pipelines, optimizing ETL processes, and managing large-scale data warehouses on the Google Cloud Platform (GCP).

Responsibilities:

  • Design, implement, and maintain scalable ETL pipelines on GCP (Google Cloud Platform).
  • Develop and manage data warehouse solutions using Teradata and cloud-based technologies (BigQuery, Cloud Storage, etc.).
  • Build and optimize high-performance data pipelines for real-time and batch data processing.
  • Integrate, transform, and load large datasets into GCP-based data lakes and data warehouses (see the load-job sketch after this list).
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
  • Write efficient, clean, and reusable code for ETL processes and data workflows.
  • Ensure data quality, consistency, and integrity across all pipelines and storage solutions.
  • Implement data governance practices and ensure security and compliance of data processes.
  • Monitor and troubleshoot data pipeline performance and resolve issues proactively.
  • Participate in the design and implementation of scalable data architectures using GCP services like BigQuery, Cloud Dataflow, and Cloud Pub/Sub.
  • Optimize and automate data workflows for continuous improvement.
  • Maintain up-to-date documentation of data pipeline architectures and processes.
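
For illustration, a minimal sketch of one such load step using the google-cloud-bigquery client library; the project, dataset, and bucket names below are placeholders, not details from this posting:

```python
# Hypothetical batch load: stage Parquet files from Cloud Storage into a
# BigQuery table. All resource names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # assumes default credentials

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

load_job = client.load_table_from_uri(
    "gs://my-bucket/exports/orders/*.parquet",
    "my-project.analytics.orders",
    job_config=job_config,
)
load_job.result()  # block until the load job finishes or raises

table = client.get_table("my-project.analytics.orders")
print(f"Loaded table now has {table.num_rows} rows")
```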

Requirements:

Technical Skills:

  • Google Cloud Platform (GCP): Extensive experience with BigQuery, Cloud Storage, Cloud Dataflow, and Cloud Composer.
  • ETL Tools: Expertise in building ETL pipelines using tools such as Apache NiFi, Apache Beam, or custom Python-based scripts (a minimal Beam sketch follows this list).
  • Data Warehousing: Strong experience working with Teradata for data warehousing, including data modeling, schema design, and performance tuning.
  • SQL: Advanced proficiency in SQL and relational databases, particularly in the context of Teradata and GCP environments.
  • Programming: Proficient in Python, Java, or Scala for building and automating data processes.
  • Data Architecture: Knowledge of best practices in designing scalable data architectures for both structured and unstructured data.
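
A skeletal Apache Beam batch pipeline in the spirit of the ETL Tools item above; paths, schema, and project IDs are placeholders, and a real pipeline would add validation and error handling:

```python
# Illustrative Beam pipeline: read CSV lines from Cloud Storage, parse them,
# and append rows to a BigQuery table. Resource names are placeholders.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_line(line):
    user_id, amount = line.split(",")
    return {"user_id": user_id, "amount": float(amount)}

options = PipelineOptions(
    runner="DataflowRunner", project="my-project",
    region="us-central1", temp_location="gs://my-bucket/tmp",
)

with beam.Pipeline(options=options) as p:
    (p
     | "Read" >> beam.io.ReadFromText("gs://my-bucket/raw/events.csv",
                                      skip_header_lines=1)
     | "Parse" >> beam.Map(parse_line)
     | "Write" >> beam.io.WriteToBigQuery(
           "my-project:analytics.events",
           schema="user_id:STRING,amount:FLOAT",
           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))
```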

Experience:

  • Proven experience as a Data Engineer, with a focus on building and managing ETL pipelines and data warehouse solutions.
  • Hands-on experience in data modeling and working with complex, high-volume data in a cloud-based environment.
  • Experience with data migration from on-premises to cloud environments (Teradata to GCP).
  • Familiarity with Data Lake concepts and technologies.
  • Experience with version control systems like Git and working in Agile environments.
  • Knowledge of CI/CD and automation processes in data engineering.

Soft Skills:

  • Strong problem-solving and troubleshooting skills.
  • Excellent communication skills, both verbal and written, for interacting with technical and non-technical teams.
  • Ability to work collaboratively in a fast-paced, cross-functional team environment.
  • Strong attention to detail and ability to prioritize tasks.

Preferred Qualifications:

  • Experience with other GCP tools such as Dataproc, Bigtable, and Cloud Functions.
  • Knowledge of Terraform or similar infrastructure-as-code tools for managing cloud resources.
  • Familiarity with data governance frameworks and data privacy regulations.
  • Certifications in Google Cloud or Teradata are a plus.

Benefits:

  • Competitive salary and performance-based bonuses.
  • Health, dental, and vision insurance.
  • 401(k) with company matching.
  • Paid time off and flexible work schedules.
  • Opportunities for professional growth and development.


Xebia IT Architects
Posted by Vijay S
Bengaluru (Bangalore), Pune, Hyderabad, Chennai, Gurugram, Bhopal, Jaipur
5 - 15 yrs
₹20L - ₹35L / yr
Spark
ETL
Data Transformation Tool (DBT)
Python
Apache Airflow

We are seeking a highly skilled and experienced Offshore Data Engineer. The role involves designing, implementing, and testing data pipelines and products.


Qualifications & Experience:

  • Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
  • 5+ years of experience in data engineering, with expertise in data architecture and pipeline development.
  • Proven experience with GCP (BigQuery and related services), Databricks, Airflow, Spark, and dbt; a minimal orchestration sketch follows this list.
  • Hands-on experience with ETL processes, SQL, PostgreSQL, MySQL, MongoDB, and Cassandra.
  • Strong proficiency in Python and data modelling.
  • Experience in testing and validation of data pipelines.
  • Preferred: experience with eCommerce systems, data visualization tools (Tableau, Looker), and cloud certifications.
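
As a hedged illustration of the Airflow and dbt items above, a minimal DAG that runs a dbt build after an ingestion step; the DAG id, schedule, and paths are placeholders:

```python
# Hypothetical Airflow DAG: ingest raw data, then run dbt models.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_refresh",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_raw",
        bash_command="python /opt/pipelines/ingest.py",  # placeholder script
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt_project && dbt run --target prod",
    )
    ingest >> transform  # run dbt only after ingestion succeeds
```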


If you meet the above criteria and are interested, please share your updated CV along with the following details:

  • Total Experience:
  • Current CTC:
  • Expected CTC:
  • Current Location:
  • Preferred Location:
  • Notice Period / Last Working Day (if serving notice):

⚠️ Kindly share your details only if you have not applied recently and are not currently in the interview process for any open role at Xebia.

Looking forward to your response!

BigThinkCode Technologies
Chennai
4 - 6 yrs
₹1L - ₹16L / yr
Python
SQL
Spark
ETL
Apache Airflow


We are looking for a skilled Data Engineer to design, build, and maintain robust data pipelines and infrastructure. You will play a pivotal role in optimizing data flow, ensuring scalability, and enabling seamless access to structured and unstructured data across the organization. This role requires technical expertise in Python, SQL, ETL/ELT frameworks, and cloud data warehouses, along with strong collaboration skills to partner with cross-functional teams.


Company: BigThinkCode Technologies


Location: Chennai (Work from office / Hybrid)

Experience: 4 - 6 years


Key Responsibilities:

  • Design, develop, and maintain scalable ETL/ELT pipelines to process structured and unstructured data.
  • Optimize and manage SQL queries for performance and efficiency in large-scale datasets.
  • Work with data warehouse solutions (e.g., Redshift, BigQuery, Snowflake) for analytics and reporting.
  • Collaborate with data scientists, analysts, and business stakeholders to translate requirements into technical solutions.
  • Implement solutions for streaming data (e.g., Apache Kafka, AWS Kinesis); preferred but not mandatory.
  • Ensure data quality, governance, and security across pipelines and storage systems.
  • Document architectures, processes, and workflows for clarity and reproducibility.


Required Technical Skills:

  • Proficiency in Python for scripting, automation, and pipeline development.
  • Expertise in SQL (complex queries, optimization, and database design).
  • Hands-on experience with ETL/ELT tools (e.g., Apache Airflow, dbt, AWS Glue).
  • Experience working with structured data (RDBMS) and unstructured data (JSON, Parquet, Avro).
  • Familiarity with cloud-based data warehouses (Redshift, BigQuery, Snowflake).
  • Knowledge of version control systems (e.g., Git) and CI/CD practices.


Preferred Qualifications:

  • Experience with streaming data technologies (e.g., Kafka, Kinesis, Spark Streaming); a consumer sketch follows this list.
  • Exposure to cloud platforms (AWS, GCP, Azure) and their data services.
  • Understanding of data modelling (dimensional, star schema) and optimization techniques.
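
A minimal consumer loop for the streaming item above, sketched with the kafka-python client; the topic, brokers, and sink path are placeholders:

```python
# Hypothetical streaming-to-staging step: consume JSON click events from
# Kafka and append them to a local staging file for a later batch load.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream-events",                   # placeholder topic
    bootstrap_servers=["broker1:9092"],     # placeholder broker
    group_id="etl-staging",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

with open("/tmp/clickstream_staging.jsonl", "a") as sink:
    for message in consumer:  # blocks, consuming indefinitely
        sink.write(json.dumps(message.value) + "\n")
```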


Soft Skills:

  • Team player with a collaborative mindset and ability to mentor junior engineers.
  • Strong stakeholder management skills to align technical solutions with business goals.
  • Excellent communication skills to explain technical concepts to non-technical audiences.
  • Proactive problem-solving and adaptability in fast-paced environments.


If interested, apply or reply with your updated profile so we can connect and discuss.


Regards

Clients located in Bangalore, Chennai & Pune

Agency job
Bengaluru (Bangalore), Pune, Chennai
3 - 8 yrs
₹8L - ₹16L / yr
ETL
Python
Shell Scripting
Data modeling
Data Warehousing

Role: Ab Initio Developer

Experience: 2.5 (mandatory minimum) to 8 years

Skills: Ab Initio development

Location: Chennai/Bangalore/Pune

Only immediate joiners (notice period up to 15 days).

Candidates must be available for an in-person interview.

This is a long-term contract role with IBM; Arnold is the payrolling company.

JOB DESCRIPTION:

We are seeking a skilled Ab Initio Developer to join our dynamic team and contribute to the development and maintenance of critical data integration solutions. As an Ab Initio Developer, you will be responsible for designing, developing, and implementing robust and efficient data pipelines using Ab Initio's powerful ETL capabilities.


Key Responsibilities:

  • Design, develop, and implement complex data integration solutions using Ab Initio's graphical interface and command-line tools.
  • Analyze complex data requirements and translate them into effective Ab Initio designs.
  • Develop and maintain efficient data pipelines, including data extraction, transformation, and loading processes.
  • Troubleshoot and resolve technical issues related to Ab Initio jobs and data flows.
  • Optimize performance and scalability of Ab Initio jobs.
  • Collaborate with business analysts, data analysts, and other team members to understand data requirements and deliver solutions that meet business needs.
  • Stay up to date with the latest Ab Initio technologies and industry best practices.

Required Skills and Experience:

  • 2.5 to 8 years of hands-on experience in Ab Initio development.
  • Strong understanding of Ab Initio components, including Designer, Conductor, and Monitor.
  • Proficiency in Ab Initio's graphical interface and command-line tools.
  • Experience in data modeling, data warehousing, and ETL concepts.
  • Strong SQL skills and experience with relational databases.
  • Excellent problem-solving and analytical skills.
  • Ability to work independently and as part of a team.
  • Strong communication and documentation skills.

Preferred Skills:

  • Experience with cloud-based data integration platforms.
  • Knowledge of data quality and data governance concepts.
  • Experience with scripting languages (e.g., Python, shell scripting).
  • Certification in Ab Initio or related technologies.

top MNC

Agency job
via Vy Systems by Thirega Thanasekaran
Hyderabad, Chennai
10 - 15 yrs
₹8L - ₹20L / yr
Data engineering
ETL
Data Warehousing
CI/CD
Jenkins

Key Responsibilities:

  • Lead Data Engineering Team: Provide leadership and mentorship to junior data engineers and ensure best practices in data architecture and pipeline design.
  • Data Pipeline Development: Design, implement, and maintain end-to-end ETL (Extract, Transform, Load) processes to support analytics, reporting, and data science activities.
  • Cloud Architecture (GCP): Architect and optimize data infrastructure on Google Cloud Platform (GCP), ensuring scalability, reliability, and performance of data systems.
  • CI/CD Pipelines: Implement and maintain CI/CD pipelines using Jenkins and other tools to ensure the seamless deployment and automation of data workflows.
  • Data Warehousing: Design and implement data warehousing solutions, ensuring optimal performance and efficient data storage using technologies like Teradata, Oracle, and SQL Server.
  • Workflow Orchestration: Use Apache Airflow to orchestrate complex data workflows and scheduling of data pipeline jobs.
  • Automation with Terraform: Implement Infrastructure as Code (IaC) using Terraform to provision and manage cloud resources.

Share your CV to:

Thirega@ vysystems dot com - WhatsApp - 91Five0033Five2Three

codersbrain
Posted by Tanuj Uppal
Remote, Chennai
4 - 8 yrs
₹2L - ₹4L / yr
Bootstrap
HTML/CSS
Data Visualization
ETL

We are seeking a skilled Qlik Developer with 4-5 years of experience in Qlik development to join our team. The ideal candidate will have expertise in QlikView and Qlik Sense, along with strong communication skills for interacting with business stakeholders. Knowledge of other BI tools such as Power BI and Tableau is a plus.


Must-Have Skills:

  • QlikView and Qlik Sense Development: 4-5 years of hands-on experience in developing and maintaining QlikView/Qlik Sense applications and dashboards.
  • Data Visualization: Proficiency in creating interactive reports and dashboards, with a deep understanding of data storytelling.
  • ETL (Extract, Transform, Load): Experience in data extraction from multiple data sources (databases, flat files, APIs) and transforming it into actionable insights.
  • Qlik Scripting: Knowledge of Qlik scripting, set analysis, and expressions to create efficient solutions.
  • Data Modeling: Expertise in designing and implementing data models for reporting and analytics.
  • Stakeholder Communication: Strong communication skills to collaborate with non-technical business users and translate their requirements into effective BI solutions.
  • Troubleshooting and Support: Ability to identify, troubleshoot, and resolve issues related to Qlik applications.

Nice-to-Have Skills:

  • Other BI Tools: Experience with other business intelligence tools such as Power BI and Tableau.
  • SQL & Data Querying: Familiarity with SQL for data querying and database management.
  • Cloud Platforms: Experience with cloud services like Azure, AWS, or Google Cloud in relation to BI and data solutions.
  • Programming Knowledge: Exposure to programming languages like Python or R.
  • Agile Methodologies: Understanding of Agile frameworks for project delivery.

Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Pune, Hyderabad, Ahmedabad, Chennai
3 - 7 yrs
₹8L - ₹15L / yr
AWS Lambda
Amazon S3
Amazon VPC
Amazon EC2
Amazon Redshift

Technical Skills:


  • Ability to understand and translate business requirements into design.
  • Proficient in AWS infrastructure components such as S3, IAM, VPC, EC2, and Redshift.
  • Experience in creating ETL jobs using Python/PySpark.
  • Proficiency in creating AWS Lambda functions for event-based jobs.
  • Knowledge of automating ETL processes using AWS Step Functions.
  • Competence in building data warehouses and loading data into them.


Responsibilities:


  • Understand business requirements and translate them into design.
  • Assess AWS infrastructure needs for development work.
  • Develop ETL jobs using Python/PySpark to meet requirements.
  • Implement AWS Lambda for event-based tasks (a handler sketch follows this list).
  • Automate ETL processes using AWS Step Functions.
  • Build data warehouses and manage data loading.
  • Engage with customers and stakeholders to articulate the benefits of proposed solutions and frameworks.
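
A sketch of the event-based pattern described above, assuming an S3 object-created trigger and a Step Functions state machine whose ARN is supplied via an environment variable; all names are placeholders:

```python
# Hypothetical Lambda handler: for each new S3 object, start a Step
# Functions execution that runs the downstream ETL.
import json
import os
import boto3

sfn = boto3.client("stepfunctions")

def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        sfn.start_execution(
            stateMachineArn=os.environ["ETL_STATE_MACHINE_ARN"],
            input=json.dumps({"bucket": bucket, "key": key}),
        )
    return {"status": "started", "records": len(records)}
```
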
Optisol Business Solutions Pvt Ltd
Posted by Veeralakshmi K
Remote, Chennai, Coimbatore, Madurai
4 - 10 yrs
₹10L - ₹15L / yr
Python
SQL
Amazon Redshift
Amazon RDS
AWS Simple Notification Service (SNS)

Role Summary


As a Data Engineer, you will be an integral part of our Data Engineering team, supporting an event-driven serverless data engineering pipeline on the AWS cloud. You will be responsible for assisting in the end-to-end analysis, development, and maintenance of data pipelines and systems (DataOps), and will work closely with fellow data engineers and production support to ensure the availability and reliability of data for analytics and business intelligence purposes.


Requirements:

  • Around 4 years of working experience in data warehousing / BI systems.
  • Strong hands-on experience with Snowflake and strong programming skills in Python (a connector sketch follows this list).
  • Strong hands-on SQL skills.
  • Knowledge of cloud databases such as Snowflake, Redshift, Google BigQuery, RDS, etc.
  • Knowledge of dbt for cloud databases.
  • Experience with AWS services such as SNS, SQS, ECS, Kinesis, and Lambda, plus Docker.
  • Solid understanding of ETL processes and data warehousing concepts.
  • Familiarity with version control systems (e.g., Git, Bitbucket) and collaborative development practices in an Agile framework.
  • Experience with Scrum methodologies.
  • Infrastructure build tools such as CloudFormation (CFT) or Terraform are a plus.
  • Knowledge of Denodo, data cataloguing tools, and data quality mechanisms is a plus.
  • Strong team player with good communication skills.
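
A hedged sketch of the Snowflake-plus-Python item above, using the snowflake-connector-python package; the account, credentials, and table names are placeholders:

```python
# Hypothetical ELT step: merge freshly staged rows into a core table.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",   # placeholder account identifier
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
)
try:
    cur = conn.cursor()
    cur.execute("""
        MERGE INTO core.orders AS t
        USING staging.raw_orders AS s
        ON t.order_id = s.order_id
        WHEN MATCHED THEN UPDATE SET t.status = s.status
        WHEN NOT MATCHED THEN INSERT (order_id, status)
            VALUES (s.order_id, s.status)
    """)
    conn.commit()
finally:
    conn.close()
```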


Overview: OptiSol Business Solutions

OptiSol was named on this year's Best Companies to Work For list by Great Place To Work. We are a team of 500+ Agile employees with a development center in India and global offices in the US, UK, Australia, Ireland, Sweden, and Dubai. In a joyful journey of 16+ years, we have built about 500+ digital solutions for 200+ happy and satisfied clients across 24 countries.


Benefits of working with OptiSol:

  • Great learning & development program.
  • Flextime, work-at-home & hybrid options.
  • A knowledgeable, high-achieving, experienced & fun team.
  • Spot awards & recognition.
  • The chance to be a part of the next success story.
  • A competitive base salary.


More than just a job, we offer an opportunity to grow. Are you looking to build your future and your dream? We have the job for you to make that dream come true.

A Product Based Client, Chennai

Agency job
via SangatHR by Anna Poorni
Chennai
4 - 8 yrs
₹10L - ₹15L / yr
Data Warehouse (DWH)
Informatica
ETL
Spark
PySpark

Analytics Job Description

We are hiring an Analytics Engineer to help drive our Business Intelligence efforts. You will partner closely with leaders across the organization, working together to understand the how and why of people, team, and company challenges, workflows, and culture. The team is responsible for delivering data and insights that drive decision-making, execution, and investments for our product initiatives.

You will work cross-functionally with product, marketing, sales, engineering, finance, and our customer-facing teams, enabling them with data and narratives about the customer journey. You'll also work closely with other data teams, such as data engineering and product analytics, to ensure we are creating a strong data culture at Blend that enables our cross-functional partners to be more data-informed.

Role: Data Engineer

Please find the JD for the Data Engineer role below.

Location: Guindy, Chennai

How you'll contribute:

  • Develop objectives and metrics, ensure priorities are data-driven, and balance short-term and long-term goals.
  • Develop deep analytical insights to inform and influence product roadmaps and business decisions, and help improve the consumer experience.
  • Work closely with GTM and supporting operations teams to author and develop core data sets that empower analyses.
  • Deeply understand the business and proactively spot risks and opportunities.
  • Develop dashboards and define metrics that drive key business decisions.
  • Build and maintain scalable ETL pipelines via solutions such as Fivetran, Hightouch, and Workato.
  • Design our Analytics and Business Intelligence architecture, assessing and implementing new technologies that fit.
  • Work with our engineering teams to continually make our data pipelines and tooling more resilient.


Who you are:

  • Bachelor's degree or equivalent required from an accredited institution with a quantitative focus such as Economics, Operations Research, Statistics, or Computer Science, OR 1-3 years of experience as a Data Analyst, Data Engineer, or Data Scientist.
  • Strong SQL and data modeling skills, with experience applying those skills to thoughtfully create data models in a warehouse environment.
  • A proven track record of using analysis to drive key decisions and influence change.
  • A strong storyteller with the ability to communicate effectively with managers and executives.
  • Demonstrated ability to define metrics for product areas, understand the right questions to ask, push back on stakeholders in the face of ambiguous, complex problems, and work with diverse teams with different goals.
  • A passion for documentation.
  • A solution-oriented growth mindset. You'll need to be a self-starter and thrive in a dynamic environment.
  • A bias towards communication and collaboration with business and technical stakeholders.
  • Quantitative rigor and systems thinking.
  • Prior startup experience is preferred, but not required.
  • Interest or experience in machine learning techniques (such as clustering, decision trees, and segmentation).
  • Familiarity with a scientific computing language, such as Python, for data wrangling and statistical analysis.
  • Experience with a SQL-focused data transformation framework such as dbt.
  • Experience with a Business Intelligence tool such as Mode or Tableau.


Mandatory Skillset:

  • Very strong in SQL
  • Spark, PySpark, or Python
  • Shell scripting


InfoCepts
Posted by Lalsaheb Bepari
Chennai, Pune, Nagpur
7 - 10 yrs
₹5L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark

Responsibilities:

  • Designing the Hive/HCatalog data model, including table definitions, file formats, and compression techniques for structured & semi-structured data processing (a Spark-with-Hive sketch follows this list).
  • Implementing Spark-based ETL frameworks.
  • Implementing big data pipelines for data ingestion, storage, processing & consumption.
  • Modifying the Informatica-Teradata & Unix-based data pipeline.
  • Enhancing the Talend-Hive/Spark & Unix-based data pipelines.
  • Developing and deploying Scala/Python-based Spark jobs for ETL processing.
  • Strong SQL & DWH concepts.
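
An illustrative Spark-with-Hive job for the first two items above; the database, table, and paths are placeholders:

```python
# Hypothetical job: define a partitioned Parquet-backed Hive table and
# load one day of semi-structured JSON into it.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("daily_ingest")
         .enableHiveSupport()
         .getOrCreate())

# Allow writes into partitions chosen at runtime
spark.sql("SET hive.exec.dynamic.partition=true")
spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")

spark.sql("""
    CREATE TABLE IF NOT EXISTS dwh.events (
        user_id STRING, event_type STRING, amount DOUBLE
    )
    PARTITIONED BY (event_date STRING)
    STORED AS PARQUET
""")

df = (spark.read.json("hdfs:///landing/events/2024-01-01/")
      .withColumn("event_date", F.lit("2024-01-01")))

# Columns must be ordered to match the table definition for insertInto
(df.select("user_id", "event_type", "amount", "event_date")
   .write.insertInto("dwh.events", overwrite=True))
```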

 

Preferred Background:

  • Function as an integrator between business needs and technology solutions, helping to create technology solutions to meet clients' business needs.
  • Lead project efforts in defining scope, planning, executing, and reporting to stakeholders on strategic initiatives.
  • Understanding of the business's EDW system; creating high-level design documents and low-level implementation documents.
  • Understanding of the business's Big Data Lake system; creating high-level design documents and low-level implementation documents.
  • Designing big data pipelines for data ingestion, storage, processing & consumption.

vThink Global Technologies
Posted by Balasubramanian Ramaiyar
Chennai
4 - 7 yrs
₹8L - ₹15L / yr
SQL
ETL
Informatica
Data Warehouse (DWH)
Stored Procedures
We are looking for a strong SQL Developer, well versed and hands-on in SQL, stored procedures, joins, and ETL; a data-savvy individual with advanced SQL skills.
Agiletech Info Solutions pvt ltd
Chennai
4 - 7 yrs
₹7L - ₹16L / yr
SQL Server
SSIS
ETL
ETL QA
ADF
  • Proficient in SQL Server/T-SQL programming, creating and optimizing complex stored procedures, UDFs, CTEs, and triggers (a pyodbc sketch follows this list).
  • Overall experience should be between 4 and 7 years.
  • Experience working in a data warehouse environment and a strong understanding of dimensional data modeling concepts; experience with SQL Server, DW principles, and SSIS.
  • Strong experience building data transformations with SSIS, including importing data from files and moving data from source to destination.
  • Creating new SSIS packages or modifying existing SSIS packages using SQL Server.
  • Debug and fine-tune SSIS processes to ensure accurate and efficient movement of data; experience with ETL testing & data validation.
  • 1+ years of experience with Azure services like Azure Data Factory, Data Flow, Azure Blob Storage, etc.
  • 1+ years of experience developing Azure Data Factory objects: ADF pipelines, configuration, parameters, variables, and integration runtimes.
  • Must be able to build Business Intelligence solutions in a collaborative, Agile development environment.
  • Reporting experience with Power BI or SSRS is a plus.
  • Experience working on an Agile/Scrum team preferred.
  • Proven strong problem-solving, troubleshooting, and root cause analysis skills.
  • Excellent written and verbal communication skills.
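
For the T-SQL and validation items above, a hedged sketch using pyodbc; the server, database, procedure, and table names are placeholders:

```python
# Hypothetical validation step: run a load procedure, then check row counts.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=etl-sql01;"
    "DATABASE=StagingDW;Trusted_Connection=yes;TrustServerCertificate=yes"
)
cur = conn.cursor()

# Execute a parameterized stored procedure (placeholder name and parameter)
cur.execute("EXEC dbo.usp_LoadDimCustomer @LoadDate = ?", "2024-01-01")
conn.commit()

# Simple post-load validation
cur.execute("SELECT COUNT(*) FROM dbo.DimCustomer")
loaded_rows = cur.fetchone()[0]
print(f"DimCustomer rows after load: {loaded_rows}")

conn.close()
```
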
Telecom Client

Agency job
via Eurka IT SOL by Srikanth a
Chennai
5 - 13 yrs
₹9L - ₹28L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
  • Demonstrable experience owning and developing big data solutions using Hadoop, Hive/HBase, Spark, Databricks, and ETL/ELT for 5+ years.
  • 10+ years of Information Technology experience, preferably with telecom / wireless service providers.
  • Experience in designing data solutions following Agile practices (SAFe methodology); designing for testability, deployability, and releasability; rapid prototyping, data modeling, and decentralized innovation.
  • DataOps mindset: allowing the architecture of a system to evolve continuously over time, while simultaneously supporting the needs of current users.
  • Create and maintain the Architectural Runway and non-functional requirements.
  • Design for the Continuous Delivery Pipeline (CI/CD data pipeline) and enable built-in quality & security from the start.
  • Ability to demonstrate an understanding, and ideally use, of at least one recognised architecture framework or standard, e.g., TOGAF, Zachman Architecture Framework, etc.
  • The ability to apply data, research, and professional judgment and experience to ensure our products are making the biggest difference to consumers.
  • Demonstrated ability to work collaboratively.
  • Excellent written, verbal, and social skills; you will be interacting with all types of people (user experience designers, developers, managers, marketers, etc.).
  • Ability to work in a fast-paced, multiple-project environment on an independent basis and with minimal supervision.
  • Technologies: .NET, AWS, Azure; Azure Synapse, NiFi, RDS, Apache Kafka, Azure Databricks, Azure Data Lake Storage, Power BI, Reporting Analytics, QlikView, on-prem SQL data warehouse; BSS, OSS & enterprise support systems.

Amagi Media Labs
Posted by Rajesh C
Bengaluru (Bangalore), Chennai
12 - 15 yrs
₹50L - ₹60L / yr
Data Science
Machine Learning (ML)
ETL
Data Warehouse (DWH)
Amazon Web Services (AWS)
Job Title: Data Architect
Job Location: Chennai

Job Summary

The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a Data Architecture strategy across various Data Lake platforms. You will help develop reference architectures and roadmaps to build highly available, scalable, and distributed data platforms using cloud-based solutions to process high-volume, high-velocity, and wide-variety structured and unstructured data. This role is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will influence how users interact with Condé Nast's industry-leading journalism.

Primary Responsibilities

The Data Architect is responsible for:

  • Demonstrated technology and personal leadership experience in architecting, designing, and building highly scalable solutions and products.
  • Enterprise-scale expertise in data management best practices such as data integration, data security, data warehousing, metadata management, and data quality.
  • Extensive knowledge and experience in architecting modern data integration frameworks and highly scalable distributed systems using open-source and emerging data architecture designs/patterns.
  • Experience building external cloud (e.g., GCP, AWS) data applications and capabilities is highly desirable.
  • Expert ability to evaluate, prototype, and recommend data solutions and vendor technologies and platforms.
  • Proven experience in relational, NoSQL, ELT/ETL technologies, and in-memory databases.
  • Experience with DevOps, Continuous Integration, and Continuous Delivery technologies is desirable.
  • This role requires 15+ years of data solution architecture, design, and development delivery experience.
  • Solid experience in Agile methodologies (Kanban and Scrum).

Required Skills

  • Very strong experience in building large-scale, high-performance data platforms.
  • Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational and NoSQL databases on cloud.
  • Proven leadership skills; demonstrated ability to mentor, influence, and partner with cross-functional teams to deliver scalable, robust solutions.
  • Mastery of relational database, NoSQL, ETL (such as Informatica, DataStage, etc.) / ELT, and data integration technologies.
  • Experience in at least one object-oriented programming language (Java, Scala, Python) and Spark.
  • Creative view of markets and technologies combined with a passion to create the future.
  • Knowledge of cloud-based distributed/hybrid data warehousing solutions and Data Lake knowledge is mandatory.
  • Good understanding of emerging technologies and their applications.
  • Understanding of code versioning tools such as GitHub, SVN, CVS, etc.
  • Understanding of Hadoop architecture and Hive SQL.
  • Knowledge of at least one workflow orchestration tool.
  • Understanding of Agile frameworks and delivery.

Preferred Skills:

  • Experience in AWS and EMR would be a plus.
  • Exposure to workflow orchestration tools like Airflow is a plus.
  • Exposure to any one NoSQL database would be a plus.
  • Experience in Databricks along with PySpark/Spark SQL would be a plus.
  • Experience with the digital media and publishing domain would be a plus.
  • Understanding of digital web events, ad streams, and context models.

About Condé Nast

CONDÉ NAST INDIA (DATA)

Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social platforms, accumulating a staggering amount of user data. Condé Nast made the right move to invest heavily in understanding this data and formed a whole new Data team entirely dedicated to data processing, engineering, analytics, and visualization. This team helps drive engagement, fuel process innovation, further content enrichment, and increase market revenue. The Data team aimed to create a company culture where data was the common language and facilitate an environment where insights shared in real time could improve performance.

The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team at Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and boost online revenue. We are broadly divided into four groups: Data Intelligence, Data Engineering, Data Science, and Operations (including Product and Marketing Ops and Client Services), along with Data Strategy and Monetization. The teams build capabilities and products to create data-driven solutions for better audience engagement.

What we look forward to:

We want to welcome bright, new minds into our midst and work together to create diverse forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the extraordinary. We are a media company for the future, with a remarkable past. We are Condé Nast, and It Starts Here.
Amagi Media Labs
Posted by Rajesh C
Chennai
15 - 18 yrs
Best in industry
Data architecture
Architecture
Data Architect
Architect
Java
Job Title: Data Architect
Job Location: Chennai
(The job summary, primary responsibilities, required skills, preferred skills, and About Condé Nast sections for this role are identical to the Data Architect posting above.)
Agilisium

Agency job
via Recruiting India by Moumita Santra
Chennai
10 - 19 yrs
₹12L - ₹40L / yr
Big Data
Apache Spark
Spark
PySpark
ETL

Job Sector: IT, Software

Job Type: Permanent

Location: Chennai

Experience: 10 - 20 Years

Salary: 12 – 40 LPA

Education: Any Graduate

Notice Period: Immediate

Key Skills: Python, Spark, AWS, SQL, PySpark

Contact at triple eight two zero nine four two double seven

 

Job Description:

Requirements

  • Minimum 12 years of experience.
  • In-depth understanding of and knowledge about distributed computing with Spark.
  • Deep understanding of Spark architecture and internals.
  • Proven experience in data ingestion, data integration, and data analytics with Spark, preferably PySpark (a minimal job sketch follows this list).
  • Expertise in ETL processes, data warehousing, and data lakes.
  • Hands-on with Python for big data and analytics.
  • Hands-on experience in the Agile Scrum model is an added advantage.
  • Knowledge of CI/CD and orchestration tools is desirable.
  • Knowledge of AWS S3, Redshift, and Lambda is preferred.
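
A minimal PySpark sketch in the spirit of the items above; the bucket and paths are placeholders (s3a:// on open-source Spark; plain s3:// on EMR):

```python
# Hypothetical daily aggregation: read raw JSON orders from S3, aggregate
# per customer, and write Parquet for a downstream Redshift load.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_agg").getOrCreate()

orders = spark.read.json("s3a://my-data-lake/raw/orders/2024-01-01/")

daily = (orders
         .filter(F.col("status") == "COMPLETE")
         .groupBy("customer_id")
         .agg(F.sum("amount").alias("total_amount"),
              F.count(F.lit(1)).alias("order_count")))

(daily.write.mode("overwrite")
      .parquet("s3a://my-data-lake/curated/orders_daily/2024-01-01/"))
```
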
Thanks
1CH
Posted by Sathish Sukumar
Chennai, Bengaluru (Bangalore), Hyderabad, NCR (Delhi | Gurgaon | Noida), Mumbai, Pune
4 - 15 yrs
₹10L - ₹25L / yr
Data engineering
Data engineer
ETL
SSIS
ADF
  • Expertise in designing and implementing enterprise scale database (OLTP) and Data warehouse solutions.
  • Hands-on experience in implementing Azure SQL Database, Azure SQL Data Warehouse (Azure Synapse Analytics), and big data processing using Azure Databricks and Azure HDInsight.
  • Expert in writing T-SQL programming for complex stored procedures, functions, views and query optimization.
  • Should be aware of Database development for both on-premise and SAAS Applications using SQL Server and PostgreSQL.
  • Experience in ETL and ELT implementations using Azure Data Factory V2 and SSIS.
  • Experience and expertise in building machine learning models using logistic and linear regression, decision tree, and random forest algorithms.
  • PolyBase queries for exporting and importing data into Azure Data Lake.
  • Building data models both tabular and multidimensional using SQL Server data tools.
  • Writing data preparation, cleaning, and processing steps using Python, Scala, and R.
  • Programming experience using python libraries NumPy, Pandas and Matplotlib.
  • Implementing NoSQL databases and writing queries using Cypher.
  • Designing end user visualizations using Power BI, QlikView and Tableau.
  • Experience working with all versions of SQL Server 2005/2008/2008R2/2012/2014/2016/2017/2019
  • Experience using the expression languages MDX and DAX.
  • Experience in migrating on-premise SQL server database to Microsoft Azure.
  • Hands on experience in using Azure blob storage, Azure Data Lake Storage Gen1 and Azure Data Lake Storage Gen2.
  • Performance tuning complex SQL queries, hands on experience using SQL Extended events.
  • Data modeling using Power BI for Adhoc reporting.
  • Raw data load automation using T-SQL and SSIS
  • Expert in migrating existing on-premise database to SQL Azure.
  • Experience in using U-SQL for Azure Data Lake Analytics.
  • Hands on experience in generating SSRS reports using MDX.
  • Experience in designing predictive models using Python and SQL Server.
  • Developing machine learning models using Azure Databricks and SQL Server
IT Giant

Agency job
Remote, Chennai, Bengaluru (Bangalore), Hyderabad, Pune, Mumbai, NCR (Delhi | Gurgaon | Noida), Kolkata
10 - 18 yrs
₹15L - ₹30L / yr
ETL
Informatica
Informatica PowerCenter
Windows Azure
SQL Azure
Key skills:
Informatica PowerCenter, Informatica Change Data Capture, Azure SQL, Azure Data Lake

Job Description:

  • Minimum of 15 years of experience with Informatica ETL and database technologies.
  • Experience with Azure database technologies, including Azure SQL Server and Azure Data Lake.
  • Exposure to change data capture technology.
  • Lead and guide development of an Informatica-based ETL architecture.
  • Develop solutions in a highly demanding environment and provide hands-on guidance to other team members.
  • Head complex ETL requirements and design.
  • Implement an Informatica-based ETL solution fulfilling stringent performance requirements.
  • Collaborate with product development teams and senior designers to develop architectural requirements.
  • Assess requirements for completeness and accuracy; determine if requirements are actionable for the ETL team.
  • Conduct impact assessments and determine the size of effort based on requirements.
  • Develop full SDLC project plans to implement the ETL solution and identify resource requirements.
  • Perform an active, leading role in shaping and enhancing the overall ETL Informatica architecture; identify, recommend, and implement ETL process and architecture improvements.
  • Assist with and verify the design of the solution and production of all design-phase deliverables.
  • Manage the build phase and quality-assure code to ensure it fulfils requirements and adheres to the ETL architecture.
A leading IT MNC

Agency job
Chennai
12 - 18 yrs
₹14L - ₹25L / yr
ETL
Informatica PowerCenter
Informatica
SQL Azure
ETL architecture
Job Description:

  • Minimum of 12 years of experience with Informatica ETL and database technologies.
  • Experience with Azure database technologies, including Azure SQL Server and Azure Data Lake.
  • Exposure to change data capture technology.
  • Lead and guide development of an Informatica-based ETL architecture.
  • Develop solutions in a highly demanding environment and provide hands-on guidance to other team members.
  • Head complex ETL requirements and design.
  • Implement an Informatica-based ETL solution fulfilling stringent performance requirements.
  • Collaborate with product development teams and senior designers to develop architectural requirements.
  • Assess requirements for completeness and accuracy.
  • Determine if requirements are actionable for the ETL team.
  • Conduct impact assessments and determine the size of effort based on requirements.
  • Develop full SDLC project plans to implement the ETL solution and identify resource requirements.
  • Perform an active, leading role in shaping and enhancing the overall ETL Informatica architecture; identify, recommend, and implement ETL process and architecture improvements.
  • Assist with and verify the design of the solution and production of all design-phase deliverables.
  • Manage the build phase and quality-assure code to ensure it fulfils requirements and adheres to the ETL architecture.