ETL architecture Jobs in Bangalore (Bengaluru)

Apply to 11+ ETL architecture Jobs in Bangalore (Bengaluru) on CutShort.io.
Signdesk
Posted by Anandhu Krishna
Bengaluru (Bangalore)
3 - 5 yrs
₹10L - ₹15L / yr
ETL architecture
MongoDB
Business Intelligence (BI)
Amazon Web Services (AWS)
Snowflake schema

We are seeking a skilled AWS ETL/ELT Data Architect with a specialization in MongoDB to join our team. The ideal candidate will have comprehensive knowledge and hands-on experience in designing, implementing, and managing ETL/ELT processes within AWS, along with proficiency in MongoDB database management. This role requires expertise in data architecture, AWS services, and MongoDB to optimize data solutions effectively.


Responsibilities:


● Design, architect, and implement ETL/ELT processes within AWS, integrating data from various sources into data lakes or warehouses, and utilising MongoDB as part of the data ecosystem.

● Collaborate cross-functionally to assess data requirements, analyze sources, and strategize effective data integration within AWS environments, considering MongoDB's role in the architecture.

● Construct scalable and high-performance data pipelines within AWS while integrating MongoDB for optimal data storage, retrieval, and manipulation.

● Develop comprehensive documentation from scratch, covering data architecture, data flows, and the interplay between AWS services, MongoDB, and ETL/ELT processes.

● Perform thorough data profiling, validation, and troubleshooting, ensuring data accuracy, consistency, and integrity in conjunction with MongoDB management.

● Stay updated with AWS and MongoDB best practices, emerging technologies, and industry trends to propose innovative data solutions and implementations.

● Provide mentorship to junior team members and foster collaboration with stakeholders to deliver robust data solutions.

● Analyze data issues, identifying and articulating the business impact of data problems.

● Perform code reviews and ensure that all solutions are aligned with pre-defined architectural standards, guidelines, and best practices, and meet quality standards


Qualifications:


● Bachelor's or Master’s degree in Computer Science, Information Technology, or related field.

● Minimum 5 years of hands-on experience in ETL/ELT development, data architecture, or similar roles.

● Experience delivering at least 3-4 live projects in a similar field is desirable.

● Expertise in designing and implementing AWS-based ETL/ELT processes using tools like AWS Glue, AWS Data Pipeline, etc.
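A listing like this is easiest to picture as a concrete pipeline. The sketch below is a hedged, minimal ETL shape only: in production the extract step would use pymongo against MongoDB and the load step would target a warehouse via AWS Glue or similar, but here an in-memory list stands in for the source collection and sqlite3 for the warehouse so the flow is runnable end to end. All names and figures are invented.

```python
import sqlite3

# Minimal ETL sketch. An in-memory list stands in for a MongoDB collection
# and sqlite3 for the warehouse, so the shape of the pipeline runs end to end.

def extract():
    # Stand-in for something like: mongo_client["app"]["orders"].find({...})
    return [
        {"_id": "o1", "customer": "acme", "amount_cents": 1250},
        {"_id": "o2", "customer": "acme", "amount_cents": 800},
        {"_id": "o3", "customer": "globex", "amount_cents": 4300},
    ]

def transform(docs):
    # Flatten documents into warehouse rows and convert cents to currency units.
    return [(d["_id"], d["customer"], d["amount_cents"] / 100) for d in docs]

def load(rows, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, customer TEXT, amount REAL)"
    )
    conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 63.5
```

The same extract/transform/load split carries over directly when the stand-ins are swapped for real MongoDB and warehouse connections.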

Bengaluru (Bangalore)
4 - 8 yrs
₹17L - ₹40L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Amazon Web Services (AWS)
Python
Role: Machine Learning Engineer

As a machine learning engineer on the team, you will
• Help science and product teams innovate in developing and improving end-to-end solutions for machine learning-based security/privacy controls
• Partner with scientists to brainstorm and create new ways to collect and curate data
• Design and build infrastructure critical to solving problems in privacy-preserving machine learning
• Help the team self-organize and follow machine learning best practices.

Basic Qualifications

• 4+ years of experience contributing to the architecture and design (architecture, design patterns, reliability, and scaling) of new and current systems
• 4+ years of programming experience with at least one modern language such as Java, C++, or C#, including object-oriented design
• 4+ years of professional software development experience
• 4+ years of experience as a mentor, tech lead, or leader of an engineering team
• 4+ years of professional software development experience in the Big Data and Machine Learning fields
• Knowledge of common ML frameworks such as TensorFlow and PyTorch
• Experience with cloud-provider machine learning tools such as AWS SageMaker
• Programming experience with at least two modern languages such as Python, Java, C++, or C#, including object-oriented design
• BS in Computer Science or equivalent
Red.Health
Posted by Mayur Bellapu
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹30L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark

Job Description: Data Engineer

We are looking for a curious Data Engineer to join our extremely fast-growing Tech Team at StanPlus

 

About RED.Health (Formerly Stanplus Technologies)

Get to know the team:

Join our team and help us build the world’s fastest and most reliable emergency response system using cutting-edge technology.

Because every second counts in an emergency, we are building systems and flows with four nines (99.99%) of reliability to ensure that our technology is always there when people need it the most. We are looking for distributed systems experts who can help us perfect the architecture behind our key design principles: scalability, reliability, programmability, and resiliency. Our system features a powerful dispatch engine that connects emergency service providers with patients in real time.

Key Responsibilities

●     Build Data ETL Pipelines

●     Develop data set processes

●     Apply strong analytical skills to work with unstructured datasets

●     Evaluate business needs and objectives

●     Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery

●     Interpret trends and patterns

●     Work with data and analytics experts to strive for greater functionality in our data system

●     Build algorithms and prototypes

●     Explore ways to enhance data quality and reliability

●     Work with the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.

●     Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.

 

Key Requirements

●     Proven experience of at least 3 years as a data engineer, software developer, or in a similar role.

●     Bachelor's / Master’s degree in data engineering, big data analytics, computer engineering, or related field.

●     Experience with big data tools: Hadoop, Spark, Kafka, etc.

●     Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.

●     Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.

●     Experience with Azure, AWS cloud services: EC2, EMR, RDS, Redshift

●     Experience with BigQuery

●     Experience with stream-processing systems: Storm, Spark-Streaming, etc.

●     Experience with languages: Python, Java, C++, Scala, SQL, R, etc.

●     Good hands-on experience with Hive and Presto.
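The workflow-management tools named above (Azkaban, Luigi, Airflow) all reduce to the same core idea: run tasks in dependency order. The sketch below illustrates that idea with the standard library only; the task names and the three-step pipeline are invented for illustration, not taken from any real DAG.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Tiny sketch of what workflow managers like Airflow or Luigi do at their
# core: execute tasks in topological (dependency) order.

results = {}

def extract():   results["extract"] = [3, 1, 2]
def transform(): results["transform"] = sorted(results["extract"])
def load():      results["load"] = f"loaded {len(results['transform'])} rows"

tasks = {"extract": extract, "transform": transform, "load": load}
# Each task maps to the set of tasks it depends on.
deps = {"extract": set(), "transform": {"extract"}, "load": {"transform"}}

# static_order() yields a valid execution order: extract, transform, load.
for name in TopologicalSorter(deps).static_order():
    tasks[name]()

print(results["load"])  # loaded 3 rows
```

Real orchestrators add scheduling, retries, and distributed execution on top, but the dependency graph is the same abstraction.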

 


Thoughtworks
Posted by Sunidhi Thakur
Bengaluru (Bangalore)
10 - 13 yrs
Best in industry
Data modeling
PySpark
Data engineering
Big Data
Hadoop

Lead Data Engineer

 

Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.

 

Job responsibilities

 

·      You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems

·      You will partner with teammates to create complex data processing pipelines in order to solve our clients' most ambitious challenges

·      You will collaborate with Data Scientists in order to design scalable implementations of their models

·      You will pair to write clean and iterative code based on TDD

·      Leverage various continuous delivery practices to deploy, support and operate data pipelines

·      Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available

·      Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions

·      Create data models and speak to the tradeoffs of different modeling approaches

·      On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product

·      Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process

·      Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes
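The data-modelling bullet above can be made concrete with a minimal star schema. This is a hedged illustration: the table and column names are invented and sqlite3 stands in for a real warehouse. A snowflake schema would further normalize the dimensions (for example, splitting region out of dim_store); the tradeoff is less redundancy versus more joins at query time.

```python
import sqlite3

# A minimal star schema: one fact table keyed to two dimension tables.
# All names and figures are illustrative.

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date  (date_id INTEGER PRIMARY KEY, day TEXT);
CREATE TABLE dim_store (store_id INTEGER PRIMARY KEY, name TEXT, region TEXT);
CREATE TABLE fact_sales (
    date_id  INTEGER REFERENCES dim_date(date_id),
    store_id INTEGER REFERENCES dim_store(store_id),
    amount   REAL
);
INSERT INTO dim_date  VALUES (1, '2024-01-01'), (2, '2024-01-02');
INSERT INTO dim_store VALUES (10, 'BLR-1', 'South'), (11, 'PUN-1', 'West');
INSERT INTO fact_sales VALUES (1, 10, 100.0), (2, 10, 50.0), (2, 11, 75.0);
""")

# Typical analytic query: aggregate the fact table along a dimension.
rows = conn.execute("""
    SELECT s.region, SUM(f.amount)
    FROM fact_sales f JOIN dim_store s USING (store_id)
    GROUP BY s.region ORDER BY s.region
""").fetchall()
print(rows)  # [('South', 150.0), ('West', 75.0)]
```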

 

Job qualifications

Technical skills

·      You are equally happy coding and leading a team to implement a solution

·      You have a track record of innovation and expertise in Data Engineering

·      You're passionate about craftsmanship and have applied your expertise across a range of industries and organizations

·      You have a deep understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop

·      You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting

·      Hands-on experience with MapR, Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)

·      You are comfortable taking data-driven approaches and applying data security strategy to solve business problems

·      You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments

·      Working with data excites you: you have created Big data architecture, you can build and operate data pipelines, and maintain data storage, all within distributed systems

 

Professional skills


·      Advocate your data engineering expertise to the broader tech community outside of Thoughtworks, speaking at conferences and acting as a mentor for more junior-level data engineers

·      You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives

·      An interest in coaching others, sharing your experience and knowledge with teammates

·      You enjoy influencing others and always advocate for technical excellence while being open to change when needed

Bengaluru (Bangalore)
2 - 8 yrs
₹4L - ₹10L / yr
Data governance
Data security
Data Analytics
Informatica
SQL

Job Description

We are looking for a senior resource with Analyst skills and knowledge of IT projects, to support delivery of risk mitigation activities and automation in Aviva’s Global Finance Data Office. The successful candidate will bring structure to this new role in a developing team, with excellent communication, organisational and analytical skills. The Candidate will play the primary role of supporting data governance project/change activities. Candidates should be comfortable with ambiguity in a fast-paced and ever-changing environment. Preferred skills include knowledge of Data Governance, Informatica Axon, SQL, AWS. In our team, success is measured by results and we encourage flexible working where possible.

Key Responsibilities

  • Engage with stakeholders to drive delivery of the Finance Data Strategy
  • Support data governance project/change activities in Aviva’s Finance function.
  • Identify opportunities and implement Automations for enhanced performance of the Team

Required profile

  • Relevant work experience in at least one of the following: business/project analysis, project/change management, or data analytics.
  • Proven track record of successfully communicating analytical outcomes, including the ability to communicate effectively with both business and technical teams.
  • Ability to manage multiple competing priorities and hold the team and stakeholders to account on progress.
  • Ability to contribute to, plan, and execute an end-to-end data governance framework.
  • Basic knowledge of IT systems/projects and the development lifecycle.
  • Experience gathering business requirements and producing reports.
  • Advanced experience of MS Excel data processing (VBA macros).
  • Good communication skills.

 

Additional Information

Degree in a quantitative or scientific field (e.g. Engineering, MBA Finance, Project Management) and/or experience in data governance/quality/privacy
Knowledge of Finance systems/processes
Experience in analysing large data sets using dedicated analytics tools

 

Designation – Assistant Manager TS

Location – Bangalore

Shift – 11 – 8 PM
AxionConnect Infosolutions Pvt Ltd
Posted by Shweta Sharma
Pune, Bengaluru (Bangalore), Hyderabad, Nagpur, Chennai
5.5 - 7 yrs
₹20L - ₹25L / yr
Django
Flask
Snowflake
Snowflake schema
SQL

Job Location: Hyderabad/Bangalore/ Chennai/Pune/Nagpur

Notice period: Immediate - 15 days

 

Python Developer with Snowflake

 

Job Description :


  1. 5.5+ years of strong Python development experience with Snowflake.
  2. Strong hands-on experience with SQL; able to write complex queries.
  3. Strong understanding of how to connect to Snowflake using Python and handle any type of file.
  4. Development of data analysis and data processing engines using Python.
  5. Good experience in data transformation using Python.
  6. Experience in Snowflake data loading using Python.
  7. Experience in creating user-defined functions in Snowflake.
  8. SnowSQL implementation.
  9. Knowledge of query performance tuning is an added advantage.
  10. Good understanding of data warehouse (DWH) concepts.
  11. Ability to interpret/analyze business requirements and functional specifications.
  12. DBT, Fivetran, and AWS knowledge is good to have.
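Loading data into Snowflake from Python typically means the snowflake-connector-python package plus a staged COPY INTO statement. The sketch below is hedged: the table, stage, and file-format names are invented, and the actual connection lines are left as comments (they need real credentials), so only the SQL-building helper runs.

```python
# Typical load path with the snowflake-connector-python package:
#
#   import snowflake.connector
#   conn = snowflake.connector.connect(user=..., password=..., account=...)
#   conn.cursor().execute(copy_into_sql("ORDERS", "@my_stage/orders/", "csv_fmt"))
#
# The connection needs real credentials, so only the SQL-building helper
# below is executed here. Table/stage/format names are illustrative.

def copy_into_sql(table: str, stage_path: str, file_format: str) -> str:
    """Build a COPY INTO statement for loading staged files into a table."""
    return (
        f"COPY INTO {table} "
        f"FROM {stage_path} "
        f"FILE_FORMAT = (FORMAT_NAME = '{file_format}') "
        f"ON_ERROR = 'ABORT_STATEMENT'"
    )

print(copy_into_sql("ORDERS", "@my_stage/orders/", "csv_fmt"))
```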
Cubera Tech India Pvt Ltd
Bengaluru (Bangalore), Chennai
5 - 8 yrs
Best in industry
Data engineering
Big Data
Java
Python
Hibernate (Java)

Data Engineer- Senior

Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.

What are you going to do?

Design & Develop high performance and scalable solutions that meet the needs of our customers.

Work closely with Product Management, Architects, and cross-functional teams.

Build and deploy large-scale systems in Java/Python.

Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

Create data tools for analytics and data scientist team members that assist them in building and optimizing their algorithms.

Follow best practices that can be adopted in Bigdata stack.

Use your engineering experience and technical skills to drive the features and mentor the engineers.

What are we looking for ( Competencies) :

Bachelor’s degree in computer science, computer engineering, or related technical discipline.

Overall 5 to 8 years of programming experience in Java and Python, including object-oriented design.

Data handling frameworks: Should have a working knowledge of one or more data handling frameworks such as Hive, Spark, Storm, Flink, Beam, Airflow, NiFi, etc.

Data Infrastructure: Should have experience in building, deploying and maintaining applications on popular cloud infrastructure like AWS, GCP etc.

Data Store: Must have expertise in one or more general-purpose NoSQL data stores such as Elasticsearch, MongoDB, Redis, Redshift, etc.

Strong sense of ownership, focus on quality, responsiveness, efficiency, and innovation.

Ability to work with distributed teams in a collaborative and productive manner.

Benefits:

Competitive Salary Packages and benefits.

Collaborative, lively and an upbeat work environment with young professionals.

Job Category: Development

Job Type: Full Time

Job Location: Bangalore

 

Ganit Business Solutions
Posted by Viswanath Subramanian
Remote, Chennai, Bengaluru (Bangalore), Mumbai
3 - 7 yrs
₹12L - ₹25L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
R Programming

Ganit has flipped the data science value chain as we do not start with a technique but for us, consumption comes first. With this philosophy, we have successfully scaled from being a small start-up to a 200 resource company with clients in the US, Singapore, Africa, UAE, and India. 

We are looking for experienced data enthusiasts who can make the data talk to them. 

 

You will: 

  • Understand business problems and translate business requirements into technical requirements. 
  • Conduct complex data analysis to ensure data quality & reliability i.e., make the data talk by extracting, preparing, and transforming it. 
  • Identify, develop and implement statistical techniques and algorithms to address business challenges and add value to the organization. 
  • Gather requirements and communicate findings in the form of a meaningful story with the stakeholders  
  • Build & implement data models using predictive modelling techniques. Interact with clients and provide support for queries and delivery adoption. 
  • Lead and mentor data analysts. 

 

We are looking for someone who has: 

 

  Apart from your love for data and your ability to code even while sleeping, you would need the following:
  • A minimum of 2 years of experience in designing and delivering data science solutions.
  • Successful retail/BFSI/FMCG/Manufacturing/QSR projects in your kitty to show off.
  • A deep understanding of various statistical techniques, mathematical models, and algorithms to start the conversation with the data in hand.
  • The ability to choose the right model for the data and translate that into code using R, Python, VBA, SQL, etc.
  • A Bachelor's/Master's degree in Engineering/Technology, an MBA from a Tier-1 B-school, or an M.Sc. in Statistics or Mathematics.

Skillset Required:

  • Regression
  • Classification
  • Predictive Modelling
  • Prescriptive Modelling
  • Python
  • R
  • Descriptive Modelling
  • Time Series
  • Clustering

What is in it for you: 

 

  • Be a part of building the biggest brand in Data science. 
  • An opportunity to be a part of a young and energetic team with a strong pedigree. 
  • Work on awesome projects across industries and learn from the best in the industry, while growing at a hyper rate. 

 

Please Note:  

 

At Ganit, we are looking for people who love problem solving. You are encouraged to apply even if your experience does not precisely match the job description above. Your passion and skills will stand out and set you apart—especially if your career has taken some extraordinary twists and turns over the years. We welcome diverse perspectives, people who think rigorously and are not afraid to challenge assumptions in a problem. Join us and punch above your weight! 

Ganit is an equal opportunity employer and is committed to providing a work environment that is free from harassment and discrimination. 

All recruitment, selection procedures and decisions will reflect Ganit’s commitment to providing equal opportunity. All potential candidates will be assessed according to their skills, knowledge, qualifications, and capabilities. No regard will be given to factors such as age, gender, marital status, race, religion, physical impairment, or political opinions. 

Persistent Systems
Agency job via Milestone Hr Consultancy, posted by Haina Khan
Bengaluru (Bangalore), Hyderabad, Pune
9 - 16 yrs
₹7L - ₹32L / yr
Big Data
Scala
Spark
Hadoop
Python
+1 more
Greetings!

We have an urgent requirement for the post of Big Data Architect in a reputed MNC.

Location: Pune/Nagpur, Goa, Hyderabad/Bangalore

Job Requirements:

  • 9+ years of total experience, preferably in the big data space.
  • Experience creating Spark applications using Scala to process data.
  • Experience in scheduling and troubleshooting/debugging Spark jobs in steps.
  • Experience in Spark job performance tuning and optimization.
  • Experience processing data using Kafka/Python.
  • Experience and understanding in configuring Kafka topics to optimize performance.
  • Proficiency in writing SQL queries to process data in a data warehouse.
  • Hands-on experience working with Linux commands to troubleshoot/debug issues and creating shell scripts to automate tasks.
  • Experience with AWS services such as EMR.
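Configuring Kafka topics for performance usually comes down to a handful of settings: partition count, replication, retention, and compression. The sketch below only builds such a configuration as a dict; the values are illustrative assumptions, not recommendations, and in production they would be applied through the Kafka AdminClient API or the kafka-topics.sh CLI.

```python
# Illustrative Kafka topic configuration for throughput tuning. The numbers
# are assumptions for the sketch, not recommendations.

def topic_config(partitions: int, retention_hours: int) -> dict:
    return {
        "num_partitions": partitions,   # parallelism ceiling for consumers
        "replication_factor": 3,        # durability across brokers
        "config": {
            # Topic-level settings as they appear in Kafka's config reference.
            "retention.ms": str(retention_hours * 3600 * 1000),
            "compression.type": "lz4",      # cheap CPU, decent ratio for logs
            "min.insync.replicas": "2",     # pairs with producer acks=all
        },
    }

cfg = topic_config(partitions=12, retention_hours=72)
print(cfg["config"]["retention.ms"])  # 259200000
```

More partitions raise consumer parallelism but also broker overhead, which is the kind of tradeoff the "optimize the performance" bullet is asking a candidate to reason about.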
Synapsica Technologies Pvt Ltd
Posted by Human Resources
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹20L / yr
Python
CI/CD
DVCS
Machine Learning (ML)
Kubernetes

Introduction

Synapsica (http://www.synapsica.com/) is a series-A funded (https://yourstory.com/2021/06/funding-alert-synapsica-healthcare-ivycap-ventures-endiya-partners/) HealthTech startup founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while being affordable. Every patient has the right to know exactly what is happening in their body, without having to rely on the cryptic two-liners given to them as a diagnosis.

Towards this aim, we are building an artificial-intelligence-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, Y Combinator, and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here's a small sample of what we're building: https://www.youtube.com/watch?v=FR6a94Tqqls


Your Roles and Responsibilities

We are looking for an experienced MLOps Engineer to join our engineering team and help us create dynamic software applications for our clients. In this role, you will be a key member of a team in decision making, implementations, development and advancement of ML operations of the core AI platform.

 

 

Roles and Responsibilities:

  • Work closely with a cross-functional team to serve business goals and objectives.
  • Develop, implement, and manage MLOps in cloud infrastructure for data preparation, deployment, monitoring, and retraining of models.
  • Design and build application containerisation and orchestration with Docker and Kubernetes on the AWS platform.
  • Build and maintain code, tools, and packages in the cloud.

Requirements:

  • At least 2 years of experience in data engineering.
  • At least 3 years of experience in Python, with familiarity with popular ML libraries.
  • At least 2 years of experience in model serving and pipelines.
  • Working knowledge of containers with Kubernetes and Docker in AWS.
  • Experience designing distributed systems deployment at scale.
  • Hands-on experience in coding and scripting.
  • Ability to write effective, scalable, and modular code.
  • Familiarity with Git workflows, CI/CD, and NoSQL databases such as MongoDB.
  • Familiarity with Airflow, DVC, and MLflow is a plus.
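The deployment/monitoring/retraining loop described above usually ends in a promotion gate: a retrained candidate model replaces the production model only if it is measurably better. The sketch below is an illustrative, assumption-laden version of such a gate; the metric names and thresholds are invented, and a real pipeline would pull metrics from a tracking system such as MLflow.

```python
# Sketch of a "retrain and promote" gate at the centre of many MLOps
# pipelines. Metric names and thresholds are illustrative assumptions.

def should_promote(candidate: dict, production: dict, min_gain: float = 0.01) -> bool:
    """Promote only on a meaningful AUC gain without a recall regression."""
    return (
        candidate["auc"] >= production["auc"] + min_gain
        and candidate["recall"] >= production["recall"]
    )

prod = {"auc": 0.91, "recall": 0.80}
cand = {"auc": 0.93, "recall": 0.82}
print(should_promote(cand, prod))  # True
```

In practice this check would run as a CI/CD step after retraining, with the winning model version tagged in the registry.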
Kaplan
Posted by Kavya N
Bengaluru (Bangalore)
3 - 7 yrs
₹10L - ₹16L / yr
Business Intelligence (BI)
BI Developer
SQL Server Reporting Services (SSRS)

Do you have a passion for using your skills to develop innovative technologies?  Are you interested in working on a team of professionals at a globally respected education organization and using your talents for building solutions that help thousands of students achieve success?  If so, join us and take your career to the next level. We are building a team of talented individuals to work on innovative products for education.

The Report Developer is a key contributor for information/data management solutions, who conceptualizes, designs, and manages a wide range of business, academic, and digital analytic solutions, and executes multiple projects including data sourcing, migration, quality, design, and implementation. The Report Developer partners with BI/Technology, Data Science, marketing/media, student enrollment advisors, faculty, academic administrators, strategy, planning, and creative/user experience teams to plan, execute, and evaluate a broad range of business, academic, marketing, and operations initiatives. Identifies best practices in producing reliable dashboards, analytics, and reporting. This role requires stakeholder interaction, so appropriate service orientation and communication skills are required.

 

Successful candidates will have good communication and presentation skills, critical thinking skills, and the ability to break down complex problems. Quick learners are preferred over many years of experience, and associates who aspire to make a difference over those who aim to just fill orders.

Key Responsibilities:

  • Collaborates with SP&A team members / client groups to determine technical requirements of pending or current projects and develops analytical plans including proper selection of methodologies, techniques, KPIs and metrics.

  • Ensures assigned work is completed for on time delivery.  

  • Performs extraction of data sets from multiple sources/platforms including: student systems/data warehouse, web analytic tools, social listening tools, search tools, syndicated data, research & survey tools, etc. Implements hygiene and quality control steps.

  • Conducts time series data analysis, segmentation, various metrics and key performance indicators, research & test design, significance testing, variance and growth calculations, forecasting, return on investment.

  • Applies industry best practices in research, cutting edge analytic solutions with big data platforms and software to efficiently and effectively manage data.  

  • Supports training within and across teams on complex data and infrastructure topics.

  • Builds business intelligence reporting, adhoc solutions and / or dashboards across various platforms. 

  • Develops and presents organized and clearly articulated analysis for stakeholder presentations.

  • Applies knowledge of multi-channel marketing/research/academic learning principles. 

  • Participates in the full lifecycle of Business Intelligence and Data Warehousing development process.
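A couple of the calculations named in the responsibilities above, growth calculations and return on investment, are simple enough to state exactly. The figures in this sketch are invented for illustration:

```python
# Worked examples of two analyses named above; all figures are invented.

def yoy_growth(current: float, prior: float) -> float:
    """Year-over-year growth rate: change relative to the prior period."""
    return (current - prior) / prior

def roi(gain: float, cost: float) -> float:
    """Return on investment: net gain relative to cost."""
    return (gain - cost) / cost

print(round(yoy_growth(1320, 1200), 3))           # 0.1 -> 10% growth
print(round(roi(gain=150_000, cost=100_000), 2))  # 0.5 -> 50% ROI
```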

Qualifications:

  • Bachelor's Degree (B.A./B.S.) or Master’s Degree (M.A./M.S.) in Computer Science, Decision Sciences, Information Management, or related fields.

  • 2-5 years of relevant experience in business, marketing, or academic reporting and analytics.

  • Strong analytical and critical thinking skills.

  • Strong written, verbal communications as well as presentation skills

  • Advanced knowledge of data specifications, data governance, data warehouse and data structures 

  • Proactive self-starter who can work collaboratively with cross-capability team members

  • Advanced knowledge and competency of Excel

  • In-depth knowledge of SQL, Tableau, Microsoft SQL Server, Reporting Services, Analysis Services, Transact-SQL, and Power BI

  • Familiarity with cloud-based technologies is a definite asset
