DB ETL Developer

at Digital Transformation services provider

Agency job
via HyrHub
Bengaluru (Bangalore), Mumbai
5 - 7 yrs
₹7L - ₹15L / yr
Full time
Skills
ETL
Databases
Informatica
Teradata
Principal Duties and Responsibilities:
  • Design, implement, and execute appropriate solutions and enhancements to improve system reliability and performance
  • Ensure project deadlines are met, are aligned with the needs of the business unit, and coincide with release management and governance
  • Ensure that operational aspects of supported applications are included in architectural standards
  • Produce service metrics, analyze trends, and identify opportunities to improve the level of service and reduce cost as appropriate (a minimal sketch follows this list)
  • Support implementation activities
  • Enable technical knowledge sharing across the team
  • Work with vendors on designated areas
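
The service-metrics duty above lends itself to simple trend aggregation. Below is a minimal, hedged Python sketch of that idea; the job-run records and their fields are invented for illustration, not taken from this role's actual systems.

# Minimal sketch: aggregate (hypothetical) ETL job-run records into a
# daily success-rate trend - the kind of service metric described above.
from collections import defaultdict

runs = [  # invented records: (date, job_name, succeeded)
    ("2022-03-01", "load_accounts", True),
    ("2022-03-01", "load_trades", False),
    ("2022-03-02", "load_accounts", True),
    ("2022-03-02", "load_trades", True),
]

by_day = defaultdict(lambda: [0, 0])  # date -> [succeeded, total]
for date, _job, ok in runs:
    by_day[date][0] += int(ok)
    by_day[date][1] += 1

for date in sorted(by_day):
    ok, total = by_day[date]
    print(f"{date}: {ok}/{total} runs succeeded ({ok / total:.0%})")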
Skills Required:
  • Strong background in relational databases, primarily Teradata, with 3+ years of experience
  • 3+ years of experience developing ETL processes using Informatica
  • 3+ years of experience with reporting tools such as Business Objects
  • Strong understanding of UNIX and shell scripting
  • Thorough knowledge of the Software Development Life Cycle (SDLC)
  • Excellent interpersonal and communication skills (verbal and written)

Skills Desired:
  • Exposure to the Hadoop ecosystem
  • Exposure to programming languages such as Python/Java
  • Exposure to Regulatory Reporting and Credit Risk

Nice to have:
  • Experience developing in ServiceNow (JavaScript, workflows, update sets)
  • Angular and Node.js experience a plus
  • Knowledge of database application concepts, SQL, and query optimization
  • Experience with web application user interface and usability concepts
  • Understanding of secure software development concepts, especially on a cloud platform
  • Experience with monitoring, event/alert management, and observability concepts a plus
  • Exposure to the financial industry
  • Source control (preferably Git) and continuous integration tools

Similar jobs

AGM Data Engineering

at ACT FIBERNET

Founded 2008  •  Services  •  100-1000 employees  •  Profitable
Data engineering
Data Engineer
Hadoop
Informatica
Qlikview
Datapipeline
Bengaluru (Bangalore)
9 - 14 yrs
₹20L - ₹36L / yr

Key Responsibilities:

  • Development of proprietary processes and procedures designed to process various data streams around critical databases in the org
  • Manage technical resources around data technologies, including relational databases, NoSQL DBs, business intelligence databases, scripting languages, big data tools and technologies, and visualization tools
  • Creation of a project plan, including timelines and critical milestones, in support of the project
  • Identification of the vital skill sets/staff required to complete the project
  • Identification of crucial sources of the data needed to achieve the objective

 

Skill Requirements:

  • Experience with data pipeline processes and tools
  • Well versed in the data domains (Data Warehousing, Data Governance, MDM, Data Quality, Data Catalog, Analytics, BI, Operational Data Store, Metadata, Unstructured Data, ETL, ESB)
  • Experience with an established ETL tool, e.g. Informatica, Ab Initio, etc.
  • Deep understanding of big data systems like Hadoop, Spark, YARN, Hive, Ranger, Ambari
  • Deep knowledge of the Qlik ecosystem: QlikView, Qlik Sense, and NPrinting
  • Proficiency in Python or a similar programming language
  • Exposure to data science and machine learning
  • Comfort working in a fast-paced environment

Soft attributes:

  • Independence: Must be able to work without constant direction or supervision, be self-motivated, and possess a strong work ethic, continually putting forth extra effort
  • Creativity: Must be able to generate imaginative, innovative solutions that meet the needs of the organization. You must be a strategic thinker/solution seller able to think of integrated solutions (with field force apps, customer apps, CCT solutions, etc.), approaching each unique situation/challenge in different ways using the same tools
  • Resilience: Must remain effective in high-pressure situations, using both positive and negative outcomes as an incentive to move forward toward fulfilling commitments to personal and team goals
Job posted by
Sumit Sindhwani

Senior Data Engineer

at Velocity.in

Founded 2019  •  Product  •  20-100 employees  •  Raised funding
ETL
Informatica
Data Warehouse (DWH)
Data engineering
Oracle
PostgreSQL
DevOps
Amazon Web Services (AWS)
NodeJS (Node.js)
Ruby on Rails (ROR)
React.js
Python
Bengaluru (Bangalore)
4 - 9 yrs
₹15L - ₹35L / yr

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

The technology stack at Velocity comprises a wide variety of cutting-edge technologies: NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.

 

Key Responsibilities

  • Build data and analytical engineering pipelines with standard ELT patterns, implement data compaction pipelines and data modelling, and oversee overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Write pipelines that consume data from multiple sources

  • Write a data transformation layer using DBT that transforms millions of records into the data warehouse (a minimal sketch follows this list)

  • Implement data warehouse entities with common reusable data model designs, with automation and data quality capabilities

  • Identify downstream implications of data loads/migrations (e.g., data quality, regulatory)
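
DBT models themselves are SQL; as a hedged, language-neutral sketch of the consume-from-multiple-sources-and-load step above, here is a minimal Python example. The source data, transform, and table are invented, and sqlite3 stands in for a real warehouse purely to keep the example self-contained.

# Minimal ELT sketch: consume records from multiple (hypothetical) sources,
# apply a light transform, and load into a stand-in warehouse table.
import sqlite3

def extract_sources():
    # Stand-ins for real source systems (APIs, OLTP replicas, files).
    payments = [{"id": 1, "amount": 250.0}, {"id": 2, "amount": 99.5}]
    refunds = [{"id": 3, "amount": -40.0}]
    return payments + refunds

def transform(rows):
    # Example transform: tag each record and drop zero-amount rows.
    return [
        (r["id"], r["amount"], "refund" if r["amount"] < 0 else "payment")
        for r in rows if r["amount"] != 0
    ]

def load(rows):
    con = sqlite3.connect(":memory:")  # a real pipeline would target Postgres/Oracle/etc.
    con.execute("CREATE TABLE fact_txn (id INTEGER, amount REAL, kind TEXT)")
    con.executemany("INSERT INTO fact_txn VALUES (?, ?, ?)", rows)
    total = con.execute("SELECT SUM(amount) FROM fact_txn").fetchone()[0]
    print(f"loaded {len(rows)} rows, net amount {total}")
    con.close()

load(transform(extract_sources()))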

 

What To Bring

  • 5+ years of software development experience; startup experience is a plus

  • Prior experience with Airflow and DBT is preferred

  • 5+ years of experience working in any backend programming language

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test-Driven Development)

  • Experience formulating ideas, building proofs-of-concept (POCs), and converting them into production-ready projects

  • Experience building and deploying applications on on-premise and cloud-based (AWS or Google Cloud) infrastructure

  • A basic understanding of Kubernetes and Docker is a must

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency in verbal and written English

 

Job posted by
Newali Hazarika

Data Engineer

at surusha technology Pvt Ltd

Founded 2016  •  Products & Services  •  20-100 employees  •  Profitable
C#
SQL Azure
ETL
OLAP
SQL
Data engineering
Remote only
3 - 6 yrs
₹3L - ₹6L / yr
Role: Data Engineer

Skillsets: Azure, OLAP, ETL, SQL, Python, C#

Experience range: 3+ to 4 years

Salary: best in industry

Notice period: currently serving notice period (immediate joiners are preferred)

Location: remote

Job type: permanent role

This is a full-time, fully remote position.


Note: The interview has three rounds: a technical round, a manager/client round, and an HR round.
Job posted by
subham kumar

SQL Developer

at Fragma Data Systems

Founded 2015  •  Products & Services  •  employees  •  Profitable
Data Warehouse (DWH)
Informatica
ETL
SQL
SSIS
Remote only
5 - 7 yrs
₹10L - ₹18L / yr
SQL Developer with 7 years of relevant experience and strong communication skills.
 
Key responsibilities:
 
 
  • Creating, designing, and developing data models
  • Preparing plans for all ETL (Extract/Transform/Load) procedures and architectures
  • Validating results and creating business reports
  • Monitoring and tuning data loads and queries
  • Developing and preparing a schedule for a new data warehouse
  • Analyzing large databases and recommending appropriate optimizations
  • Administering all requirements and designing various functional specifications for data
  • Providing support across the Software Development Life Cycle
  • Preparing various code designs and ensuring their efficient implementation
  • Evaluating all code and ensuring the quality of all project deliverables
  • Monitoring data warehouse work and providing subject matter expertise
  • Hands-on BI practices, data structures, data modeling, and SQL skills
 
 

Experience range: 5 - 10 years

Function: Information Technology

Desired skills
Must-have skills: SQL

Hard Skills for a Data Warehouse Developer:
 
  • Hands-on experience with ETL tools, e.g., DataStage, Informatica, Pentaho, Talend
  • Sound knowledge of SQL
  • Experience with SQL databases such as Oracle, DB2, and SQL Server
  • Experience using data warehouse platforms, e.g., SAP, Birst
  • Experience designing, developing, and implementing data warehouse solutions
  • Project management and system development methodology
  • Ability to proactively research solutions and best practices
 
Soft Skills for Data Warehouse Developers:
 
  • Excellent analytical skills
  • Excellent verbal and written communication
  • Strong organizational skills
  • Ability to work on a team, as well as independently
Job posted by
Sandhya JD

Data Engineer

at Easebuzz

Founded 2016  •  Product  •  100-500 employees  •  Raised funding
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
Data Analytics
Apache Kafka
SQL
Amazon Web Services (AWS)
Big Data
DynamoDB
MongoDB
EMR
Amazon Redshift
ETL
Data architecture
Data modeling
Pune
2 - 4 yrs
₹2L - ₹20L / yr

Company Profile:

 

Easebuzz is a payment solutions company (a fintech organisation) that enables online merchants to accept, process, and disburse payments through developer-friendly APIs. We are focusing on building plug-n-play products, including the payment infrastructure, to solve complete business problems. Definitely a wonderful place where all the action related to payments, lending, subscriptions, and eKYC is happening at the same time.

 

We have been consistently profitable and are constantly developing new innovative products; as a result, we have grown 4x over the past year alone. We are well capitalised and recently closed a fundraise of $4M in March 2021 from prominent VC firms and angel investors. The company is based out of Pune and has a total strength of 180 employees. Easebuzz’s corporate culture is tied into the vision of building a workplace which breeds open communication and minimal bureaucracy. An equal opportunity employer, we welcome and encourage diversity in the workplace. One thing you can be sure of is that you will be surrounded by colleagues who are committed to helping each other grow.

 

Easebuzz Pvt. Ltd. has a presence in Pune, Bangalore, and Gurugram.

 


Salary: As per company standards.

 

Designation: Data Engineer

 

Location: Pune

 

Experience with ETL, Data Modeling, and Data Architecture

Design, build, and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services in combination with 3rd-party tools: Spark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Glue.

Experience with AWS cloud data lake for development of real-time or near real-time use cases

Experience with messaging systems such as Kafka/Kinesis for real-time data ingestion and processing
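
As a hedged illustration of the real-time ingestion point above, here is a minimal Python consumer sketch using the kafka-python client; the topic name, broker address, and event payload are hypothetical stand-ins, not an actual Easebuzz setup.

# Minimal real-time ingestion sketch with kafka-python; the topic, broker
# and event fields are hypothetical.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "payment-events",                    # hypothetical topic
    bootstrap_servers="localhost:9092",  # hypothetical broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="ingest-demo",
)

for message in consumer:
    event = message.value
    # A real pipeline would land each event in a staging table or data lake;
    # here we just print the parsed payload.
    print(event)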

Build data pipeline frameworks to automate high-volume and real-time data delivery

Create prototypes and proof-of-concepts for iterative development.

Experience with NoSQL databases, such as DynamoDB, MongoDB, etc.

Create and maintain optimal data pipeline architecture.

Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.


Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.

Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.

Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.

Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.

Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.

Evangelize a very high standard of quality, reliability, and performance for data models and algorithms that can be streamlined into the engineering and science workflows.

Build and enhance data pipeline architecture by designing and implementing data ingestion solutions.

 

Employment Type

Full-time

 

Job posted by
Amala Baby

Lead Data Engineer

at Discite Analytics Private Limited

Founded 2020  •  Products & Services  •  20-100 employees  •  Raised funding
Hadoop
Big Data
Data engineering
Spark
Apache Beam
Apache Kafka
Data Warehouse (DWH)
ETL
Python
Java
Scala
C++
MySQL
PostgreSQL
MongoDB
Cassandra
Windows Azure
Amazon Web Services (AWS)
Ahmedabad
4 - 7 yrs
₹12L - ₹20L / yr
Responsibilities:
1. Communicate with the clients and understand their business requirements.
2. Build, train, and manage your own team of junior data engineers.
3. Assemble large, complex data sets that meet the client’s business requirements.
4. Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, including the cloud.
6. Assist clients with data-related technical issues and support their data infrastructure requirements.
7. Work with data scientists and analytics experts to strive for greater functionality.

Skills required (experience with most of these):
1. Experience with Big Data tools: Hadoop, Spark, Apache Beam, Kafka, etc.
2. Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
3. Experience in ETL and data warehousing.
4. Experience with and a firm understanding of relational and non-relational databases like MySQL, MS SQL Server, Postgres, MongoDB, Cassandra, etc.
5. Experience with cloud platforms like AWS, GCP, and Azure.
6. Experience with workflow management using tools like Apache Airflow (a minimal DAG sketch follows this list).
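
As a hedged illustration of item 6, below is a minimal Apache Airflow DAG sketch wiring an extract -> transform -> load sequence; the dag_id, schedule, and task callables are hypothetical placeholders, not Discite's actual workflow.

# Minimal Airflow DAG sketch: three Python tasks chained in sequence;
# all names and callables are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull raw records from a source system.
    return [{"id": 1, "amount": 100}]

def transform(ti):
    # Placeholder: enrich the extracted records pulled via XCom.
    rows = ti.xcom_pull(task_ids="extract")
    return [dict(r, doubled=r["amount"] * 2) for r in rows]

def load(ti):
    # Placeholder: write the transformed rows to the warehouse.
    print(ti.xcom_pull(task_ids="transform"))

with DAG(
    dag_id="daily_elt_example",  # hypothetical name
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load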
Job posted by
Uma Sravya B

DataStage Developer

at Datametica Solutions Private Limited

Founded 2013  •  Products & Services  •  100-1000 employees  •  Profitable
ETL
Data Warehouse (DWH)
IBM InfoSphere DataStage
DataStage
SQL
Linux/Unix
Pune
3 - 8 yrs
₹5L - ₹20L / yr

Datametica is Hiring for Datastage Developer

  • Must have 3 to 8 years of experience in ETL Design and Development using IBM Datastage Components.
  • Should have extensive knowledge of Unix shell scripting.
  • Understanding of DW principles (fact and dimension tables, dimensional modelling, and data warehousing concepts).
  • Research, develop, document, and modify ETL processes as per data architecture and modelling requirements.
  • Ensure appropriate documentation for all new development and modifications of the ETL processes and jobs.
  • Should be proficient in writing complex SQL queries.

About Us!

A global leader in data warehouse migration and modernization to the cloud, we empower businesses by migrating their data/workloads/ETL/analytics to the cloud, leveraging automation.

 

We have expertise in transforming legacy platforms such as Teradata, Oracle, Hadoop, Netezza, Vertica, and Greenplum, along with ETL tools like Informatica, DataStage, Ab Initio, and others, to cloud-based data warehousing, with further capabilities in data engineering, advanced analytics solutions, data management, data lakes, and cloud optimization.

 

Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.

 

We have our own products!

Eagle – Data warehouse Assessment & Migration Planning Product

Raven – Automated Workload Conversion Product

Pelican - Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.

 

Why join us!

Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over these years are the key factors in achieving our success.

 

Benefits we Provide!

Working with highly technical, passionate, mission-driven people

Subsidized Meals & Snacks

Flexible Schedule

Approachable leadership

Access to various learning tools and programs

Pet Friendly

Certification Reimbursement Policy

 

Check out more about us on our website below!

www.datametica.com

 

Job posted by
Sumangali Desai

Sr. Data Engineer ( a Fintech product company )

at Velocity.in

Founded 2019  •  Product  •  20-100 employees  •  Raised funding
Data engineering
Data Engineer
Big Data
Big Data Engineer
Python
Data Visualization
Data Warehouse (DWH)
Google Cloud Platform (GCP)
Data-flow analysis
Amazon Web Services (AWS)
PL/SQL
NOSQL Databases
PostgreSQL
ETL
data pipelining
Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹35L / yr

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

The technology stack at Velocity comprises a wide variety of cutting-edge technologies: NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.

 

Key Responsibilities

  • Build data and analytical engineering pipelines with standard ELT patterns, implement data compaction pipelines and data modelling, and oversee overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Write pipelines that consume data from multiple sources

  • Write a data transformation layer using DBT that transforms millions of records into the data warehouse

  • Implement data warehouse entities with common reusable data model designs, with automation and data quality capabilities

  • Identify downstream implications of data loads/migrations (e.g., data quality, regulatory)

 

What To Bring

  • 3+ years of software development experience; startup experience is a plus

  • Prior experience with Airflow and DBT is preferred

  • 2+ years of experience working in any backend programming language

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test-Driven Development)

  • Experience formulating ideas, building proofs-of-concept (POCs), and converting them into production-ready projects

  • Experience building and deploying applications on on-premise and cloud-based (AWS or Google Cloud) infrastructure

  • A basic understanding of Kubernetes and Docker is a must

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency in verbal and written English

 

 

 

Job posted by
chinnapareddy S

Business Intelligence Developer

at Symansys Technologies India Pvt Ltd

SSIS
Tableau
SQL Server Integration Services (SSIS)
Intelligence
Business Intelligence (BI)
ETL
Datawarehousing
Remote, Noida
6 - 9 yrs
₹15L - ₹23L / yr
We are hiring for a BI Developer.
Experience: 6-9 yrs
Location: Noida

Job Description:
  • Must have 3-4 years of experience in SSIS and MySQL
  • Good experience in Tableau
  • Experience in SQL Server
  • 1+ year of experience in Tableau
  • Knowledge of ETL tools
  • Knowledge of data warehousing
Job posted by
Preet Kaur

Senior Data Engineer

at Bookr Inc

Founded 2019  •  Products & Services  •  20-100 employees  •  Raised funding
Big Data
Hadoop
Spark
Data engineering
Data Warehouse (DWH)
ETL
EMR
Amazon Redshift
PostgreSQL
SQL
Scala
Java
Python
airflow
Remote, Chennai, Bengaluru (Bangalore)
4 - 7 yrs
₹15L - ₹35L / yr

In this role you'll get to:

  • Be part of the core team for the data platform, setting up the platform foundation while adhering to all required quality standards and design patterns
  • Write efficient, quality code that can scale
  • Adopt Bookr quality standards, and recommend process standards and best practices
  • Research, learn, and adapt new technologies to solve problems and improve existing solutions
  • Contribute to the engineering excellence backlog
  • Identify performance issues
  • Conduct effective code and design reviews
  • Improve the reliability of the overall production system by proactively identifying patterns of failure
  • Lead and mentor junior engineers by example
  • Take end-to-end ownership of stories (including design, serviceability, performance, and failure handling)
  • Strive hard to provide the best experience to anyone using our products
  • Conceptualise innovative and elegant solutions to solve challenging big data problems
  • Engage with Product Management and Business to drive the agenda, set your priorities, and deliver awesome products
  • Adhere to company policies, procedures, mission, values, and standards of ethics and integrity

 

On day one, we'll expect you to bring:

  • B.E/B.Tech from a reputed institution
  • Minimum 5 years of software development experience and at least a year of experience in leading/guiding people
  • Expert coding skills in Python/PySpark or Java/Scala
  • Deep understanding of the Big Data ecosystem - Hadoop and Spark
  • Project experience with Spark (a minimal PySpark sketch follows this list)
  • Ability to independently troubleshoot Spark jobs
  • Good understanding of distributed systems
  • Fast learning and quick adaptation to new technologies
  • High ownership and commitment
  • Expert hands-on experience with RDBMS
  • Ability to work independently as well as collaboratively in a team
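
As a hedged, self-contained illustration of the Spark experience asked for above, here is a minimal PySpark sketch of a grouped aggregation; the data and column names are invented for the example.

# Minimal PySpark sketch: build a tiny DataFrame and run a grouped
# aggregation; the data and column names are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("group_by_demo").getOrCreate()

df = spark.createDataFrame(
    [("orders", 10), ("orders", 20), ("refunds", 5)],
    ["stream", "value"],
)

# Sum values per stream - the same pattern scales to much larger datasets.
df.groupBy("stream").agg(F.sum("value").alias("total")).show()

spark.stop()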

 

Added bonuses you have.

  • Hands on experience with EMR/Glue/Data bricks
  • Hand on experience with Airflow
  • Hands on experience with AWS Big Data ecosystem

 

We are looking for passionate engineers who are always hungry for challenging problems. We believe in creating an opportunistic, yet balanced, work environment for savvy, entrepreneurial tech individuals. We are thriving on remote work, with the team working across multiple time zones.

 

 

  • Flexible hours & remote work - We are a results-focused bunch, so we encourage you to work whenever and wherever you feel most creative and focused.
  • Unlimited PTO - We want you to feel free to recharge your batteries when you need it!
  • Stock options - Opportunity to participate in the company stock plan.
  • Flat hierarchy - Team leaders at your fingertips.
  • BFC (stands for bureaucracy-free company) - We're action-oriented and don't bother with dragged-out meetings or pointless admin exercises; we'd rather get our hands dirty!
  • Working alongside leaders - Being part of the core team will give you the opportunity to work directly with the founding and management team.

 

Job posted by
Nimish Mehta