ETL Jobs in Bangalore (Bengaluru)

Explore top ETL job opportunities in Bangalore (Bengaluru) from top companies and startups. All jobs are added by verified employees who can be contacted directly.

at TIFIN FINTECH

2 recruiters
Posted by Vrishali Mishra
Bengaluru (Bangalore)
2 - 3 yrs
₹4L - ₹6L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+10 more

TIFIN is a fintech company backed by industry leaders including JP Morgan, Morningstar, Broadridge and Hamilton Lane.

 

We build engaging experiences through powerful AI and personalization. We leverage the combined power of investment intelligence, data science, and technology to make investing a more engaging experience and a more powerful driver of financial well-being.

 

At TIFIN, design and behavioral thinking enables engaging customer centered experiences along with software and application programming interfaces (APIs). We use investment science and intelligence to build algorithmic engines inside the software and APIs to enable better investor outcomes.

 

We hope to change the world of wealth the way personalized delivery has changed the world of movies, music and more. In a world where every individual is unique, we believe AI-based personalization that matches individuals to financial advice and investments is necessary to drive wealth goals.

OUR VALUES:

  • Shared Understanding through Listening and Speaking the Truth. We communicate with radical candor, precision and compassion to create a shared understanding. We challenge, but once a decision is made, commit fully. We listen attentively, speak candidly.
  • Teamwork for Teamwin. We believe in win together, learn together. We fly in formation. We cover each other’s backs. We inspire each other with our energy and attitude.
  • Make Magic for our Users. We center around the voice of the customer. With deep empathy for our clients, we create technology that transforms investor experiences.
  • Grow at the Edge. We are driven by personal growth. We get out of our comfort zone and keep egos aside to find our genius zones. We strive to be the best we can possibly be. No excuses.
  • Innovate with Creative Solutions. We believe that disruptive innovation begins with curiosity and creativity. We challenge the status quo and problem solve to find new answers.


WHAT YOU'LL BE DOING:


As part of TIFIN’s technology division, you will lead the DevOps function for a software product, demonstrating leadership abilities.

 

WHO ARE YOU:

  • Lead the end-to-end technology/infrastructure environments
  • Troubleshoot any issues that arise from deployments and other automations
  • Setup and manage security configurations
  • Implement systems/tools/processes to monitor performance and security integrity of the technology stack
  • Implement CI/CD from our source control platform (e.g., GitLab)
  • Develop automation tools and dashboards to manage and monitor the infrastructure
  • Provide technical guidance during software development
  • Stay current with industry trends and source new ways for our business to improve
  • Set up and decommission technology assets; maintain asset and configuration records
  • Maintain inventory of the relevant environments
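A bullet like "implement systems/tools/processes to monitor performance" often starts as a simple threshold check before a full observability stack exists. A minimal sketch in Python; the metric names and limits are hypothetical:

```python
# Minimal threshold-based health check: compare sampled metrics against
# configured limits and return the alerts to raise. Metric names and
# limit values are hypothetical examples, not a specific stack's API.

def check_thresholds(metrics: dict, limits: dict) -> list:
    """Return (metric, value, limit) tuples for every breached limit."""
    alerts = []
    for name, limit in limits.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append((name, value, limit))
    return alerts

if __name__ == "__main__":
    sampled = {"cpu_percent": 92.5, "disk_percent": 40.0, "p95_latency_ms": 310}
    limits = {"cpu_percent": 85.0, "disk_percent": 90.0, "p95_latency_ms": 250}
    for metric, value, limit in check_thresholds(sampled, limits):
        print(f"ALERT {metric}={value} exceeds {limit}")
```

In practice the same check would feed a dashboard or pager integration rather than `print`.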

Skills Required:

  • 2+ years of substantial experience in a DevOps engineering role
  • Strong experience designing and implementing highly available, scalable solutions
  • Expertise in planning/implementing BCP / DR policies in line with company objectives
  • Strong experience with Linux servers and their administration/troubleshooting
  • Strong understanding of Networking concepts and best practices
  • Working experience with Docker and Kubernetes
  • Hands-on experience with AWS & GCP services like VPC, EC2, S3, ELB, RDS, ECS/EKS, IAM, CloudFront, CloudWatch, SQS/SNS, App Engine, etc.
  • Strong experience with databases such as PostgreSQL and Redis
  • Knowledge of scripting languages such as Python and Bash
  • Expertise in Git (GitHub/ GitLab)
  • Experience working with Data Lakes and ETL pipelines
  • Experience with project workflow tools such as Jira in an Agile-Scrum environment
  • Experience with open-source technologies and cloud services
  • Strong communication & interpersonal skills and ability to explain protocol and processes to the team
  • Strong troubleshooting skills with the ability to spot issues before they become problems

 

COMPENSATION AND BENEFITS PACKAGE:
Competitive and commensurate to experience + discretionary annual bonus + ESOPs

About the Tifin Group: The Tifin group combines expertise in finance, technology, entrepreneurship and investing to start and help build a portfolio of brands and companies in areas of investments, wealth management and asset management.

TIFIN companies are centered around the user and emphasize design innovation to build operating systems. We focus on simplifying and democratizing financial science to make it more holistic and integral to users’ lives.


at EnterpriseMinds

2 recruiters
Posted by Rani Galipalli
Bengaluru (Bangalore), Pune, Mumbai
6 - 8 yrs
₹25L - ₹28L / yr
ETL
Informatica
Data Warehouse (DWH)
ETL management
SQL
+1 more

Your key responsibilities

 

  • Create and maintain optimal data pipeline architecture. Should have experience in building batch/real-time ETL Data Pipelines. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
  • The individual will be responsible for solution design, integration, data sourcing, transformation, database design and implementation of complex data warehousing solutions.
  • Responsible for development, support, maintenance, and implementation of a complex project module
  • Provide expertise in area and advanced knowledge of applications programming and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Resolve variety of high impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for complete reporting solutions.
  • Prepare the high-level design (HLD) covering the application architecture.
  • Prepare the low-level design (LLD) covering job design, job descriptions, and detailed job information.
  • Prepare and execute unit test cases.
  • Provide technical guidance and mentoring to application development teams throughout all the phases of the software development life cycle
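The responsibilities above all reduce to the same extract-transform-load shape, whether built in ADF or hand-rolled. A minimal batch ETL step sketched in Python with the stdlib; the table, columns, and sample feed are invented for illustration:

```python
# A minimal batch ETL step: extract rows from a CSV export, transform
# (strip, cast, drop invalid rows), and load into a SQL table.
# Table and column names are hypothetical.
import csv, io, sqlite3

RAW = "order_id,amount\n1, 250 \n2,-10\n3,125\n"

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Strip whitespace, cast types, and drop invalid (negative) amounts.
    cleaned = []
    for r in rows:
        amount = float(r["amount"].strip())
        if amount >= 0:
            cleaned.append((int(r["order_id"]), amount))
    return cleaned

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone())
```

A real-time variant replaces the CSV extract with a stream consumer, but the transform and load contracts stay the same.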

Skills and attributes for success

 

  • Strong experience in SQL. Proficient in writing performant SQL working with large data volumes. Proficiency in writing and debugging complex SQLs.
  • Strong experience with database systems on Microsoft Azure; experienced with Azure Data Factory.
  • Strong in Data Warehousing concepts. Experience with large-scale data warehousing architecture and data modelling.
  • Should have solid working experience with PowerShell scripting
  • Able to guide the team through the development, testing and implementation stages and review the completed work effectively
  • Able to make quick decisions and solve technical problems to provide an efficient environment for project implementation
  • Primary owner of delivery and timelines; review code written by other engineers.
  • Maintain highest levels of development practices including technical design, solution development, systems configuration, test documentation/execution, issue identification and resolution, writing clean, modular and self-sustaining code, with repeatable quality and predictability
  • Must have understanding of business intelligence development in the IT industry
  • Outstanding written and verbal communication skills
  • Should be adept in SDLC process - requirement analysis, time estimation, design, development, testing and maintenance
  • Hands-on experience in installing, configuring, operating, and monitoring CI/CD pipeline tools
  • Should be able to orchestrate and automate pipelines
  • Good to have: knowledge of distributed systems such as Hadoop, Hive, and Spark

 

To qualify for the role, you must have

 

  • Bachelor's Degree in Computer Science, Economics, Engineering, IT, Mathematics, or related field preferred
  • More than 6 years of experience in ETL development projects
  • Proven experience in delivering effective technical ETL strategies
  • Microsoft Azure project experience
  • Technologies: ETL- ADF, SQL, Azure components (must-have), Python (nice to have)

 

Ideally, you’ll also have


at Think n Solutions

2 recruiters
Posted by TnS HR
Bengaluru (Bangalore)
2 - 12 yrs
Best in industry
Microsoft SQL Server
SQL Server Integration Services (SSIS)
SQL Server Reporting Services (SSRS)
Amazon Web Services (AWS)
SQL Azure
+9 more

Criteria:

  • BE/MTech/MCA/MSc
  • 3+ years' hands-on experience in T-SQL / PL/SQL / PostgreSQL (PL/pgSQL), or NoSQL
  • Immediate joiners preferred*
  • Candidates will be selected based on logical/technical and scenario-based testing

 

Note: Candidates who have attended the interview process with TnS in the last 6 months will not be eligible.

 

Job Description:

 

  1. Technical Skills Desired:
    1. Experience in MS SQL Server and one of these Relational DB’s, PostgreSQL / AWS Aurora DB / MySQL / Oracle / NOSQL DBs (MongoDB / DynamoDB / DocumentDB) in an application development environment and eagerness to switch
    2. Design database tables, views, indexes
    3. Write functions and procedures for Middle Tier Development Team
    4. Work with any front-end developers in completing the database modules end to end (hands-on experience in parsing of JSON & XML in Stored Procedures would be an added advantage).
    5. Query Optimization for performance improvement
    6. Design & develop SSIS Packages or any other Transformation tools for ETL
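Query optimization (item 5) usually starts with confirming that an index is actually used for a lookup. A small illustration via SQLite through Python's sqlite3 module; table and index names are made up, and in MS SQL Server the same check is done through execution plans:

```python
# Illustrates index-backed query optimization: create a table, add an
# index on the lookup column, and inspect the query plan to confirm the
# index is used. All names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, branch TEXT, balance REAL)")
conn.executemany("INSERT INTO accounts (branch, balance) VALUES (?, ?)",
                 [("north", 100.0), ("south", 220.0), ("north", 50.0)])
conn.execute("CREATE INDEX idx_accounts_branch ON accounts (branch)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(balance) FROM accounts WHERE branch = ?",
    ("north",),
).fetchall()
print(plan)  # the plan detail should mention idx_accounts_branch
```

Without the index, the same plan would show a full table scan, which is the usual starting point for performance work on large tables.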

 

  2. Functional Skills Desired:
    1. Banking / Insurance / Retail domain experience would be a plus
    2. Client interaction experience would be a plus

  3. Good to Have Skills:
    1. Knowledge of a cloud platform (AWS / Azure)
    2. Knowledge of version control systems (SVN / Git)
    3. Exposure to quality and process management
    4. Knowledge of Agile methodology

  4. Soft Skills (additional):
    1. Team building (attitude to train, work along, mentor juniors)
    2. Communication skills (all kinds)
    3. Quality consciousness
    4. Analytical acumen to all business requirements

    5. Think out of the box for business solutions

at Klubworks

4 recruiters
Posted by Anupam Arya
Bengaluru (Bangalore)
3 - 6 yrs
₹12L - ₹18L / yr
Data Analytics
MS-Excel
MySQL
Python
Business Analysis
+9 more
We are looking to hire a Senior Data Analyst to join our data team. You will take responsibility for managing our master data set, developing reports, and troubleshooting data issues. To do well in this role you need a very fine eye for detail, experience as a data analyst, and a deep understanding of the popular data analysis tools and databases.

Responsibilities
  • Interpret data, analyze results using statistical techniques and provide ongoing reports
  • Develop and implement databases, data collection systems, data analytics, and other strategies that optimize statistical efficiency and quality
  • Acquire data from primary or secondary data sources and maintain databases/data systems
  • Identify, analyze, and interpret trends or patterns in complex data sets
  • Filter and clean data by reviewing computer reports, printouts, and performance indicators to locate and correct code problems
  • Work with the teams to prioritize business and information needs
  • Locate and define new process improvement opportunities
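The "filter and clean data" step above often amounts to flagging values that deviate sharply from the rest before they skew a report. A minimal sketch using only Python's statistics module; the z-score threshold of 2.0 and the sample data are illustrative choices:

```python
# Flag outliers as values more than `z` standard deviations from the
# mean: a common first-pass data-cleaning rule. The threshold and the
# sample data below are illustrative, not from any real dataset.
from statistics import mean, stdev

def find_outliers(values, z=2.0):
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > z * sigma]

daily_orders = [52, 49, 55, 51, 48, 50, 53, 190]  # 190 is a bad record
print(find_outliers(daily_orders))
```

In practice flagged rows would be routed for review rather than silently dropped, so the correction is traceable.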

Requirements- 
  • Minimum 3 years of working experience as a Data Analyst or Business Data Analyst
  • Technical expertise with data models, database design and development, data mining, and segmentation techniques
  • Strong knowledge of and experience with reporting packages (Business Objects, etc.), databases (SQL, etc.), and programming (XML, JavaScript, or ETL frameworks)
  • Knowledge of statistics and experience using statistical packages for analyzing datasets (Excel, SPSS, SAS, etc)
  • Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
  • Excellent written and verbal communication skills for coordinating across teams.
  • A drive to learn and master new technologies and techniques.

Talent500

Agency job
via Talent500 by ANSR, posted by Raghu R
Bengaluru (Bangalore)
1 - 10 yrs
₹5L - ₹30L / yr
Python
ETL
SQL
SQL Server Reporting Services (SSRS)
Data Warehouse (DWH)
+6 more

A proficient, independent contributor that assists in technical design, development, implementation, and support of data pipelines; beginning to invest in less-experienced engineers.

Responsibilities:

- Design, create, and maintain on-premise and cloud-based data integration pipelines.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data pipelines to enable BI, Analytics and Data Science teams that assist them in building and optimizing their systems
- Assists in the onboarding, training and development of team members.
- Reviews code changes and pull requests for standardization and best practices
- Evolve existing development to be automated, scalable, resilient, self-serve platforms
- Assist the team in the design and requirements gathering for technical and non technical work to drive the direction of projects

 

Technical & Business Expertise:

- Hands-on integration experience with SSIS/MuleSoft
- Hands-on experience with Azure Synapse
- Proven advanced database development experience in SQL Server
- Proven advanced understanding of Data Lake concepts
- Proven intermediate proficiency in writing Python or a similar programming language
- Intermediate understanding of cloud platforms (GCP)
- Intermediate understanding of Data Warehousing
- Advanced understanding of source control (GitHub)


at IntelliFlow Solutions Pvt Ltd

2 candid answers
Posted by Divyashree Abhilash
Remote, Bengaluru (Bangalore)
3 - 8 yrs
₹10L - ₹12L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+3 more
IntelliFlow.ai is a next-gen technology SaaS Platform company providing tools for companies to design, build and deploy enterprise applications with speed and scale. It innovates and simplifies the application development process through its flagship product, IntelliFlow. It allows business engineers and developers to build enterprise-grade applications to run frictionless operations through rapid development and process automation. IntelliFlow is a low-code platform that helps businesses succeed sooner through faster time-to-market.

Looking for an experienced candidate with strong development and programming experience, knowledge preferred-

  • Cloud computing (i.e. Kubernetes, AWS, Google Cloud, Azure)
  • Coming from a strong development background and has programming experience with Java and/or NodeJS (other programming languages such as Groovy/python are a big bonus)
  • Proficient with Unix systems and bash
  • Proficient with git/GitHub/GitLab/bitbucket

 

Desired skills-

  • Docker
  • Kubernetes
  • Jenkins
  • Experience in any scripting language (Python, Shell scripting, JavaScript)
  • NGINX / Load Balancer
  • Splunk / ETL tools
Agency job
via posterity consulting by Kapil Tiwari
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹14L / yr
Data engineering
Big Data
Google Cloud Platform (GCP)
ETL
Datawarehousing
+6 more
You'll have the following skills & experience:

• Problem Solving: resolving production issues to fix P1-P4 service issues, problems relating to introducing new technology, and major issues in the platform and/or service.
• Software Development Concepts: understands and is experienced with a wide range of programming concepts, and is aware of and has applied a range of algorithms.
• Commercial & Risk Awareness: able to understand and evaluate both obvious and subtle commercial risks, especially in relation to a programme.

Experience you would be expected to have:

• Cloud: experience with one of the following cloud vendors: AWS, Azure or GCP
• GCP: experience preferred, but willingness to learn is essential.
• Big Data: experience with Big Data methodology and technologies
• Programming: Python or Java, having worked with data (ETL)
• DevOps: understanding of how to work in a DevOps and agile way / versioning / automation / defect management (mandatory)
• Agile methodology: knowledge of Jira

at Klubworks

4 recruiters
Posted by Anupam Arya
Bengaluru (Bangalore)
3 - 6 yrs
Best in industry
Python
ETL
Informatica
Data Warehouse (DWH)
Big Data
+3 more
About the role
 
You will be building data pipelines that transform raw, unstructured data into formats data scientists can use for analysis. You will be responsible for creating and maintaining the analytics infrastructure that enables almost every other data function. This includes architectures such as databases, servers, and large-scale processing systems.
 
Responsibilities:
  • Responsible for setting up a scalable data warehouse
  • Building data pipeline mechanisms to integrate data from various sources for all of Klub’s data
  • Set up data as a service to expose the needed data via APIs
  • Develop a good understanding of how finance data works
  • Standardize and optimize design thinking across the technology team
  • Collaborate with stakeholders across engineering teams on short- and long-term architecture decisions
  • Build robust data models to support various reporting requirements for the business, ops, and leadership teams
  • Participate in peer reviews; provide code/design comments
  • Own the problem and deliver to success

Requirements:
  • Overall 3+ years of industry experience
  • Prior experience with backend and data engineering systems
  • At least 1 year of working experience with distributed systems
  • Deep understanding of the Python tech stack, including libraries and frameworks such as Flask, SciPy, NumPy, and pytest
  • Good understanding of Apache Airflow or similar orchestration tools
  • Good knowledge of data warehouse technologies such as Apache Hive, and of Apache PySpark or similar
  • Good knowledge of how to build analytics services on the data for different reporting and BI needs
  • Good knowledge of data pipeline/ETL tools such as Hevo Data, and of query engine technologies such as Trino / GraphQL
  • Deep understanding of dimensional data model concepts; familiarity with RDBMS (MySQL/PostgreSQL) and NoSQL (MongoDB/DynamoDB) databases, and caching (Redis or similar)
  • Should be proficient in writing SQL queries
  • Good knowledge of Kafka; able to write clean, maintainable code
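At its core, an orchestration tool like Airflow runs pipeline tasks in dependency order. That scheduling idea can be sketched with Python's stdlib graphlib; the task names are invented, and real Airflow layers scheduling, retries, and operators on top of this:

```python
# Sketch of DAG-ordered task execution, the core idea behind
# Airflow-style orchestration. Each task maps to the set of tasks it
# depends on; TopologicalSorter yields an order in which every
# dependency runs before its dependents. Task names are invented.
from graphlib import TopologicalSorter

dag = {
    "extract_orders": set(),
    "extract_users": set(),
    "transform": {"extract_orders", "extract_users"},
    "load_warehouse": {"transform"},
    "refresh_dashboard": {"load_warehouse"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # both extracts run before transform, which runs before load
```

The same sorter also exposes an incremental API (`prepare`/`get_ready`/`done`) that maps naturally onto running independent tasks in parallel.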
 
Nice to have
  • Built a data warehouse from scratch and set up a scalable data infrastructure
  • Prior experience in fintech would be a plus
  • Prior experience with data modelling

at Porter.in

4 recruiters
Posted by Satyajit Mittra
Bengaluru (Bangalore)
2 - 6 yrs
₹15L - ₹24L / yr
ETL
Informatica
Data Warehouse (DWH)
Amazon Redshift
Python
+4 more

Data Engineer II

 

Overview

We are looking for a savvy Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.

Data Engineering Stack

  • Warehouse : Redshift, S3, Snowflake
  • Database : PostgreSQL, Amazon Redshift
  • ETL : Airflow + DBT + Custom-made Python + Amundsen (Discovery) + Hevo/Fivetran
  • Analytics : Python / R / SQL + Excel / PPT, Google Colab
  • Business Intelligence / Visualization : Metabase + Google Data Studio
  • Frameworks : Spark + Dash + StreamLit
  • Collaboration : Git, Notion
  • Work Management : JIRA, ClickUp

Roles and Responsibilities

  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Build & design the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Snowflake Cloud Data Warehouse as well as PostgreSQL.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Perform root cause analysis on internal and external data, processes and share insights with different stakeholders using reporting and visualizations tools.
  • Work with other data engineers, data ingestion specialists, and experts across cross-functional teams.
  • Work on large datasets to meet functional and business requirements.
  • Work in an Agile environment with Scrum teams.
  • Implement processes and systems to monitor data quality, ensuring production and analytics data is always accurate and available on time.
  • Design, develop, maintain, and test scalable data pipelines to support continuing increase in data volume and complexity.
  • Write unit/integration tests, contribute to the engineering wiki, and prepare SOP documents.
  • Help build the infrastructure required for optimal ETL / ELT of data from a wide variety of data sources.
  • Adhere to all documented architecture, design & deployment standards & processes to ensure compliance with policies.
  • Contribute to innovation at team level.

Qualification & Skills required

  • Graduate degree in Engineering.
  • 3+ years of Experience in a Data Engineer or in a similar role.
  • 2+ years of Experience in database architecture, data warehousing, data modeling, consulting, business intelligence or analytics space with experience in PL/SQL, Snowflake, and an additional object-oriented programming language (e.g., Python, Java, JavaScript).
  • At least 1 year of experience with AWS services like Amazon Managed Workflows for Apache Airflow, Lambda, S3, CloudWatch, etc.
  • Must have experience with ETL / ELT processes and tools with incremental and CDC loads. Understanding of ETL / ELT patterns and when to use each.
  • Experience in reporting and visualization tools like Metabase, PowerBI, Tableau etc.
  • Expert in SQL (Postgres/Redshift/Snowflake or similar databases). Sound knowledge of and proven experience with CTEs (Common Table Expressions), aggregate functions, sub-queries, cursors, anonymous blocks, etc.
  • Sound knowledge of database concepts like entity-relationship modeling, data modeling, and DDL and DML statements.
  • Preferred experience with Snowflake Data Cloud and its features. Snowflake core certification is a plus.
  • Agile methodologies and experience working in Scrum preferred.
  • Great numerical and analytical skills, along with strong verbal and written communication skills.
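Among the SQL skills listed, a CTE feeding an aggregate is the workhorse pattern. A small runnable illustration using SQLite through Python's sqlite3; the `trips` table and its values are made up, and the same query runs essentially unchanged on Postgres/Redshift/Snowflake:

```python
# A CTE (WITH clause) feeding an aggregate: compute per-city totals in
# the CTE, then filter and sort on the aggregate in the outer query.
# Table and data are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trips (city TEXT, fare REAL)")
conn.executemany("INSERT INTO trips VALUES (?, ?)",
                 [("BLR", 120.0), ("BLR", 80.0), ("MUM", 250.0), ("DEL", 40.0)])

query = """
WITH city_totals AS (
    SELECT city, SUM(fare) AS total_fare, COUNT(*) AS rides
    FROM trips
    GROUP BY city
)
SELECT city, total_fare FROM city_totals
WHERE total_fare >= 100
ORDER BY total_fare DESC
"""
print(conn.execute(query).fetchall())
```

Filtering on `total_fare` has to happen outside the CTE (or in a HAVING clause) because the aggregate does not exist until the grouping is done, which is exactly the kind of detail the listing's "proven experience" asks for.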


Company Overview:

At Porter, we are passionate about improving productivity. We want to help businesses, large and small,  optimize their last-mile operations and empower them to unleash the growth of their core functions. Last-mile delivery logistics is one of the biggest and fastest growing sectors of the economy with a market cap upwards of 50 billion USD and a growth rate exceeding 15% CAGR.

Porter is the fastest growing leader in this sector, with operations in 15 major cities, a fleet exceeding 1L registered and 50k active driver-partners, and a customer base nearing a million, with 3.5M monthly active users. Our industry-best technology platform has raised over 150 million USD from investors including Sequoia Capital, Kae Capital, Mahindra Partners, LGT Aspada, Tiger Global and Vitruvian Partners.

We are addressing a massive problem and going after a huge market. We’re trying to create a household name in transportation and our ambition is to disrupt all facets of last-mile logistics including warehousing and LTL transportation. At Porter, we’re here to do the best work of our lives.

If you want to do the same and love the challenges and opportunities of a fast-paced work environment,  then we believe Porter is the right place for you.

Company URL: https://porter.in/


at Deep-Rooted.co (formerly Clover)

6 candid answers
1 video
Posted by Likhithaa D
Bengaluru (Bangalore)
3 - 6 yrs
₹12L - ₹15L / yr
Java
Python
SQL
AWS Lambda
HTTP
+5 more

Deep-Rooted.Co is on a mission to get Fresh, Clean, Community (Local farmer) produce from harvest to reach your home with a promise of quality first! Our values are rooted in trust, convenience, and dependability, with a bunch of learning & fun thrown in.


Founded out of Bangalore by Arvind, Avinash, Guru and Santosh, we have raised $7.5 million to date in Seed, Series A and debt funding from investors including Accel, Omnivore and Mayfield, among others. Our brand Deep-Rooted.Co, launched in August 2020, was a first of its kind in India’s fruits & vegetables (F&V) space; it is present in Bangalore & Hyderabad and on a journey of expansion to newer cities, managed seamlessly through a tech platform designed and built to transform the agri-tech sector.


Deep-Rooted.Co is committed to building a diverse and inclusive workplace and is an equal-opportunity employer.  

How is this possible? It’s because we work with smart people. We are looking for Engineers in Bangalore to work with the Product Leader (Founder) and CTO and this is a meaningful project for us and we are sure you will love the project as it touches everyday life and is fun. This will be a virtual consultation.


We want to start the conversation about the project we have for you, but before that, we want to connect with you to know what’s on your mind. Do drop a note sharing your mobile number and letting us know when we can catch up.

Purpose of the role:

* As a startup we have data distributed across various sources like Excel, Google Sheets, databases, etc. As we grow, we need swift decision-making based on all the data that exists. You will help us bring this data together and put it in a data model that can be used in business decision-making.
* Handle the nuances of the Excel and Google Sheets APIs.
* Pull data in and manage its growth, freshness, and correctness.
* Transform data into a format that aids easy decision-making for Product, Marketing and Business Heads.
* Understand the business problem, solve it using technology, and take it to production - no hand-offs; the full path to production is yours.

Technical expertise:
* Good knowledge of and experience with programming languages - Java, SQL, Python.
* Good knowledge of data warehousing and data architecture.
* Experience with data transformations and ETL.
* Experience with API tools and more closed systems like Excel, Google Sheets, etc.
* Experience with the AWS cloud platform and Lambda.
* Experience with distributed data processing tools.
* Experience with container-based deployments on cloud.
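Sheet data pulled over the Excel/Google Sheets APIs typically arrives as rows of strings, so keeping it fresh and correct means parsing, validating, and keeping the latest record per key. A stdlib-only sketch; the feed format and field names are hypothetical stand-ins for a sheet export:

```python
# Normalize sheet-style rows: parse string cells, drop malformed rows,
# and keep only the latest record per product using an ISO date column
# (ISO dates compare correctly as strings). The feed and field names
# are hypothetical stand-ins for a Sheets/Excel export.
import csv, io

FEED = """product,price,updated
tomato,40,2023-01-10
tomato,35,2023-01-12
spinach,abc,2023-01-11
okra,55,2023-01-09
"""

def latest_prices(text):
    latest = {}
    for row in csv.DictReader(io.StringIO(text)):
        try:
            price = float(row["price"])
        except ValueError:
            continue  # drop malformed cells instead of failing the load
        key = row["product"]
        if key not in latest or row["updated"] > latest[key][1]:
            latest[key] = (price, row["updated"])
    return {k: v[0] for k, v in latest.items()}

print(latest_prices(FEED))
```

A production version would log the dropped rows so data correctness issues surface instead of vanishing.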

Skills:
Java, SQL, Python, Data Build Tool, Lambda, HTTP, Rest API, Extract Transform Load.

Diggibyte Technology private Limited

Agency job
Bengaluru (Bangalore)
5 - 7 yrs
₹9L - ₹15L / yr
ETL
Informatica
Data Warehouse (DWH)
SQL
Dimensional modeling
+1 more

Data Modeler JD:

1. Understand and translate business needs into dimensional models supporting long-term solutions
2. Experience building models in ERwin or similar tools
3. Experience with and understanding of dimensional data models, customer 360, and entity-relationship models
4. Work with the development team to implement data strategies, build data flows and develop conceptual data models
5. Create logical and physical data models using best practices to ensure high data quality and reduced redundancy
6. Optimize and update logical and physical data models to support new and existing projects
7. Maintain conceptual, logical, and physical data models along with corresponding metadata
8. Develop best practices for standard naming conventions and coding practices to ensure consistency of data models
9. Recommend opportunities for reuse of data models in new environments
10. Perform reverse engineering of physical data models from databases and SQL scripts
11. Evaluate models and physical databases for variances and discrepancies
12. Validate business data objects for accuracy and completeness
13. Analyze data-related system integration challenges and propose appropriate solutions
14. Develop data models according to company standards
15. Guide system analysts, engineers, programmers and others on project limitations and capabilities, performance requirements and interfaces
16. Good to have: home appliance/retail domain knowledge and Azure Synapse
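A dimensional model of the kind described above centers a fact table on keys into dimension tables. A minimal star-schema sketch via SQLite in Python; the tables, columns, and data are invented for illustration, and a tool like ERwin would generate equivalent DDL:

```python
# Minimal star schema: a sales fact table referencing product and date
# dimensions, followed by a typical slice-by-dimension query.
# All table and column names are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, iso_date TEXT, month TEXT);
CREATE TABLE fact_sales (
    product_key INTEGER REFERENCES dim_product(product_key),
    date_key    INTEGER REFERENCES dim_date(date_key),
    units       INTEGER,
    revenue     REAL
);
INSERT INTO dim_product VALUES (1, 'Washer', 'Appliance'), (2, 'Mixer', 'Kitchen');
INSERT INTO dim_date VALUES (10, '2023-01-05', '2023-01'), (11, '2023-02-02', '2023-02');
INSERT INTO fact_sales VALUES (1, 10, 2, 900.0), (2, 10, 5, 250.0), (1, 11, 1, 450.0);
""")

rows = conn.execute("""
SELECT p.category, d.month, SUM(f.revenue)
FROM fact_sales f
JOIN dim_product p ON p.product_key = f.product_key
JOIN dim_date d ON d.date_key = f.date_key
GROUP BY p.category, d.month
ORDER BY p.category, d.month
""").fetchall()
print(rows)
```

Because every analytic question becomes "aggregate the fact, sliced by dimensions", the same two joins answer category, month, or product questions without reshaping the data.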

 

Job Functions: Information Technology 

 

Employment Type - Full-time 

 

Thank you!

 

 


at Mobile Programming LLC

1 video
34 recruiters
Posted by keerthi varman
Bengaluru (Bangalore)
3 - 8 yrs
₹10L - ₹14L / yr
Oracle SQL Developer
PL/SQL
ETL
Informatica
Data Warehouse (DWH)
+4 more
The role and responsibilities of Oracle or PL/SQL Developer and Database Administrator:
• Working Knowledge of XML, JSON, Shell and other DBMS scripts
• Hands on Experience on Oracle 11G,12c. Working knowledge of Oracle 18 and 19c
• Analysis, design, coding, testing, debugging and documentation. Complete knowledge of
Software Development Life Cycle (SDLC).
• Writing Complex Queries, stored procedures, functions and packages
• Knowledge of REST Services, UTL functions, DBMS functions and data integration is required
• Good knowledge on table level partitions, row locks and experience in OLTP.
• Should be aware about ETL tools, Data Migration, Data Mapping functionalities
• Understand the business requirement, transform/design the same into business solutions.
Perform data modelling and implement the business rules using Oracle database objects.
• Define source to target data mapping and data transformation logic as per the business
need.
• Should have worked on Materialised views creation and maintenance. Experience in
Performance tuning, impact analysis required
• Monitoring and optimizing the performance of the database. Planning for backup and
recovery of database information. Maintaining archived data. Backing up and restoring
databases.
• Hands on Experience on SQL Developer
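The source-to-target data mapping and transformation responsibility described above can be sketched in plain Python. This is a hedged illustration only: the column names, rename rules, and type casts are hypothetical examples, not taken from any actual Oracle schema.

```python
# Minimal sketch of source-to-target data mapping for an ETL step.
# All column names and transformation rules here are hypothetical examples.

SOURCE_TO_TARGET = {
    "cust_nm": "customer_name",   # rename
    "dob": "date_of_birth",       # rename
    "bal": "account_balance",     # rename + cast to float below
}

def transform_row(source_row: dict) -> dict:
    """Apply the mapping and simple transformation logic to one source row."""
    target = {}
    for src_col, tgt_col in SOURCE_TO_TARGET.items():
        value = source_row.get(src_col)
        if tgt_col == "account_balance" and value is not None:
            value = float(value)           # type-cast rule
        if isinstance(value, str):
            value = value.strip().upper()  # normalisation rule
        target[tgt_col] = value
    return target

row = transform_row({"cust_nm": " asha ", "dob": "1990-01-01", "bal": "120.50"})
print(row)  # {'customer_name': 'ASHA', 'date_of_birth': '1990-01-01', 'account_balance': 120.5}
```

In a real Oracle setting the same logic would typically live in PL/SQL packages or an ETL tool's mapping layer; the point here is only the source-to-target shape of the work.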

Tata Digital Pvt Ltd

Agency job
via Seven N Half by Priya Singh
Mumbai, Bengaluru (Bangalore)
10 - 15 yrs
₹20L - ₹37L / yr
Service Integration and Management
Environment Specialist
ETL
Test cases
  • Implementing Environment solutions for projects in a dynamic corporate environment
  • Communicating and collaborating with project and technical teams on Environment requirements, delivery and support
  • Delivering and Maintaining Environment Management Plans, Bookings, Access Details and Schedules for Projects
  • Working with Environment Team on Technical Environment Delivery Solutions
  • Troubleshooting, managing and tracking Environment Incidents & Service Requests in conjunction with technical teams and external partners via the service management tool
  • Leadership support in the North Sydney office
  • Mentoring, guiding and leading other team members
  • Creation of new test environments
  • Provisioning infrastructure and platform
  • Test environment configuration (module, system, sub-module)

  • Test data provisioning (privatization, traceability, ETL, segregation)
  • Endpoint integration
  • Monitoring the test environment
  • Updating/deleting outdated test-environments and their details
  • Investigation of test environment issues and, at times, co-ordination until resolution

at Quicken Inc

2 recruiters
DP
Posted by Shreelakshmi M
Bengaluru (Bangalore)
5 - 8 yrs
Best in industry
ETL
Informatica
Data Warehouse (DWH)
Python
ETL QA
+1 more
  • Graduate+ in Mathematics, Statistics, Computer Science, Economics, Business, Engineering or equivalent work experience.
  • Total experience of 5+ years with at least 2 years in managing data quality for high scale data platforms.
  • Good knowledge of SQL querying.
  • Strong skill in analysing data and uncovering patterns using SQL or Python.
  • Excellent understanding of data warehouse/big data concepts such as data extraction, data transformation, and data loading (the ETL process).
  • Strong background in automation and building automated testing frameworks for data ingestion and transformation jobs.
  • Experience in big data technologies a big plus.
  • Experience in machine learning, especially in data quality applications a big plus.
  • Experience in building data quality automation frameworks a big plus.
  • Strong experience working with an Agile development team with rapid iterations. 
  • Very strong verbal and written communication, and presentation skills.
  • Ability to quickly understand business rules.
  • Ability to work well with others in a geographically distributed team.
  • Keen observation skills to analyse data, highly detail oriented.
  • Excellent judgment, critical-thinking, and decision-making skills; can balance attention to detail with swift execution.
  • Able to identify stakeholders, build relationships, and influence others to get work done.
  • Self-directed and self-motivated individual who takes complete ownership of the product and its outcome.
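The automated data-quality testing called for above can be sketched as a simple batch check. This is a hedged illustration under assumptions: the field names, null-rate threshold, and failure-reporting shape are hypothetical, not Quicken's actual framework.

```python
# Hedged sketch of an automated data-quality check for ingested data:
# validate completeness and per-field null rates on a batch of rows.
# Thresholds and field names are illustrative assumptions.

def check_quality(rows, required_fields, max_null_rate=0.05):
    """Return a dict of failures: field -> observed null rate above threshold."""
    failures = {}
    if not rows:
        return {"_empty_batch": 1.0}  # completeness failure: nothing ingested
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) in (None, ""))
        rate = nulls / len(rows)
        if rate > max_null_rate:
            failures[field] = rate
    return failures

batch = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": None}]
print(check_quality(batch, ["id", "amount"]))  # {'amount': 0.5}
```

In practice checks like this would run inside a test framework or a tool such as an assertion layer over the warehouse, gating each ingestion and transformation job.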

Japanese MNC

Agency job
via CIEL HR Services by sundari chitra
Bengaluru (Bangalore)
5 - 10 yrs
₹7L - ₹15L / yr
ETL
Informatica
Data Warehouse (DWH)
IBM InfoSphere DataStage
Datastage
We are looking for an ETL DataStage developer for a Japanese MNC.

Role: ETL DataStage Developer

Experience: 5 years

Location: Bangalore (WFH as of now).

Roles:

Design, develop, and schedule DataStage ETL jobs to extract data from disparate source systems, transform, and load data into EDW for data mart consumption, self-service analytics, and data visualization tools. 

Provides hands-on technical solutions to business challenges & translates them into process/technical solutions. 

Conduct code reviews to communicate high-level design approaches with team members to validate strategic business needs and architectural guidelines are met.

 Evaluate and recommend technical feasibility and effort estimates of proposed technology solutions. Provide operational instructions for dev, QA, and production code deployments while adhering to internal Change Management processes. 

Coordinate Control-M scheduler jobs and dependencies Recommend and implement ETL process performance tuning strategies and methodologies. Conducts and supports data validation, unit testing, and QA integration activities. 

Compose and update technical documentation to ensure compliance to department policies and standards. Create transformation queries, stored procedures for ETL processes, and development automations. 

Interested candidates can forward their profiles.
Posted by Alfiya Khan
Pune, Bengaluru (Bangalore)
6 - 8 yrs
₹15L - ₹25L / yr
Big Data
Data Warehouse (DWH)
Data modeling
Apache Spark
Data integration
+10 more
Company Profile
XpressBees – a logistics company started in 2015 – is amongst the fastest growing
companies of its sector. While we started off rather humbly in the space of
ecommerce B2C logistics, the last 5 years have seen us steadily progress towards
expanding our presence. Our vision to evolve into a strong full-service logistics
organization reflects itself in our new lines of business like 3PL, B2B Xpress and cross
border operations. Our strong domain expertise and constant focus on meaningful
innovation have helped us rapidly evolve as the most trusted logistics partner of
India. We have progressively carved our way towards best-in-class technology
platforms, an extensive network reach, and a seamless last mile management
system. While on this aggressive growth path, we seek to become the one-stop-shop
for end-to-end logistics solutions. Our big focus areas for the very near future
include strengthening our presence as service providers of choice and leveraging the
power of technology to improve efficiencies for our clients.

Job Profile
As a Lead Data Engineer in the Data Platform Team at XpressBees, you will build the data platform
and infrastructure to support high quality and agile decision-making in our supply chain and logistics
workflows.
You will define the way we collect and operationalize data (structured / unstructured), and
build production pipelines for our machine learning models, and (RT, NRT, Batch) reporting &
dashboarding requirements. As a Senior Data Engineer in the XB Data Platform Team, you will use
your experience with modern cloud and data frameworks to build products (with storage and serving
systems)
that drive optimisation and resilience in the supply chain via data visibility, intelligent decision making,
insights, anomaly detection and prediction.

What You Will Do
• Design and develop data platform and data pipelines for reporting, dashboarding and
machine learning models. These pipelines would productionize machine learning models
and integrate with agent review tools.
• Meet the data completeness, correctness and freshness requirements.
• Evaluate and identify the data store and data streaming technology choices.
• Lead the design of the logical model and implement the physical model to support
business needs. Come up with logical and physical database design across platforms (MPP,
MR, Hive/PIG) which are optimal physical designs for different use cases (structured/semi
structured). Envision & implement the optimal data modelling, physical design,
performance optimization technique/approach required for the problem.
• Support your colleagues by reviewing code and designs.
• Diagnose and solve issues in our existing data pipelines and envision and build their
successors.

Qualifications & Experience relevant for the role

• A bachelor's degree in Computer Science or related field with 6 to 9 years of technology
experience.
• Knowledge of Relational and NoSQL data stores, stream processing and micro-batching to
make technology & design choices.
• Strong experience in System Integration, Application Development, ETL, Data-Platform
projects. Talented across technologies used in the enterprise space.
• Software development experience using:
• Expertise in relational and dimensional modelling
• Exposure across all SDLC processes
• Experience in cloud architecture (AWS)
• Proven track record in keeping existing technical skills and developing new ones, so that
you can make strong contributions to deep architecture discussions around systems and
applications in the cloud ( AWS).

• Characteristics of a forward thinker and self-starter that flourishes with new challenges
and adapts quickly to learning new knowledge
• Ability to work with cross-functional teams of consulting professionals across multiple
projects.
• Knack for helping an organization to understand application architectures and integration
approaches, to architect advanced cloud-based solutions, and to help launch the build-out
of those systems
• Passion for educating, training, designing, and building end-to-end systems.

at Ushur Technologies Pvt Ltd

1 video
2 recruiters
Posted by Priyanka N
Bengaluru (Bangalore)
6 - 12 yrs
Best in industry
MongoDB
Spark
Hadoop
Big Data
Data engineering
+5 more
What You'll Do:
● Our Infrastructure team is looking for an excellent Big Data Engineer to join a core group that
designs the industry’s leading Micro-Engagement Platform. This role involves design and
implementation of architectures and frameworks of big data for industry’s leading intelligent
workflow automation platform. As a specialist in Ushur Engineering team, your responsibilities will
be to:
● Use your in-depth understanding to architect and optimize databases and data ingestion pipelines
● Develop HA strategies, including replica sets and sharding, for highly available clusters
● Recommend and implement solutions to improve performance, resource consumption, and
resiliency
● On an ongoing basis, identify bottlenecks in databases in development and production
environments and propose solutions
● Help DevOps team with your deep knowledge in the area of database performance, scaling,
tuning, migration & version upgrades
● Provide verifiable technical solutions to support operations at scale and with high availability
● Recommend appropriate data processing toolset and big data ecosystems to adopt
● Design and scale databases and pipelines across multiple physical locations on cloud
● Conduct Root-cause analysis of data issues
● Be self-driven, constantly research and suggest latest technologies

The experience you need:
● Engineering degree in Computer Science or related field
● 10+ years of experience working with databases, most of which should have been around
NoSql technologies
● Expertise in implementing and maintaining distributed, Big data pipelines and ETL
processes
● Solid experience in one of the following cloud-native data platforms (AWS Redshift/ Google
BigQuery/ SnowFlake)
● Exposure to real time processing techniques like Apache Kafka and CDC tools
(Debezium, Qlik Replicate)
● Strong experience in Linux Operating System
● Solid knowledge of database concepts, MongoDB, SQL, and NoSql internals
● Experience with backup and recovery for production and non-production environments
● Experience in security principles and its implementation
● Exceptionally passionate about always keeping the product quality bar at an extremely
high level
Nice-to-haves
● Proficient with one or more of Python/Node.Js/Java/similar languages

Why you want to Work with Us:
● Great Company Culture. We pride ourselves on having a values-based culture that
is welcoming, intentional, and respectful. Our internal NPS of over 65 speaks for
itself - employees recommend Ushur as a great place to work!
● Bring your whole self to work. We are focused on building a diverse culture, with
innovative ideas where you and your ideas are valued. We are a start-up and know
that every person has a significant impact!
● Rest and Relaxation. 13 paid leaves, wellness Friday offs (a day off to care
for yourself, every last Friday of the month), 12 paid sick leaves, and more!
● Health Benefits. Preventive health checkups, Medical Insurance covering the
dependents, wellness sessions, and health talks at the office
● Keep learning. One of our core values is Growth Mindset - we believe in lifelong
learning. Certification courses are reimbursed. Ushur Community offers wide
resources for our employees to learn and grow.
● Flexible Work. In-office or hybrid working model, depending on position and
location. We seek to create an environment for all our employees where they can
thrive in both their profession and personal life.

at PayU

1 video
6 recruiters
Posted by Vishakha Sonde
Remote, Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹20L / yr
Python
ETL
Data engineering
Informatica
SQL
+2 more

Role: Data Engineer  
Company: PayU

Location: Bangalore/ Mumbai

Experience : 2-5 yrs


About Company:

PayU is the payments and fintech business of Prosus, a global consumer internet group and one of the largest technology investors in the world. Operating and investing globally in markets with long-term growth potential, Prosus builds leading consumer internet companies that empower people and enrich communities.

The leading online payment service provider in 36 countries, PayU is dedicated to creating a fast, simple and efficient payment process for merchants and buyers. Focused on empowering people through financial services and creating a world without financial borders where everyone can prosper, PayU is one of the biggest investors in the fintech space globally, with investments totalling $700 million to date. PayU also specializes in credit products and services for emerging markets across the globe. We are dedicated to removing risks to merchants, allowing consumers to use credit in ways that suit them and enabling a greater number of global citizens to access credit services.

Our local operations in Asia, Central and Eastern Europe, Latin America, the Middle East, Africa and South East Asia enable us to combine the expertise of high growth companies with our own unique local knowledge and technology to ensure that our customers have access to the best financial services.

India is the biggest market for PayU globally and the company has already invested $400 million in this region in the last 4 years. PayU, in its next phase of growth, is developing a full regional fintech ecosystem providing multiple digital financial services in one integrated experience. We are going to do this through 3 mechanisms: build; co-build/partner; and select strategic investments.

PayU supports over 350,000+ merchants and millions of consumers making payments online with over 250 payment methods and 1,800+ payment specialists. The markets in which PayU operates represent a potential consumer base of nearly 2.3 billion people and a huge growth potential for merchants. 

Job responsibilities:

  • Design infrastructure for data, especially for but not limited to consumption in machine learning applications 
  • Define database architecture needed to combine and link data, and ensure integrity across different sources 
  • Ensure performance of data systems for machine learning, from customer-facing web and mobile applications using cutting-edge open-source frameworks, to highly available RESTful services, to back-end Java-based systems
  • Work with large, fast, complex data sets to solve difficult, non-routine analysis problems, applying advanced data handling techniques if needed 
  • Build data pipelines; this includes implementing, testing, and maintaining infrastructural components related to the data engineering stack.
  • Work closely with Data Engineers, ML Engineers and SREs to gather data engineering requirements to prototype, develop, validate and deploy data science and machine learning solutions

Requirements to be successful in this role: 

  • Strong knowledge and experience in Python, Pandas, Data wrangling, ETL processes, statistics, data visualisation, Data Modelling and Informatica.
  • Strong experience with scalable compute solutions such as in Kafka, Snowflake
  • Strong experience with workflow management libraries and tools such as Airflow, AWS Step Functions etc. 
  • Strong experience with data engineering practices (i.e. data ingestion pipelines and ETL) 
  • A good understanding of machine learning methods, algorithms, pipelines, testing practices and frameworks 
  • (Preferred) MEng/MSc/PhD degree in computer science, engineering, mathematics, physics, or equivalent (preference: DS/AI)
  • Experience with designing and implementing tools that support sharing of data, code, practices across organizations at scale 

at Gormalone LLP

3 recruiters
Posted by Dhwani Rambhia
Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹20L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Data Analytics
+18 more

DATA SCIENTIST-MACHINE LEARNING                           

GormalOne LLP. Mumbai IN

 

Job Description

GormalOne is a social impact Agri tech enterprise focused on farmer-centric projects. Our vision is to make farming highly profitable for the smallest farmer, thereby ensuring India's “Nutrition security”. Our mission is driven by the use of advanced technology. Our technology will be highly user-friendly, for the majority of farmers, who are digitally naive. We are looking for people, who are keen to use their skills to transform farmers' lives. You will join a highly energized and competent team that is working on advanced global technologies such as OCR, facial recognition, and AI-led disease prediction amongst others.

 

GormalOne is looking for a machine learning engineer to join us. This collaborative yet dynamic role is suited for candidates who enjoy the challenge of building, testing, and deploying end-to-end ML pipelines and incorporating ML Ops best practices across different technology stacks supporting a variety of use cases. We seek candidates who are curious not only about furthering their own knowledge of ML Ops best practices through hands-on experience but can simultaneously help uplift the knowledge of their colleagues.

 

Location: Bangalore

 

Roles & Responsibilities

  • Individual contributor
  • Developing and maintaining an end-to-end data science project
  • Deploying scalable applications on different platforms
  • Ability to analyze and enhance the efficiency of existing products

 

What are we looking for?

  • 3 to 5 Years of experience as a Data Scientist
  • Skilled in Data Analysis, EDA, Model Building, and Analysis.
  • Basic coding skills in Python
  • Decent knowledge of Statistics
  • Creating pipelines for ETL and ML models.
  • Experience in the operationalization of ML models
  • Good exposure to Deep Learning, ANN, DNN, CNN, RNN, and LSTM.
  • Hands-on experience in Keras, PyTorch or Tensorflow

 

 

Basic Qualifications

  • Tech/BE in Computer Science or Information Technology
  • Certification in AI, ML, or Data Science is preferred.
  • Master/Ph.D. in a relevant field is preferred.

 

 

Preferred Requirements

  • Exp in tools and packages like Tensorflow, MLFlow, Airflow
  • Exp in object detection techniques like YOLO
  • Exposure to cloud technologies
  • Operationalization of ML models
  • Good understanding and exposure to MLOps

 

 

Kindly note: Salary shall be commensurate with qualifications and experience

 

 

 

 

Posted by Vineeta Bajaj
Bengaluru (Bangalore), Mumbai
5 - 8 yrs
₹5L - ₹15L / yr
ETL
Informatica
Data Warehouse (DWH)
SQL
Python
+7 more

The Nitty-Gritties

Location: Bengaluru/Mumbai

About the Role:

Freight Tiger is growing exponentially, and technology is at the centre of it. Our Engineers love solving complex industry problems by building modular and scalable solutions using cutting-edge technology. Your peers will be an exceptional group of Software Engineers, Quality Assurance Engineers, DevOps Engineers, and Infrastructure and Solution Architects.

This role is responsible for developing data pipelines and data engineering components to support strategic initiatives and ongoing business processes. This role works with leads, analysts, and data scientists to understand requirements, develop technical solutions, and ensure the reliability and performance of the data engineering solutions.

This role provides an opportunity to directly impact business outcomes for sales, underwriting, claims and operations functions across multiple use cases by providing them data for their analytical modelling needs.

Key Responsibilities

  • Create and maintain a data pipeline.
  • Build and deploy ETL infrastructure for optimal data delivery.
  • Work with various product, design and executive teams to troubleshoot data-related issues.
  • Create tools for data analysts and scientists to help them build and optimise the product.
  • Implement systems and processes for data access controls and guarantees.
  • Distil the knowledge from experts in the field outside the org and optimise internal data systems.
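The pipeline responsibilities above all share the same extract, transform, load shape. The following is a minimal sketch of that shape using in-memory stand-ins for real sources and warehouses; the record fields and the derived `cost_per_km` metric are illustrative assumptions, not Freight Tiger's actual stack.

```python
# Minimal sketch of the extract -> transform -> load shape of a data pipeline.
# Sources and the warehouse are in-memory stand-ins; field names are hypothetical.

def extract(source):
    """Pull raw records from a source (here, just an iterable)."""
    yield from source

def transform(records):
    """Clean and enrich records: drop incomplete rows, derive a field."""
    for r in records:
        if r.get("distance_km") is None:
            continue                          # completeness filter
        r["cost_per_km"] = round(r["freight_cost"] / r["distance_km"], 2)
        yield r

def load(records, warehouse):
    """Append transformed records to the target store (a list stands in)."""
    warehouse.extend(records)
    return len(warehouse)

warehouse = []
raw = [{"freight_cost": 500.0, "distance_km": 100.0},
       {"freight_cost": 200.0, "distance_km": None}]
load(transform(extract(raw)), warehouse)
print(warehouse)  # [{'freight_cost': 500.0, 'distance_km': 100.0, 'cost_per_km': 5.0}]
```

Production versions of each stage would be backed by orchestration (e.g. Airflow, as listed below in the preferred skills) and real connectors, but the staged, composable structure is the same.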




Preferred Qualifications/Skills

  • Should have 5+ years of relevant experience.
  • Strong analytical skills.
  • Degree in Computer Science, Statistics, Informatics, Information Systems.
  • Strong project management and organisational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • SQL guru with hands-on experience on various databases.
  • NoSQL databases like Cassandra and MongoDB.
  • Experience with Snowflake and Redshift.
  • Experience with tools like Airflow and Hevo.
  • Experience with Hadoop, Spark, Kafka, and Flink.
  • Programming experience in Python, Java, and Scala.

at InnovAccer

3 recruiters
Posted by Jyoti Kaushik
Noida, Bengaluru (Bangalore), Pune, Hyderabad
4 - 7 yrs
₹4L - ₹16L / yr
ETL
SQL
Data Warehouse (DWH)
Informatica
Datawarehousing
+2 more

We are looking for a Senior Data Engineer to join the Customer Innovation team, who will be responsible for acquiring, transforming, and integrating customer data onto our Data Activation Platform from customers’ clinical, claims, and other data sources. You will work closely with customers to build data and analytics solutions to support their business needs, and be the engine that powers the partnership that we build with them by delivering high-fidelity data assets.

In this role, you will work closely with our Product Managers, Data Scientists, and Software Engineers to build the solution architecture that will support customer objectives. You'll work with some of the brightest minds in the industry, work with one of the richest healthcare data sets in the world, use cutting-edge technology, and see your efforts affect products and people on a regular basis. The ideal candidate is someone that

  • Has healthcare experience and is passionate about helping heal people,
  • Loves working with data,
  • Has an obsessive focus on data quality,
  • Is comfortable with ambiguity and making decisions based on available data and reasonable assumptions,
  • Has strong data interrogation and analysis skills,
  • Defaults to written communication and delivers clean documentation, and,
  • Enjoys working with customers and problem solving for them.

A day in the life at Innovaccer:

  • Define the end-to-end solution architecture for projects by mapping customers’ business and technical requirements against the suite of Innovaccer products and Solutions.
  • Measure and communicate impact to our customers.
  • Enable customers to activate data themselves using SQL, BI tools, or APIs to solve questions they have at speed.

What You Need:

  • 4+ years of experience in a Data Engineering role, and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
  • 4+ years of experience working with relational databases like Snowflake, Redshift, or Postgres.
  • Intermediate to advanced level SQL programming skills.
  • Data Analytics and Visualization (using tools like PowerBI)
  • The ability to engage with both the business and technical teams of a client - to document and explain technical problems or concepts in a clear and concise way.
  • Ability to work in a fast-paced and agile environment.
  • Easily adapt and learn new things whether it’s a new library, framework, process, or visual design concept.

What we offer:

  • Industry certifications: We want you to be a subject matter expert in what you do. So, whether it’s our product or our domain, we’ll help you dive in and get certified.
  • Quarterly rewards and recognition programs: We foster learning and encourage people to take risks. We recognize and reward your hard work.
  • Health benefits: We cover health insurance for you and your loved ones.
  • Sabbatical policy: We encourage people to take time off and rejuvenate, learn new skills, and pursue their interests so they can generate new ideas with Innovaccer.
  • Pet-friendly office and open floor plan: No boring cubicles.

at Wissen Technology

4 recruiters
Posted by Lokesh Manikappa
Bengaluru (Bangalore)
5 - 12 yrs
₹15L - ₹35L / yr
ETL
Informatica
Data Warehouse (DWH)
Data modeling
Spark
+5 more

Job Description

The applicant must have a minimum of 5 years of hands-on IT experience, working on a full software lifecycle in Agile mode.

Good to have experience in data modeling and/or systems architecture.
Responsibilities will include technical analysis, design, development, and enhancements.

You will participate in all/most of the following activities:
- Working with business analysts and other project leads to understand requirements.
- Modeling and implementing database schemas in DB2 UDB or other relational databases.
- Designing, developing and maintaining data processing using Python, DB2, Greenplum, Autosys and other technologies

 

Skills /Expertise Required :

Work experience in developing large-volume databases (DB2/Greenplum/Oracle/Sybase).

Good experience in writing stored procedures, integration of database processing, tuning and optimizing database queries.

Strong knowledge of table partitions, high-performance loading and data processing.
Good to have hands-on experience working with Perl or Python.
Hands on development using Spark / KDB / Greenplum platform will be a strong plus.
Designing, developing, maintaining and supporting Data Extract, Transform and Load (ETL) software using Informatica, Shell Scripts, DB2 UDB and Autosys.
Coming up with system architecture/re-design proposals for greater efficiency and ease of maintenance and developing software to turn proposals into implementations.

Need to work with business analysts and other project leads to understand requirements.
Strong collaboration and communication skills


Product based company

Agency job
via Zyvka Global Services by Ridhima Sharma
Bengaluru (Bangalore)
3 - 12 yrs
₹5L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+6 more

Responsibilities:

  • Should act as a technical resource for the Data Science team and be involved in creating and implementing current and future Analytics projects like data lake design, data warehouse design, etc.
  • Analysis and design of ETL solutions to store/fetch data from multiple systems like Google Analytics, CleverTap, CRM systems etc.
  • Developing and maintaining data pipelines for real time analytics as well as batch analytics use cases.
  • Collaborate with data scientists and actively work in the feature engineering and data preparation phase of model building
  • Collaborate with product development and dev ops teams in implementing the data collection and aggregation solutions
  • Ensure quality and consistency of the data in Data warehouse and follow best data governance practices
  • Analyse large amounts of information to discover trends and patterns
  • Mine and analyse data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.

Requirements

  • Bachelor’s or Masters in a highly numerate discipline such as Engineering, Science and Economics
  • 2-6 years of proven experience working as a Data Engineer preferably in ecommerce/web based or consumer technologies company
  • Hands on experience of working with different big data tools like Hadoop, Spark , Flink, Kafka and so on
  • Good understanding of AWS ecosystem for big data analytics
  • Hands on experience in creating data pipelines either using tools or by independently writing scripts
  • Hands on experience in scripting languages like Python, Scala, Unix Shell scripting and so on
  • Strong problem solving skills with an emphasis on product development.
  • Experience using business intelligence tools e.g. Tableau, Power BI would be an added advantage (not mandatory)

at Velocity.in

2 recruiters
Posted by Newali Hazarika
Bengaluru (Bangalore)
4 - 9 yrs
₹15L - ₹35L / yr
ETL
Informatica
Data Warehouse (DWH)
Data engineering
Oracle
+7 more

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

Technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda etc.

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume the data from multiple sources

  • Writing a data transformation layer using DBT to transform millions of rows of data for the data warehouse.

  • Implement Data warehouse entities with common re-usable data model designs with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)

 

What To Bring

  • 5+ years of software development experience, a startup experience is a plus.

  • Past experience of working with Airflow and DBT is preferred

  • 5+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experienced with the formulation of ideas; building proof-of-concept (POC) and converting them to production-ready projects

  • Experience building and deploying applications on on-premise and AWS or Google Cloud cloud-based infrastructure

  • Basic understanding of Kubernetes & docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.

 

Read more

at SteelEye

1 video
3 recruiters
Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
1 - 8 yrs
₹10L - ₹40L / yr
Python
ETL
Jenkins
CI/CD
pandas
+6 more
Roles & Responsibilities
Expectations of the role
This role reports to the Technical Lead (Support). You will be expected to resolve bugs in the platform that are identified by customers and internal teams. This role will progress towards SDE-2 in 12-15 months, where the developer will work on solving complex problems around scale and building out new features.
 
What will you do?
  • Fix issues with plugins for our Python-based ETL pipelines
  • Help with automation of standard workflow
  • Deliver Python microservices for provisioning and managing cloud infrastructure
  • Responsible for any refactoring of code
  • Effectively manage challenges associated with handling large volumes of data working to tight deadlines
  • Manage expectations with internal stakeholders and context-switch in a fast-paced environment
  • Thrive in an environment that uses AWS and Elasticsearch extensively
  • Keep abreast of technology and contribute to the engineering strategy
  • Champion best development practices and provide mentorship to others
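The plugin work described above can be pictured as a pipeline that applies small, independent transform functions to each record. A rough sketch of that shape (the plugin signature and names are hypothetical, not SteelEye's actual API):

```python
import csv
import io

def run_pipeline(source_csv, plugins):
    """Tiny plugin-style ETL: each plugin is a function applied to every record."""
    records = list(csv.DictReader(io.StringIO(source_csv)))  # Extract
    for plugin in plugins:                                   # Transform via plugins
        records = [plugin(r) for r in records]
    return records                                           # Load is left to the caller

# Example plugins (illustrative only)
def normalise_symbol(rec):
    rec["symbol"] = rec["symbol"].strip().upper()
    return rec

def cast_quantity(rec):
    rec["quantity"] = int(rec["quantity"])
    return rec
```

Because each plugin is isolated, a bug fix or a new transformation ships without touching the rest of the pipeline.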
What are we looking for?
  • First and foremost you are a Python developer, experienced with the Python Data stack
  • You love and care about data
  • Your code is an artistic manifest reflecting how elegant you are in what you do
  • You feel sparks of joy when a new abstraction or pattern arises from your code
  • You follow the DRY (Don’t Repeat Yourself) and KISS (Keep It Short and Simple) principles
  • You are a continuous learner
  • You have a natural willingness to automate tasks
  • You have critical thinking and an eye for detail
  • Excellent ability and experience of working to tight deadlines
  • Sharp analytical and problem-solving skills
  • Strong sense of ownership and accountability for your work and delivery
  • Excellent written and oral communication skills
  • Mature collaboration and mentoring abilities
  • We are keen to know your digital footprint (community talks, blog posts, certifications, courses you have taken or plan to take, your personal projects, as well as any contributions to open-source communities)
Nice to have:
  • Delivering complex software, ideally in a FinTech setting
  • Experience with CI/CD tools such as Jenkins, CircleCI
  • Experience with code versioning (git / mercurial / subversion)
Read more

at Impetus Technologies

1 recruiter
DP
Posted by Gangadhar T.M
Bengaluru (Bangalore), Hyderabad, Pune, Indore, Gurugram, Noida
10 - 17 yrs
₹25L - ₹50L / yr
Product Management
Big Data
Data Warehouse (DWH)
ETL
Hi All, 
Greetings! We are looking for a Product Manager for our data modernization product. The ideal candidate has strong knowledge of Big Data/DWH, along with strong stakeholder management and presentation skills.
Read more

Top Multinational Fintech Startup

Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
4 - 7 yrs
₹20L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more
Roles & Responsibilities
What will you do?
  • Deliver plugins for our Python-based ETL pipelines
  • Deliver Python microservices for provisioning and managing cloud infrastructure
  • Implement algorithms to analyse large data sets
  • Draft design documents that translate requirements into code
  • Effectively manage challenges associated with handling large volumes of data working to tight deadlines
  • Manage expectations with internal stakeholders and context-switch in a fast-paced environment
  • Thrive in an environment that uses AWS and Elasticsearch extensively
  • Keep abreast of technology and contribute to the engineering strategy
  • Champion best development practices and provide mentorship to others
What are we looking for?
  • First and foremost you are a Python developer, experienced with the Python Data stack
  • You love and care about data
  • Your code is an artistic manifest reflecting how elegant you are in what you do
  • You feel sparks of joy when a new abstraction or pattern arises from your code
  • You follow the DRY (Don’t Repeat Yourself) and KISS (Keep It Short and Simple) principles
  • You are a continuous learner
  • You have a natural willingness to automate tasks
  • You have critical thinking and an eye for detail
  • Excellent ability and experience of working to tight deadlines
  • Sharp analytical and problem-solving skills
  • Strong sense of ownership and accountability for your work and delivery
  • Excellent written and oral communication skills
  • Mature collaboration and mentoring abilities
  • We are keen to know your digital footprint (community talks, blog posts, certifications, courses you have taken or plan to take, your personal projects, as well as any contributions to open-source communities)
Nice to have:
  • Delivering complex software, ideally in a FinTech setting
  • Experience with CI/CD tools such as Jenkins, CircleCI
  • Experience with code versioning (git / mercurial / subversion)
Read more

PRODUCT ENGINEERING BASED MNC

Agency job
via Exploro Solutions by ramya ramchandran
Bengaluru (Bangalore)
5 - 8 yrs
₹12L - ₹25L / yr
Scala
Spark
Apache Spark
ETL
SQL

  • Strong experience with SQL and relational databases
  • Good programming experience with Scala & Spark
  • Good experience with ETL batch data pipeline development and migration/upgrading
  • Python – good to have
  • AWS – good to have
  • Knowledgeable in the areas of Big Data/Hadoop/S3/Hive. Nice to have experience with ETL frameworks (e.g. Airflow, Flume, Oozie)
  • Ability to work independently, take ownership, and strong troubleshooting/debugging skills
  • Good communication and collaboration skills
Read more

software and consultancy company

Agency job
via Exploro Solutions by ramya ramchandran
Bengaluru (Bangalore), Chennai
6 - 8 yrs
₹12L - ₹30L / yr
Amazon Web Services (AWS)
ETL
Informatica
Data Warehouse (DWH)
SQL

Primary Skills

6 to 8 years of relevant work experience with ETL tools

Good knowledge of AWS cloud databases such as Aurora DB, and the surrounding ecosystem and tools (AWS DMS)

Experience migrating databases to the AWS Cloud is mandatory

Sound knowledge of SQL and procedural languages

Solid experience writing complex SQL queries and optimizing SQL query performance

Knowledge of data ingestion: one-off feeds, change data capture, incremental batch
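Of the ingestion styles listed (one-off feed, change data capture, incremental batch), incremental batch is commonly built around a high-watermark column: each run picks up only rows newer than the last successful load, then advances the watermark. A minimal sketch, with invented field names:

```python
def incremental_batch(source_rows, last_watermark):
    """Watermark-based incremental ingestion: select only rows newer than the
    last successfully loaded timestamp, then advance the watermark."""
    new_rows = [r for r in source_rows if r["updated_at"] > last_watermark]
    new_watermark = max((r["updated_at"] for r in new_rows), default=last_watermark)
    return new_rows, new_watermark
```

Tools like AWS DMS implement the same idea (plus log-based CDC) at the database level; the watermark here just makes the mechanism explicit.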

 

Additional Skills :

Experience in Unix/Linux systems and writing shell scripts would be nice to have

Java knowledge would be an added advantage

Knowledge in Spark Python for building ETL pipelines on cloud would be preferable

Read more

at SteelEye

1 video
3 recruiters
DP
Posted by akanksha rajput
Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹30L / yr
ETL
Informatica
Data Warehouse (DWH)
Python
pandas

About us

SteelEye is the only regulatory compliance technology and data analytics firm that offers transaction reporting, record keeping, trade reconstruction, best execution and data insight in one comprehensive solution. The firm’s scalable secure data storage platform offers encryption at rest and in flight and best-in-class analytics to help financial firms meet regulatory obligations and gain competitive advantage.

The company has a highly experienced management team and a strong board, who have decades of technology and management experience and worked in senior positions at many leading international financial businesses. We are a young company that shares a commitment to learning, being smart, working hard and being honest in all we do and striving to do that better each day. We value all our colleagues equally and everyone should feel able to speak up, propose an idea, point out a mistake and feel safe, happy and be themselves at work.

Being part of a start-up can be equally exciting as it is challenging. You will be part of the SteelEye team not just because of your talent but also because of your entrepreneurial flare which we thrive on at SteelEye. This means we want you to be curious, contribute, ask questions and share ideas. We encourage you to get involved in helping shape our business.

What will you do?

  • Deliver plugins for our Python-based ETL pipelines.
  • Deliver Python services for provisioning and managing cloud infrastructure.
  • Design, Develop, Unit Test, and Support code in production.
  • Deal with challenges associated with large volumes of data.
  • Manage expectations with internal stakeholders and context switch between multiple deliverables as priorities change.
  • Thrive in an environment that uses AWS and Elasticsearch extensively.
  • Keep abreast of technology and contribute to the evolution of the product.
  • Champion best practices and provide mentorship.

What we're looking for

  • Python 3.
  • Python libraries used for data (such as pandas, numpy).
  • AWS.
  • Elasticsearch.
  • Performance tuning.
  • Object Oriented Design and Modelling.
  • Delivering complex software, ideally in a FinTech setting.
  • CI/CD tools.
  • Knowledge of design patterns.
  • Sharp analytical and problem-solving skills.
  • Strong sense of ownership.
  • Demonstrable desire to learn and grow.
  • Excellent written and oral communication skills.
  • Mature collaboration and mentoring abilities.

What will you get?

  • This is an individual contributor role. If you love to code, solve complex problems, and build great products without worrying about anything else, this is the role for you.
  • You will have the chance to learn from the best in the business who have worked across the world and are technology geeks.
  • A company that always appreciates ownership and initiative. If you are someone who is full of ideas, this role is for you.
Read more
Bengaluru (Bangalore)
1 - 4 yrs
₹7L - ₹12L / yr
SQL Server Integration Services (SSIS)
SQL
ETL
Informatica
Data Warehouse (DWH)
+4 more

About Company:

Working with a multitude of clients populating the FTSE and Fortune 500s, Audit Partnership is a people focused organization with a strong belief in our employees. We hire the best people to provide the best services to our clients.

APL offers profit recovery services to organizations of all sizes across a number of sectors. APL was born out of a desire to offer an alternative to the stagnant service provision on offer in the profit recovery industry.

Every year we recover millions of pounds for our clients and work closely with them, sharing our audit findings to minimize future losses. Our dedicated and highly experienced audit teams utilize progressive & dynamic financial service solutions & industry-leading technology to achieve maximum success.

We provide dynamic work environments focused on delivering data-driven solutions at a rapidly increased pace over traditional development. Be a part of our passionate and motivated team who are excited to use the latest in software technologies within financial services.

Headquartered in the UK, we have expanded from a small team in 2002 to a market leading organization serving clients across the globe while keeping our clients at the heart of all decisions we make.


Job description:

We are looking for a high-potential, enthusiastic SQL Data Engineer with a strong desire to build a career in data analysis, database design and application solutions. Reporting directly to our UK based Technology team, you will provide support to our global operation in the delivery of data analysis, conversion, and application development to our core audit functions.

Duties will include assisting with data loads, using T-SQL to analyse data, front-end code changes, data housekeeping, data administration, and supporting the Data Services team as a whole. Your contribution will grow in line with your experience and skills, becoming increasingly involved in the core service functions and client delivery. You will be a self-starter with a deep commitment to the highest standards of quality and customer service. We are offering a fantastic career opportunity, working for a leading international financial services organisation, serving the world’s largest organisations.

 

What we are looking for:

  • 1-2 years of previous experience in a similar role
  • Data analysis and conversion skills using Microsoft SQL Server is essential
  • An understanding of relational database design and build
  • Schema design, normalising data, indexing, query performance analysis
  • Ability to analyse complex data to identify patterns and detect anomalies
  • Assisting with ETL design and implementation projects
  • Knowledge or experience in one or more of the key technologies below would be preferable:
    • Microsoft SQL Server (SQL Server Management Studio, Stored Procedure writing etc)
    • T-SQL
    • Programming languages (C#, VB, Python etc)
    • Use of Python to manipulate and import data
    • Experience of ETL/automation advantageous but not essential (SSIS/Prefect/Azure)
  • A self-starter who can drive projects with minimal guidance
  • Meeting stakeholders to agree system requirements
  • Someone who is enthusiastic and eager to learn
  • Very good command of English and excellent communication skills
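The schema design, indexing, and query performance analysis skills listed above can be sketched with Python's built-in sqlite3; the same scan-versus-index reasoning applies to SQL Server execution plans (table and column names here are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE claims (id INTEGER PRIMARY KEY, supplier TEXT, amount REAL)")
con.executemany("INSERT INTO claims (supplier, amount) VALUES (?, ?)",
                [(f"S{i % 50}", i * 1.5) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN reveals whether a query scans the whole table or uses an index
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT * FROM claims WHERE supplier = 'S7'")   # full table scan
con.execute("CREATE INDEX idx_claims_supplier ON claims (supplier)")
after = plan("SELECT * FROM claims WHERE supplier = 'S7'")    # index lookup
```

Reading the plan before and after adding an index is the quickest way to confirm a performance fix actually changed the access path.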

 

Perks & Benefits:

  • A fantastic work life balance
  • Competitive compensation and benefits
  • Exposure to working with Fortune 500 organizations
  • Expert guidance and nurture from global leaders
  • Opportunities for career and personal advancement with our continued global growth strategy
  • Industry leading training programs
  • A working environment that is exciting, fun and engaging

 

Read more

at Amagi Media Labs

3 recruiters
DP
Posted by Rajesh C
Bengaluru (Bangalore)
5 - 7 yrs
₹10L - ₹20L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+5 more

Data Analyst

Job Description

 

Summary

Are you passionate about handling large & complex data problems, want to make an impact and have the desire to work on ground-breaking big data technologies? Then we are looking for you.

 

At Amagi, great ideas have a way of becoming great products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. Would you like to work in a fast-paced environment where your technical abilities will be challenged on a day-to-day basis? If so, Amagi’s Data Engineering and Business Intelligence team is looking for passionate, detail-oriented, technically savvy, energetic team members who like to think outside the box.

 

Amagi’s Data warehouse team deals with petabytes of data catering to a wide variety of real-time, near real-time and batch analytical solutions. These solutions are an integral part of business functions such as Sales/Revenue, Operations, Finance, Marketing and Engineering, enabling critical business decisions. Designing, developing, scaling and running these big data technologies using native technologies of AWS and GCP are a core part of our daily job.

 

Key Qualifications

  • Experience in building highly cost-optimised data analytics solutions
  • Experience in designing and building dimensional data models to improve accessibility, efficiency and quality of data
  • Experience (hands-on) in building high-quality ETL applications, data pipelines and analytics solutions ensuring data privacy and regulatory compliance
  • Experience in working with AWS or GCP
  • Experience with relational and NoSQL databases
  • Exposure to full-stack web development (preferably Python)
  • Expertise with data visualisation systems such as Tableau and QuickSight
  • Proficiency in writing advanced SQL queries, with expertise in performance tuning and handling large data volumes
  • Familiarity with ML/AI technologies is a plus
  • Demonstrated strong understanding of development processes and agile methodologies
  • Strong analytical and communication skills. Should be self-driven, highly motivated, and able to learn quickly
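Dimensional modelling, as mentioned in the qualifications, separates measures (facts) from descriptive attributes (dimensions) so analysts can slice aggregates cheaply. A minimal star-schema sketch using sqlite3 as a stand-in warehouse (tables and data are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Dimension table: descriptive attributes, one row per channel
con.execute("CREATE TABLE dim_channel (channel_id INTEGER PRIMARY KEY, channel_name TEXT, region TEXT)")
# Fact table: measures at the finest grain, keyed to the dimension
con.execute("CREATE TABLE fct_playout (channel_id INTEGER, hours REAL)")
con.executemany("INSERT INTO dim_channel VALUES (?, ?, ?)",
                [(1, "News24", "APAC"), (2, "SportsHD", "EMEA")])
con.executemany("INSERT INTO fct_playout VALUES (?, ?)",
                [(1, 10.0), (1, 14.0), (2, 6.5)])
# Typical dimensional query: aggregate the facts, slice by dimension attributes
rows = con.execute("""
    SELECT d.region, SUM(f.hours)
    FROM fct_playout f JOIN dim_channel d USING (channel_id)
    GROUP BY d.region ORDER BY d.region
""").fetchall()
```

The same shape scales from sqlite3 here to Redshift or BigQuery in production; only the engine changes, not the model.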

 

Description

Data Analytics is at the core of our work, and you will have the opportunity to:

 

  • Design data-warehousing solutions on Amazon S3 with Athena, Redshift, GCP Bigtable, etc.
  • Lead quick prototypes by integrating data from multiple sources
  • Do advanced business analytics through ad-hoc SQL queries
  • Work on Sales and Finance reporting solutions using Tableau, HTML5, and React applications

 

We build amazing experiences and create depth in knowledge for our internal teams and our leadership. Our team is a friendly bunch of people that help each other grow and have a passion for technology, R&D, modern tools and data science.

 

Our work relies on a deep understanding of the company's needs and an ability to go through vast amounts of internal data such as sales, KPIs, forecasts, inventory, etc. One of the key expectations of this role is data analytics, building data lakes, and end-to-end reporting solutions. If you have a passion for cost-optimised analytics and data engineering and are eager to learn advanced data analytics at a large scale, this might just be the job for you.

 

Education & Experience

A bachelor’s/master’s degree in Computer Science with 5 to 7 years of experience; previous data engineering experience is a plus.

 

Read more
Bengaluru (Bangalore)
4 - 8 yrs
₹9L - ₹14L / yr
Data Warehouse (DWH)
Informatica
ETL
CI/CD
SQL

 

Role: Talend Production Support Consultant

 

Brief Job Description:  

  • Involve in release deployment and monitoring of the ETL pipelines.
  • Closely work with the development team and business team to provide operational support.
  • Candidate should have good knowledge and hands on experience on below tools/technologies:

Talend (Talend Studio, TAC, TMC), SAP BODS, SQL, HIVE & Azure (Azure fundamentals, ADB, ADF)

  • Hands on experience in CI/CD is an added advantage.

As discussed, please provide your LinkedIn profile URL and a valid ID proof.

 

Please also confirm that you are willing to relocate to Bangalore when required.

Read more

at Amagi Media Labs

3 recruiters
DP
Posted by Rajesh C
Bengaluru (Bangalore), Chennai
12 - 15 yrs
₹50L - ₹60L / yr
Data Science
Machine Learning (ML)
ETL
Data Warehouse (DWH)
Amazon Web Services (AWS)
+5 more
Job Title: Data Architect
Job Location: Chennai

Job Summary
The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a Data Architecture strategy across various Data Lake platforms. You will help develop reference architecture and roadmaps to build highly available, scalable and distributed data platforms using cloud-based solutions to process high-volume, high-velocity and a wide variety of structured and unstructured data. This role is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will influence how users interact with Condé Nast’s industry-leading journalism.
Primary Responsibilities
The Data Architect is responsible for:
• Demonstrated technology and personal leadership experience in architecting, designing, and building highly scalable solutions and products.
• Enterprise-scale expertise in data management best practices such as data integration, data security, data warehousing, metadata management and data quality.
• Extensive knowledge and experience in architecting modern data integration frameworks and highly scalable distributed systems using open-source and emerging data architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is highly desirable.
• Expert ability to evaluate, prototype and recommend data solutions and vendor technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory databases.
• Experience with DevOps, Continuous Integration and Continuous Delivery technologies is desirable.
• This role requires 15+ years of data solution architecture, design and development delivery experience.
• Solid experience in Agile methodologies (Kanban and SCRUM)
Required Skills
• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational and NoSQL databases on cloud.
• Proven leadership skills; demonstrated ability to mentor, influence and partner with cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL (such as Informatica, DataStage etc.)/ELT and data integration technologies.
• Experience in any one of Object Oriented Programming (Java, Scala, Python) and Spark.
• Creative view of markets and technologies combined with a passion to create the future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions; Data Lake knowledge is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS etc.
• Understanding of Hadoop architecture and Hive SQL
• Knowledge of any one workflow orchestration tool
• Understanding of Agile framework and delivery

Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure to workflow orchestration tools like Airflow is a plus
● Exposure to any one NoSQL database would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a plus
● Understanding of digital web events, ad streams, context models

About Condé Nast

CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social platforms - in other words, a staggering amount of user data. Condé Nast made the right move to invest heavily in understanding this data and formed a whole new Data team entirely dedicated to data processing, engineering, analytics, and visualization. This team helps drive engagement, fuel process innovation, further content enrichment, and increase market revenue. The Data team aimed to create a company culture where data was the common language and facilitate an environment where insights shared in real-time could improve performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team at Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and boost online revenue. We are broadly divided into four groups - Data Intelligence, Data Engineering, Data Science, and Operations (including Product and Marketing Ops, Client Services) - along with Data Strategy and monetization. The teams build capabilities and products to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the extraordinary. We are a media company for the future, with a remarkable past. We are Condé Nast, and It Starts Here.
Read more

With a global provider of Business Process Management.

Agency job
via Jobdost by Mamatha A
Bengaluru (Bangalore), Pune, Delhi, Gurugram, Nashik, Vizag
3 - 5 yrs
₹8L - ₹12L / yr
Oracle
Business Intelligence (BI)
PowerBI
Oracle Warehouse Builder
Informatica
+3 more
Oracle BI developer with 6+ years of experience working on Oracle warehouse design, development and testing
Good knowledge of Informatica ETL, Oracle Analytics Server
Analytical ability to design warehouses as per user requirements, mainly in the Finance and HR domains
Good skills to analyze existing ETL and dashboards, understand the logic, and make enhancements as per requirements
Good verbal and written communication skills
Qualifications
Master's or Bachelor's degree in Engineering/Computer Science/Information Technology
Additional information
Excellent verbal and written communication skills
Read more

Leading Sales Enabler

Agency job
via Qrata by Blessy Fernandes
Bengaluru (Bangalore)
5 - 10 yrs
₹25L - ₹40L / yr
ETL
Spark
Python
Amazon Redshift
5+ years of experience in a Data Engineer role.
Proficiency in Linux.
Must have SQL knowledge and experience working with relational databases, query authoring (SQL), as well as familiarity with databases including MySQL, Mongo, Cassandra, and Athena.
Must have experience with Python/Scala.
Must have experience with Big Data technologies like Apache Spark.
Must have experience with Apache Airflow.
Experience with data pipeline and ETL tools like AWS Glue.
Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
Read more

at Infonex Technologies

1 recruiter
DP
Posted by Vinay Ramesh
Bengaluru (Bangalore)
4 - 7 yrs
₹6L - ₹30L / yr
Informatica
ETL
SQL
Linux/Unix
Oracle
+1 more
  • Experience implementing large-scale ETL processes using Informatica PowerCenter.
  • Design high-level ETL process and data flow from the source system to target databases.
  • Strong experience with Oracle databases and strong SQL.
  • Develop & unit test Informatica ETL processes for optimal performance utilizing best practices.
  • Performance tune Informatica ETL mappings and report queries.
  • Develop database objects like Stored Procedures, Functions, Packages, and Triggers using SQL and PL/SQL.
  • Hands-on Experience in Unix.
  • Experience in Informatica Cloud (IICS).
  • Work with appropriate leads and review high-level ETL design, source to target data mapping document, and be the point of contact for any ETL-related questions.
  • Good understanding of project life cycle, especially tasks within the ETL phase.
  • Ability to work independently and multi-task to meet critical deadlines in a rapidly changing environment.
  • Excellent communication and presentation skills.
  • Experience working effectively in onsite/offshore delivery models.
Read more

Our Client company is into Computer Software. (EC1)

Agency job
via Multi Recruit by Manjunath Multirecruit
Bengaluru (Bangalore)
3 - 8 yrs
₹13L - ₹22L / yr
ETL
IDQ
  • Participate in planning, implementation of solutions, and transformation programs from legacy system to a cloud-based system
  • Work with the team on Analysis, High level and low-level design for solutions using ETL or ELT based solutions and DB services in RDS
  • Work closely with the architect and engineers to design systems that effectively reflect business needs, security requirements, and service level requirements
  • Own deliverables related to design and implementation
  • Own Sprint tasks and drive the team towards the goal while understanding the change and release process defined by the organization.
  • Excellent communication skills, particularly those relating to complex findings and presenting them to ensure audience appeal at various levels of the organization
  • Ability to integrate research and best practices into problem avoidance and continuous improvement
  • Must be able to perform as an effective member in a team-oriented environment, maintain a positive attitude, and achieve desired results while working with minimal supervision


Basic Qualifications:

  • Minimum of 5+ years of technical work experience in the implementation of complex, large scale, enterprise-wide projects including analysis, design, core development, and delivery
  • Minimum of 3+ years of experience with expertise in Informatica ETL, Informatica Power Center, and Informatica Data Quality
  • Experience with Informatica MDM tool is good to have
  • Should be able to understand the scope of the work and ask for clarifications
  • Should have advanced SQL skills. Including complex PL/SQL coding skills
  • Knowledge of Agile is plus
  • Well-versed with SOAP, web services, and REST APIs.
  • Hands-on development experience with Java would be a plus.

 

 

 

Read more

Leading Sales Platform

Agency job
via Qrata by Blessy Fernandes
Bengaluru (Bangalore)
5 - 10 yrs
₹30L - ₹45L / yr
Big Data
ETL
Spark
Data engineering
Data governance
+4 more
Responsibilities:
  • Work with product managers and development leads to create testing strategies
  • Develop and scale an automated data validation framework
  • Build and monitor key metrics of data health across the entire Big Data pipelines
  • Build an early alerting and escalation process to quickly identify and remedy quality issues before anything ever goes ‘live’ in front of the customer
  • Build/refine tools and processes for quick root-cause diagnostics
  • Contribute to the creation of quality assurance standards, policies, and procedures to influence the DQ mind-set across the company

Required skills and experience:
  • Solid experience working in Big Data ETL environments with Spark and Java/Scala/Python
  • Strong experience with AWS cloud technologies (EC2, EMR, S3, Kinesis, etc.)
  • Experience building monitoring/alerting frameworks with tools like New Relic and escalations with Slack/email/dashboard integrations
  • Executive-level communication, prioritization, and team leadership skills
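An automated data validation framework of the kind described usually boils down to running named checks over each batch and blocking the release when any fail. A generic sketch (check names and record fields are illustrative, not tied to any specific platform):

```python
def check_data_health(rows, checks):
    """Run named data-quality checks over a batch and report failure counts,
    so bad data is caught before it ever goes live."""
    failures = {}
    for name, predicate in checks.items():
        bad = [r for r in rows if not predicate(r)]
        if bad:
            failures[name] = len(bad)
    return failures  # an empty dict means the batch is healthy

# Illustrative checks for a clickstream-like feed
CHECKS = {
    "user_id_present": lambda r: bool(r.get("user_id")),
    "non_negative_amount": lambda r: r.get("amount", 0) >= 0,
}
```

Wiring the failure dict into an alerting channel (Slack, email, a dashboard) gives the early escalation path the role calls for.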
Read more

Leading StartUp Focused On Employee Growth

Agency job
via Qrata by Blessy Fernandes
Bengaluru (Bangalore)
2 - 6 yrs
₹25L - ₹45L / yr
Data engineering
Data Analytics
Big Data
Apache Spark
airflow
+8 more
2+ years of experience in a Data Engineer role.
● Proficiency in Linux.
● Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
● Must have SQL knowledge and experience working with relational databases, query authoring (SQL), as well as familiarity with databases including MySQL, Mongo, Cassandra, and Athena.
● Must have experience with Python/Scala.
● Must have experience with Big Data technologies like Apache Spark.
● Must have experience with Apache Airflow.
● Experience with data pipelines and ETL tools like AWS Glue.
Read more

at Velocity.in

2 recruiters
DP
Posted by chinnapareddy S
Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹35L / yr
Data engineering
Data Engineer
Big Data
Big Data Engineer
Python
+10 more

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

Technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc. 

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume the data from multiple sources

  • Writing a data transformation layer using DBT to transform millions of records into data warehouse models.

  • Implement Data warehouse entities with common re-usable data model designs with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)
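The responsibilities above follow a common load-then-transform-then-validate ELT pattern. A minimal, illustrative sketch of that pattern using Python's built-in sqlite3 as a stand-in warehouse (the table names, columns, and quality rule are assumptions, not from the posting):

```python
import sqlite3

# Illustrative raw rows as they might arrive from a source system.
RAW_ORDERS = [
    (1, "2024-01-05", 120.0),
    (2, "2024-01-06", -15.0),   # bad record: negative amount
    (3, "2024-01-06", 89.5),
]

def load_raw(conn):
    """L of ELT: land source data untouched in a raw table."""
    conn.execute("CREATE TABLE raw_orders (id INTEGER, order_date TEXT, amount REAL)")
    conn.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)", RAW_ORDERS)

def transform(conn):
    """T of ELT: build a cleaned model inside the warehouse via SQL."""
    conn.execute("""
        CREATE TABLE fct_orders AS
        SELECT id, order_date, amount
        FROM raw_orders
        WHERE amount >= 0          -- data-quality rule applied in-warehouse
    """)

def quality_check(conn):
    """Fail the pipeline if the cleaned model still violates a rule."""
    bad = conn.execute("SELECT COUNT(*) FROM fct_orders WHERE amount < 0").fetchone()[0]
    assert bad == 0, f"{bad} rows failed the amount check"

conn = sqlite3.connect(":memory:")
load_raw(conn)
transform(conn)
quality_check(conn)
print(conn.execute("SELECT COUNT(*) FROM fct_orders").fetchone()[0])  # 2 clean rows
```

In a real stack, the transform step would live in a DBT model and the quality check in a DBT test; the flow is the same.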

 

What To Bring

  • 3+ years of software development experience; startup experience is a plus.

  • Past experience of working with Airflow and DBT is preferred

  • 2+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experienced with the formulation of ideas; building proof-of-concept (POC) and converting them to production-ready projects

  • Experience building and deploying applications on on-premises and AWS or Google Cloud infrastructure

  • A basic understanding of Kubernetes and Docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.

 

 

 

Read more

world’s fastest-growing consumer internet company

Agency job
via Hunt & Badge Consulting Pvt Ltd by Chandramohan Subramanian
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹35L / yr
Big Data
Data engineering
Big Data Engineering
Data Engineer
ETL
+5 more

Data Engineer JD:

  • Designing, developing, constructing, installing, testing, and maintaining complete data management and processing systems.
  • Building highly scalable, robust, fault-tolerant, & secure user data platform adhering to data protection laws.
  • Taking care of the complete ETL (Extract, Transform & Load) process.
  • Ensuring architecture is planned in such a way that it meets all the business requirements.
  • Exploring new ways of using existing data, to provide more insights out of it.
  • Proposing ways to improve data quality, reliability & efficiency of the whole system.
  • Creating data models to reduce system complexity and hence increase efficiency & reduce cost.
  • Introducing new data management tools & technologies into the existing system to make it more efficient.
  • Setting up monitoring and alarming on data pipeline jobs to detect failures and anomalies
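The monitoring-and-alarming bullet above can be sketched as a thin wrapper around each pipeline job: catch failures, and flag runs whose output volume deviates sharply from history. (The row-count metric and 50% threshold are illustrative assumptions, not from the posting.)

```python
import statistics

def check_anomaly(history, current, threshold=0.5):
    """Flag a run whose row count deviates >50% from the historical mean."""
    if not history:
        return False
    mean = statistics.mean(history)
    return abs(current - mean) > threshold * mean

def run_job(job, history, alerts):
    """Run one pipeline job; record an alert on failure or anomaly."""
    try:
        rows = job()
    except Exception as exc:
        alerts.append(f"FAILURE: {exc}")
        return
    if check_anomaly(history, rows):
        alerts.append(f"ANOMALY: {rows} rows vs history {history}")
    history.append(rows)

alerts, history = [], [1000, 1040, 980]
run_job(lambda: 1010, history, alerts)   # normal run: no alert
run_job(lambda: 90, history, alerts)     # sudden drop: anomaly alert
print(alerts)
```

Production systems would emit these alerts to a pager or dashboard (e.g. via Airflow callbacks) rather than a list, but the detection logic is the same shape.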

What do we expect from you?

  • BS/MS in Computer Science or equivalent experience
  • 5 years of recent experience in Big Data Engineering.
  • Good experience in working with Hadoop and Big Data technologies like HDFS, Pig, Hive, Zookeeper, Storm, Spark, Airflow and NoSQL systems
  • Excellent programming and debugging skills in Java or Python.
  • Apache Spark and Python; hands-on experience in deploying ML models.
  • Has worked on streaming and real-time pipelines.
  • Experience with Apache Kafka, or has worked with any of Spark Streaming, Flume, or Storm.
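Kafka, Spark Streaming, Flume, and Storm all revolve around the same core operation the streaming bullets above describe: aggregating an unbounded event stream over windows. A dependency-free sketch of tumbling-window counting (the event shape and 60-second window are illustrative assumptions):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Group (timestamp, key) events into fixed, non-overlapping windows --
    the same aggregation a Spark Streaming or Flink job runs at scale."""
    windows = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs
        windows[(window_start, key)] += 1
    return dict(windows)

events = [(5, "click"), (30, "click"), (65, "view"), (70, "click")]
print(tumbling_window_counts(events))
# {(0, 'click'): 2, (60, 'view'): 1, (60, 'click'): 1}
```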

Focus Area:

  • R1: Data Structures & Algorithms
  • R2: Problem Solving + Coding
  • R3: Design (LLD)

Read more

at carestack.com

1 recruiter
Posted by Biby Mathew
Bengaluru (Bangalore), Thiruvananthapuram (Trivandrum), Cochin
10 - 15 yrs
₹15L - ₹35L / yr
Architecture
Data architecture
Data Architect
Microsoft SQL Server
Performance Testing
+5 more
Responsibilities
• Lead the development team.
• Strong in SQL coding and performance tuning.
• Strong in data warehousing concepts and design.
• Strong in the ETL process and its implementation.
• Participate in database design and architecture to support application development activities.
• Experience in writing and reviewing complex functions, stored procedures, and custom scripts to support application development.
• Tuning application queries by analysing execution plans.
• Experience in data migration between different DB systems.
• Expertise in troubleshooting day-to-day production issues in the DB.
• Oversee the progress of the development team to ensure consistency with the initial design.
Skills and Qualifications
• 8+ years of professional experience, with at least 2 years as an architect.
• Deep knowledge of SQL Server is a must; knowledge of MySQL and Vertica would be an asset.
• Ability to work in a very dynamic environment and readiness to support the production environment.
• Organised, self-motivated team player; action- and results-oriented.
• Ability to work successfully under tight timelines.
• Good verbal and written communication skills.
• Ability to meet production targets and milestones.
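Tuning queries by analysing execution plans, one of the responsibilities above, works similarly across engines. A sketch using SQLite's EXPLAIN QUERY PLAN to show an index turning a full scan into an index search (SQL Server would use SHOWPLAN or the graphical plan instead; the schema here is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL)")

def plan(sql):
    """Return the engine's chosen access path for a query."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM orders WHERE customer_id = 42"
print(plan(query))            # full table SCAN before indexing
conn.execute("CREATE INDEX ix_orders_customer ON orders(customer_id)")
print(plan(query))            # index SEARCH after
```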
Read more
Posted by Sachin Rout
Bengaluru (Bangalore)
3 - 8 yrs
₹6L - ₹14L / yr
SQL Server Integration Services (SSIS)
SSIS
SQL Server Reporting Services (SSRS)
SSRS
Microsoft SQL Server
+4 more

Primary Responsibilities:

  • Strong SQL database development skills using MS SQL Server.
  • Strong skills in SQL Server Integration Services (SSIS) for ETL development.
  • Strong experience with full-life-cycle database development projects using SQL Server.
  • Experience in designing and implementing complex logical and physical data models.
  • Exposure to web services and web technologies (JavaScript, jQuery, CSS, HTML).
  • Knowledge of other high-level languages (Perl, Python) will be an added advantage.
  • An SQL certification is nice to have.

Good to have:

• Bachelor’s degree or a minimum of 3 years of formal industry/professional experience in software development; a healthcare background is preferred.
Read more

AI enabled SAAS organisation

Agency job
via Rize @ People Konnect Pvt. Ltd. by Kalindi Maheshwari
Bengaluru (Bangalore)
1 - 8 yrs
₹5L - ₹40L / yr
Data engineering
Data Engineer
AWS Lambda
Microservices
ETL
+8 more
Required Skills & Experience:
• 2+ years of experience in data engineering and a strong understanding of data engineering principles using big data technologies
• Excellent programming skills in Python are mandatory
• Expertise in relational databases (MSSQL/MySQL/Postgres) and in SQL; exposure to NoSQL databases such as Cassandra or MongoDB will be a plus
• Exposure to deploying ETL pipelines with tools such as Airflow, Docker containers, and Lambda functions
• Experience with AWS cloud services such as the AWS CLI, Glue, Kinesis, etc.
• Experience using Tableau for data visualization is a plus
• Ability to demonstrate a portfolio of projects (GitHub, papers, etc.) is a plus
• A motivated, can-do attitude and the desire to make a change are a must
• Excellent communication skills
Read more

Our Client company is into Computer Software. (EC1)

Agency job
via Multi Recruit by Fiona RKS
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹15L / yr
ETL
Snowflake
Data engineering
SQL
+1 more
  • Create and maintain optimal data pipeline architecture
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Author data services using a variety of programming languages
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Snowflake Cloud Datawarehouse as well as SQL and Azure ‘big data’ technologies
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and Azure regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
  • Work in an Agile environment with Scrum teams.
  • Ensure data quality and help in achieving data governance.

Basic Qualifications

  • 3+ years of experience in a Data Engineer or Software Engineer role
  • Undergraduate degree required (Graduate degree preferred) in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.
  • Experience using the following software/tools:
  • Experience with Snowflake Cloud Data Warehouse
  • Experience with Azure cloud services: ADLS, ADF, ADLA, AAS
  • Experience with data pipeline and workflow management tools
  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
  • Understanding of Datawarehouse (DWH) systems, and migration from DWH to data lakes/Snowflake
  • Understanding of ELT and ETL patterns and when to use each. Understanding of data models and transforming data into the models
  • Strong analytic skills related to working with unstructured datasets
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management
  • Experience supporting and working with cross-functional teams in a dynamic environment.
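The ELT-versus-ETL distinction called out in the qualifications can be shown concretely: transform in application code before loading (ETL) versus load raw data and transform with the warehouse's own SQL engine (ELT). A sketch with sqlite3 standing in for Snowflake (all table and column names are illustrative):

```python
import sqlite3

rows = [("alice", "2024-01-01"), ("BOB", "2024-01-02")]

# ETL: transform in application code *before* loading.
etl = sqlite3.connect(":memory:")
etl.execute("CREATE TABLE users (name TEXT, signup TEXT)")
etl.executemany("INSERT INTO users VALUES (?, ?)",
                [(n.lower(), d) for n, d in rows])   # transform happens here

# ELT: load raw first, then transform with the warehouse's SQL engine.
elt = sqlite3.connect(":memory:")
elt.execute("CREATE TABLE raw_users (name TEXT, signup TEXT)")
elt.executemany("INSERT INTO raw_users VALUES (?, ?)", rows)
elt.execute("CREATE TABLE users AS SELECT lower(name) AS name, signup FROM raw_users")

for conn in (etl, elt):
    print(conn.execute("SELECT name FROM users ORDER BY name").fetchall())
# both print [('alice',), ('bob',)]
```

ELT keeps the raw data available for re-transformation, which is why warehouse-centric stacks like Snowflake plus DBT favour it.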
Read more

at Futurense Technologies

1 recruiter
Posted by Rajendra Dasigari
Bengaluru (Bangalore)
2 - 7 yrs
₹6L - ₹12L / yr
ETL
Data Warehouse (DWH)
Apache Hive
Informatica
Data engineering
+5 more
1. Create and maintain optimal data pipeline architecture
2. Assemble large, complex data sets that meet business requirements
3. Identify, design, and implement internal process improvements
4. Optimize data delivery and re-design infrastructure for greater scalability
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies
6. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
7. Work with internal and external stakeholders to assist with data-related technical issues and support data infrastructure needs
8. Create data tools for analytics and data scientist team members
 
Skills Required:
 
1. Working knowledge of ETL on any cloud (Azure / AWS / GCP)
2. Proficient in Python (Programming / Scripting)
3. Good understanding of any of the data warehousing platforms (Snowflake / AWS Redshift / Azure Synapse Analytics / Google BigQuery / Hive)
4. In-depth understanding of the principles of database structure
5. Good understanding of any of the ETL technologies (Informatica PowerCenter / AWS Glue / Data Factory / SSIS / Spark / Matillion / Talend / Azure)
6. Proficient in SQL (query solving)
7. Knowledge of change management / version control (VSS / DevOps / TFS / GitHub, Bitbucket; CI/CD with Jenkins)
Read more

IT solutions specialized in Apps Lifecycle management. (MG1)

Agency job
via Multi Recruit by Ayub Pasha
Bengaluru (Bangalore)
5 - 6 yrs
₹8L - ₹10L / yr
Data migration
Data Warehouse (DWH)
ETL
SQL
PostgreSQL
+4 more
  • Excellent working knowledge of data warehousing / data migration using an ETL tool.
  • Strong data integration and PostgreSQL/Oracle database skills, plus shell scripting, Python programming, and development know-how.
  • Hands-on experience working with and generating XML documents.
  • Good analytical and business-process understanding.
  • Familiarity with data models, source-target data mapping, and transactional and master data concepts.
  • Well experienced in high-level/detailed design and in performance tuning of ETL jobs.
  • Very good communication, interpersonal, and stakeholder-management skills; self-motivated, a quick learner, and a team player.
  • Exposure to the After Sales business domain is highly preferred.
  • Experience using HP ALM and Jira for ticketing.
  • Experience in release management.
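Generating XML documents, as listed above, needs only Python's standard library. A minimal sketch (the order/item element names are invented for illustration):

```python
import xml.etree.ElementTree as ET

def order_to_xml(order_id, items):
    """Serialize one order record to an XML string."""
    root = ET.Element("order", id=str(order_id))
    for sku, qty in items:
        item = ET.SubElement(root, "item", sku=sku)
        item.set("qty", str(qty))
    return ET.tostring(root, encoding="unicode")

xml_doc = order_to_xml(7, [("A-1", 2), ("B-9", 1)])
print(xml_doc)
# <order id="7"><item sku="A-1" qty="2" /><item sku="B-9" qty="1" /></order>
```

Building the tree with ElementTree, rather than concatenating strings, guarantees the output is well-formed and correctly escaped.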

 

Read more

at Rentomojo

1 video
5 recruiters
Posted by Anand Pandey
Bengaluru (Bangalore)
1 - 2 yrs
₹5L - ₹7L / yr
Business Analysis
Windows Azure
PySpark
SQL
Data Warehouse (DWH)
+4 more
RESPONSIBILITIES & OWNERSHIP: THINGS THE ROLE CAN'T MISS
  • Setting KPIs, monitoring key trends, and helping stakeholders by generating insights from the data delivered.
  • Understanding user behaviour and performing root-cause analysis of changes in data trends across different verticals.
  • Get answers to business questions, identify areas of improvement, and identify opportunities for growth.
  • Work on ad-hoc requests for data and analysis.
  • Work with cross-functional teams as and when required to automate reports and create informative dashboards based on problem statements.

WHO COULD BE A GREAT FIT:
Functional Experience
  • 1-2 years of experience working in Analytics as a Business or Data Analyst.
  • Analytical mind with a problem-solving aptitude.
  • Familiarity with Microsoft Azure and AWS; PySpark, Python, Databricks, and Metabase; understanding of APIs, data warehousing, and ETL.
  • Proficient in writing Complex Queries in SQL.
  • Experience in Performing hands-on analysis on data and across multiple datasets and databases primarily using Excel, Google Sheets and R.
  • Ability to work across cross-functional teams proactively.
Read more

at 1CH

1 recruiter
Posted by Sathish Sukumar
Chennai, Bengaluru (Bangalore), Hyderabad, NCR (Delhi | Gurgaon | Noida), Mumbai, Pune
4 - 15 yrs
₹10L - ₹25L / yr
Data engineering
Data engineer
ETL
SSIS
ADF
+3 more
  • Expertise in designing and implementing enterprise scale database (OLTP) and Data warehouse solutions.
  • Hands-on experience in implementing Azure SQL Database, Azure SQL Data Warehouse (Azure Synapse Analytics), and big data processing using Azure Databricks and Azure HDInsight.
  • Expert in writing T-SQL programming for complex stored procedures, functions, views and query optimization.
  • Should be aware of database development for both on-premises and SaaS applications using SQL Server and PostgreSQL.
  • Experience in ETL and ELT implementations using Azure Data Factory V2 and SSIS.
  • Experience and expertise in building machine learning models using logistic and linear regression, decision tree, and random forest algorithms.
  • PolyBase queries for exporting and importing data into Azure Data Lake.
  • Building data models both tabular and multidimensional using SQL Server data tools.
  • Writing data preparation, cleaning, and processing steps using Python, Scala, and R.
  • Programming experience using python libraries NumPy, Pandas and Matplotlib.
  • Implementing NoSQL databases and writing queries using Cypher.
  • Designing end user visualizations using Power BI, QlikView and Tableau.
  • Experience working with all versions of SQL Server 2005/2008/2008R2/2012/2014/2016/2017/2019
  • Experience using the expression languages MDX and DAX.
  • Experience in migrating on-premise SQL server database to Microsoft Azure.
  • Hands on experience in using Azure blob storage, Azure Data Lake Storage Gen1 and Azure Data Lake Storage Gen2.
  • Performance tuning complex SQL queries, hands on experience using SQL Extended events.
  • Data modeling using Power BI for Adhoc reporting.
  • Raw data load automation using T-SQL and SSIS
  • Expert in migrating existing on-premises databases to SQL Azure.
  • Experience in using U-SQL for Azure Data Lake Analytics.
  • Hands on experience in generating SSRS reports using MDX.
  • Experience in designing predictive models using Python and SQL Server.
  • Developing machine learning models using Azure Databricks and SQL Server
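Data preparation and cleaning steps like those listed above often reduce to a small set of composable record transforms. A dependency-free Python sketch (the field names and validation rules are illustrative assumptions):

```python
def clean_record(rec):
    """Apply typical preparation steps: trim whitespace, normalize case,
    coerce types, and reject records that fail validation (return None)."""
    name = rec.get("name", "").strip().title()
    try:
        amount = float(rec.get("amount", ""))
    except ValueError:
        return None                      # unparseable amount: reject record
    if not name or amount < 0:
        return None                      # missing name or negative amount
    return {"name": name, "amount": amount}

raw = [
    {"name": "  alice ", "amount": "10.5"},
    {"name": "bob", "amount": "oops"},   # rejected: bad number
    {"name": "", "amount": "3"},         # rejected: missing name
]
cleaned = [r for r in (clean_record(x) for x in raw) if r is not None]
print(cleaned)   # [{'name': 'Alice', 'amount': 10.5}]
```

The same per-record logic ports directly to a pandas pipeline or a Spark/Databricks UDF when the volume grows.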
Read more