ETL Jobs in Bangalore (Bengaluru)

Explore top ETL Job opportunities in Bangalore (Bengaluru) from Top Companies & Startups. All jobs are added by verified employees who can be contacted directly below.

Client Company
Bengaluru (Bangalore), Hyderabad, Chennai, Pune, Jaipur · 5 - 7 yrs · ₹10L - ₹20L / yr
Skills: Python, SQL, Databases, Data modeling, Apache Flume, +10 more



5-7 years of relevant industry experience

Demonstrated ability to analyze large data sets to identify gaps and inconsistencies, provide data insights, and advance effective product solutions

Expertise in SQL query authoring and working knowledge of relational databases

Good experience in data analysis, data modeling, data processing, and data validation

Experience using ETL frameworks (e.g., Airflow, Flume, Oozie) to build and deploy production-quality ETL pipelines (see the sketch after this list)

(Preferred) Experience building batch data pipelines in Spark with Scala or Java

Good understanding of distributed storage and compute (S3, Hive, Spark)

General software engineering skills (Java, Python, or Scala; GitHub)

Good communication skills, both written and verbal

Ability to work independently
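
For illustration, here is a minimal sketch of a production-style pipeline in one such framework, Airflow (2.x assumed); the DAG name and the extract/transform/load callables are hypothetical placeholders, not any employer's actual pipeline.

```python
# Minimal Airflow 2.x DAG sketch; dag_id and callables are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull rows from a source system.
    return [{"id": 1, "amount": 42.0}]


def transform(ti):
    # Placeholder: clean the rows pulled by the extract task (via XCom).
    rows = ti.xcom_pull(task_ids="extract")
    return [r for r in rows if r["amount"] is not None]


def load(ti):
    # Placeholder: write the transformed rows to the warehouse.
    print(ti.xcom_pull(task_ids="transform"))


with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    catchup=False,
) as dag:
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)
    load_t = PythonOperator(task_id="load", python_callable=load)
    extract_t >> transform_t >> load_t
```

Retries and a fixed schedule are the kind of "production-quality" details the requirement alludes to.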


Diggibyte Technology Private Limited
Agency job via KUKULKAN by Pragathi P
Bengaluru (Bangalore) · 5 - 7 yrs · ₹9L - ₹15L / yr
Skills: ETL, Informatica, Data Warehouse (DWH), SQL, Dimensional modeling, +1 more

Data Modeler JD:

1. Understand and translate business needs into dimensional models supporting long-term solutions.

2. Experience building models in ERwin or similar tools.

3. Experience with and understanding of dimensional data models, Customer 360, and entity-relationship models (see the star-schema sketch after this list).

4. Work with the development team to implement data strategies, build data flows, and develop conceptual data models.

5. Create logical and physical data models using best practices to ensure high data quality and reduced redundancy.

6. Optimize and update logical and physical data models to support new and existing projects.

7. Maintain conceptual, logical, and physical data models along with corresponding metadata.

8. Develop best practices for standard naming conventions and coding practices to ensure consistency of data models.

9. Recommend opportunities for reuse of data models in new environments.

10. Perform reverse engineering of physical data models from databases and SQL scripts.

11. Evaluate models and physical databases for variances and discrepancies.

12. Validate business data objects for accuracy and completeness.

13. Analyze data-related system integration challenges and propose appropriate solutions.

14. Develop data models according to company standards.

15. Guide system analysts, engineers, programmers, and others on project limitations and capabilities, performance requirements, and interfaces.

16. (Good to have) Home appliance/retail domain knowledge and Azure Synapse experience.
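
For context on item 3, here is a minimal star-schema sketch of a dimensional model: one fact table keyed to conformed dimensions. It runs on SQLite purely for illustration; a real model would target the warehouse RDBMS, and all table names are hypothetical.

```python
# Hypothetical sales star schema, created in SQLite for illustration only.
import sqlite3

ddl = """
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,  -- surrogate key
    customer_id  TEXT NOT NULL,        -- natural/business key
    segment      TEXT
);
CREATE TABLE dim_date (
    date_key  INTEGER PRIMARY KEY,     -- e.g. 20240131
    full_date TEXT NOT NULL
);
CREATE TABLE fact_sales (
    customer_key INTEGER NOT NULL REFERENCES dim_customer(customer_key),
    date_key     INTEGER NOT NULL REFERENCES dim_date(date_key),
    quantity     INTEGER,
    amount       REAL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)  # facts join to dimensions on surrogate keys
print("star schema created")
```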

 

Job Functions: Information Technology 

 

Employment Type - Full-time 

 

Thank you!

 

 


Bengaluru (Bangalore) · 3 - 8 yrs · ₹10L - ₹14L / yr
Skills: Oracle SQL Developer, PL/SQL, ETL, Informatica, Data Warehouse (DWH), +4 more
The role and responsibilities of an Oracle/PL/SQL Developer and Database Administrator:
• Working knowledge of XML, JSON, shell, and other DBMS scripts
• Hands-on experience with Oracle 11g and 12c; working knowledge of Oracle 18c and 19c
• Analysis, design, coding, testing, debugging, and documentation; complete knowledge of the Software Development Life Cycle (SDLC)
• Writing complex queries, stored procedures, functions, and packages
• Knowledge of REST services, UTL functions, DBMS functions, and data integration is required
• Good knowledge of table-level partitions and row locks, and experience in OLTP
• Should be aware of ETL tools, data migration, and data mapping functionalities
• Understand business requirements and transform/design them into business solutions; perform data modelling and implement business rules using Oracle database objects
• Define source-to-target data mapping and data transformation logic as per business needs
• Should have worked on materialized view creation and maintenance; experience in performance tuning and impact analysis required
• Monitoring and optimizing database performance; planning for backup and recovery of database information; maintaining archived data; backing up and restoring databases
• Hands-on experience with SQL Developer

Tata Digital Pvt Ltd
Agency job via Seven N Half by Priya Singh
Mumbai, Bengaluru (Bangalore) · 10 - 15 yrs · ₹20L - ₹37L / yr
Skills: Service Integration and Management, Environment Specialist, ETL, Test cases
  • Implementing Environment solutions for projects in a dynamic corporate environment
  • Communicating and collaborating with project and technical teams on Environment requirements, delivery and support
  • Delivering and maintaining Environment Management Plans, Bookings, Access Details and Schedules for projects
  • Working with the Environment Team on technical Environment delivery solutions
  • Troubleshooting, managing and tracking Environment incidents & service requests in conjunction with technical teams and external partners via the service management tool
  • Leadership support in the North Sydney office
  • Mentoring, guiding and leading other team members
  • Creation of new test environments
  • Provisioning infrastructure and platform
  • Test environment configuration (module, system, sub-module)
  • Test data provisioning (privatization, traceability, ETL, segregation)
  • Endpoint integration
  • Monitoring the test environment
  • Updating/deleting outdated test environments and their details
  • Investigation of test environment issues and, at times, coordination until resolution

Posted by Shreelakshmi M
Bengaluru (Bangalore) · 5 - 8 yrs · Best in industry
Skills: ETL, Informatica, Data Warehouse (DWH), Python, ETL QA, +1 more
  • Graduate+ in Mathematics, Statistics, Computer Science, Economics, Business, Engineering or equivalent work experience.
  • Total experience of 5+ years, with at least 2 years managing data quality for high-scale data platforms.
  • Good knowledge of SQL querying.
  • Strong skills in analysing data and uncovering patterns using SQL or Python (see the sketch after this list).
  • Excellent understanding of data warehouse/big data concepts such as data extraction, data transformation and data loading (the ETL process).
  • Strong background in automation and building automated testing frameworks for data ingestion and transformation jobs.
  • Experience in big data technologies is a big plus.
  • Experience in machine learning, especially in data quality applications, is a big plus.
  • Experience in building data quality automation frameworks is a big plus.
  • Strong experience working with an Agile development team with rapid iterations.
  • Very strong verbal and written communication and presentation skills.
  • Ability to quickly understand business rules.
  • Ability to work well with others in a geographically distributed team.
  • Keen observation skills to analyse data; highly detail-oriented.
  • Excellent judgment, critical-thinking, and decision-making skills; can balance attention to detail with swift execution.
  • Able to identify stakeholders, build relationships, and influence others to get work done.
  • Self-directed and self-motivated individual who takes complete ownership of the product and its outcome.
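
For illustration, here is a small pandas sketch of the kind of automated data-quality checks this role is about; the columns and rules are hypothetical.

```python
# Hypothetical data-quality checks over a small pandas frame.
import pandas as pd

df = pd.DataFrame({
    "order_id": [1, 2, 2, 4],
    "amount": [10.0, None, 15.0, -3.0],
})

checks = {
    "keys_unique": df["order_id"].is_unique,
    "amount_not_null": df["amount"].notna().all(),
    "amount_non_negative": (df["amount"].dropna() >= 0).all(),
}

for name, passed in checks.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```

In practice each failed check would raise an alert or fail the ingestion job rather than just print.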

Japanese MNC
Agency job via CIEL HR Services by Sundari Chitra
Bengaluru (Bangalore) · 5 - 10 yrs · ₹7L - ₹15L / yr
Skills: ETL, Informatica, Data Warehouse (DWH), IBM InfoSphere DataStage, DataStage
We are looking for an ETL DataStage developer for a Japanese MNC.

Role: ETL DataStage developer

Experience: 5 years

Location: Bangalore (WFH as of now)

Roles:

Design, develop, and schedule DataStage ETL jobs to extract data from disparate source systems, transform it, and load it into the EDW for data mart consumption, self-service analytics, and data visualization tools.

Provide hands-on technical solutions to business challenges and translate them into process/technical solutions.

Conduct code reviews to communicate high-level design approaches with team members and validate that strategic business needs and architectural guidelines are met.

Evaluate and recommend technical feasibility and effort estimates of proposed technology solutions. Provide operational instructions for dev, QA, and production code deployments while adhering to internal Change Management processes.

Coordinate Control-M scheduler jobs and dependencies. Recommend and implement ETL process performance tuning strategies and methodologies. Conduct and support data validation, unit testing, and QA integration activities.

Compose and update technical documentation to ensure compliance with department policies and standards. Create transformation queries, stored procedures for ETL processes, and development automations.

Interested candidates can forward their profiles.

Posted by Alfiya Khan
Pune, Bengaluru (Bangalore) · 6 - 8 yrs · ₹15L - ₹25L / yr
Skills: Big Data, Data Warehouse (DWH), Data modeling, Apache Spark, Data integration, +10 more
Company Profile
XpressBees – a logistics company started in 2015 – is amongst the fastest growing companies in its sector. While we started off rather humbly in the space of ecommerce B2C logistics, the last 5 years have seen us steadily progress towards expanding our presence. Our vision to evolve into a strong full-service logistics organization reflects itself in our new lines of business like 3PL, B2B Xpress and cross-border operations. Our strong domain expertise and constant focus on meaningful innovation have helped us rapidly evolve as the most trusted logistics partner of India. We have progressively carved our way towards best-in-class technology platforms, an extensive network reach, and a seamless last-mile management system. While on this aggressive growth path, we seek to become the one-stop shop for end-to-end logistics solutions. Our big focus areas for the very near future include strengthening our presence as service providers of choice and leveraging the power of technology to improve efficiencies for our clients.

Job Profile
As a Lead Data Engineer in the Data Platform Team at XpressBees, you will build the data platform and infrastructure to support high-quality and agile decision-making in our supply chain and logistics workflows. You will define the way we collect and operationalize data (structured/unstructured), build production pipelines for our machine learning models, and serve (RT, NRT, batch) reporting and dashboarding requirements. You will use your experience with modern cloud and data frameworks to build products (with storage and serving systems) that drive optimisation and resilience in the supply chain via data visibility, intelligent decision-making, insights, anomaly detection and prediction.

What You Will Do
• Design and develop the data platform and data pipelines for reporting, dashboarding and machine learning models. These pipelines would productionize machine learning models and integrate with agent review tools.
• Meet data completeness, correctness and freshness requirements.
• Evaluate and identify the data store and data streaming technology choices.
• Lead the design of the logical model and implement the physical model to support business needs. Come up with logical and physical database designs across platforms (MPP, MR, Hive/Pig) that are optimal for different use cases (structured/semi-structured). Envision and implement the optimal data modelling, physical design and performance optimization technique/approach required for the problem.
• Support your colleagues by reviewing code and designs.
• Diagnose and solve issues in our existing data pipelines, and envision and build their successors.

Qualifications & Experience relevant for the role

• A bachelor's degree in Computer Science or a related field with 6 to 9 years of technology experience.
• Knowledge of relational and NoSQL data stores, stream processing and micro-batching to make technology and design choices.
• Strong experience in system integration, application development, ETL and data-platform projects. Talented across technologies used in the enterprise space.
• Software development experience covering:
  • Expertise in relational and dimensional modelling
  • Exposure across the full SDLC process
  • Experience in cloud architecture (AWS)
• Proven track record of keeping existing technical skills current and developing new ones, so that you can make strong contributions to deep architecture discussions around systems and applications in the cloud (AWS).
• Characteristics of a forward thinker and self-starter who flourishes with new challenges and adapts quickly to learning new knowledge.
• Ability to work with cross-functional teams of consulting professionals across multiple projects.
• Knack for helping an organization to understand application architectures and integration approaches, to architect advanced cloud-based solutions, and to help launch the build-out of those systems.
• Passion for educating, training, designing, and building end-to-end systems.

Bengaluru (Bangalore) · 6 - 12 yrs · Best in industry
Skills: MongoDB, Spark, Hadoop, Big Data, Data engineering, +5 more
What You'll Do:
● Our Infrastructure team is looking for an excellent Big Data Engineer to join a core group that designs the industry's leading Micro-Engagement Platform. This role involves the design and implementation of big data architectures and frameworks for the industry's leading intelligent workflow automation platform. As a specialist in the Ushur Engineering team, your responsibilities will be to:
● Use your in-depth understanding to architect and optimize databases and data ingestion pipelines
● Develop HA strategies, including replica sets and sharding, for highly available clusters (see the sketch after this list)
● Recommend and implement solutions to improve performance, resource consumption, and resiliency
● On an ongoing basis, identify bottlenecks in databases in development and production environments and propose solutions
● Help the DevOps team with your deep knowledge in the areas of database performance, scaling, tuning, migration & version upgrades
● Provide verifiable technical solutions to support operations at scale and with high availability
● Recommend appropriate data processing toolsets and big data ecosystems to adopt
● Design and scale databases and pipelines across multiple physical locations on cloud
● Conduct root-cause analysis of data issues
● Be self-driven; constantly research and suggest the latest technologies
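
For context, here is a minimal pymongo sketch of the replica-set settings such HA strategies revolve around; the host names and replica-set name are hypothetical.

```python
# Hypothetical replica-set connection tuned for HA reads/writes.
from pymongo import MongoClient, ReadPreference

client = MongoClient(
    "mongodb://db1.example.com:27017,db2.example.com:27017",
    replicaSet="rs0",
    read_preference=ReadPreference.SECONDARY_PREFERRED,  # offload reads
    w="majority",        # acknowledge writes on a majority of members
    retryWrites=True,    # retry transparently across primary failover
)
events = client["appdb"]["events"]
```

Sharding would sit on top of this, splitting collections across replica sets by a shard key.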

The experience you need:
● Engineering degree in Computer Science or a related field
● 10+ years of experience working with databases, most of which should have been around NoSQL technologies
● Expertise in implementing and maintaining distributed big data pipelines and ETL processes
● Solid experience in one of the following cloud-native data platforms (AWS Redshift / Google BigQuery / Snowflake)
● Exposure to real-time processing techniques like Apache Kafka and CDC tools (Debezium, Qlik Replicate)
● Strong experience with the Linux operating system
● Solid knowledge of database concepts, MongoDB, SQL, and NoSQL internals
● Experience with backup and recovery for production and non-production environments
● Experience in security principles and their implementation
● Exceptionally passionate about always keeping the product quality bar at an extremely high level

Nice-to-haves
● Proficient with one or more of Python/Node.js/Java/similar languages

Why you want to Work with Us:
● Great Company Culture. We pride ourselves on having a values-based culture that is welcoming, intentional, and respectful. Our internal NPS of over 65 speaks for itself - employees recommend Ushur as a great place to work!
● Bring your whole self to work. We are focused on building a diverse culture, with innovative ideas where you and your ideas are valued. We are a start-up and know that every person has a significant impact!
● Rest and Relaxation. 13 paid leaves, wellness Friday offs (aka a day off to care for yourself - every last Friday of the month), 12 paid sick leaves, and more!
● Health Benefits. Preventive health checkups, medical insurance covering dependents, wellness sessions, and health talks at the office.
● Keep learning. One of our core values is Growth Mindset - we believe in lifelong learning. Certification courses are reimbursed. Ushur Community offers wide resources for our employees to learn and grow.
● Flexible Work. In-office or hybrid working model, depending on position and location. We seek to create an environment for all our employees where they can thrive in both their profession and personal life.

Posted by Komal Samudrala
Hyderabad, Bengaluru (Bangalore), Pune · 5 - 8 yrs · ₹25L - ₹32L / yr
Skills: Alteryx, AWS CloudFormation, Google Cloud Platform (GCP), ETL, SQL, +3 more
Alteryx Engineer

Job Description

Qualifications

• Like us, you’re a high performer who’s an expert at your craft, constantly challenging the status quo
• A minimum of 5+ years of data-focused business development/alliances, sales engineering, solutions architecture, consulting, or engineering experience
• Complex enterprise systems and problem-solving skills
• Deep experience with one or more cloud platforms and architectures (AWS, GCP, Azure)
• Experience with one or more enterprise-level security deployments: Single Sign-On (SSO), Active Directory (AD), Lightweight Directory Access Protocol (LDAP), Kerberos (or equivalent)
• Data industry experience in analytic applications, application integration, Extract Transform Load (ETL), Extract Load Transform (ELT), business intelligence, DataViz, Software as a Service (SaaS), Platform as a Service (PaaS), and cloud data warehouse/data lake technologies (BigQuery/Snowflake/Redshift/Databricks/Synapse)
• Demonstrable working knowledge of Unix/Linux OS and filesystems
• Demonstrable working knowledge of SQL
• Excellent verbal, written, and presentation skills
• Comfortable and quick with learning new technologies as needed
• Experience with Kubernetes
• Experience in selling enterprise subscription-based cloud software
• Experience supporting workflows using Airflow or other enterprise workflow tools
• Some programming experience in Python/Java/R

Responsibilities

• This ranges from executive-level discussions on overall business strategy to deep technical engagements with product and engineering teams
• Develop and continually refine deep Trifacta product knowledge, which is part of the Alteryx Analytic Cloud
• Be data savvy and proficient at communicating data manipulation concepts
• Create and execute high-impact technical/architectural presentations and top-notch programs/workshops/demos for partner technical and architectural enablement
• Enable and support partner-led presentations, demonstrations, and technical evaluations by providing a technical environment and product expertise
• Distill and communicate partner needs and product feedback to Product Management, Engineering, Marketing and Sales
• Collaborate with Alteryx's Cloud Alliances Leadership & Product Management to develop comprehensive technical plans for strategic partners, including identifying, incubating, and bringing to market service/solution offerings based on the Alteryx cloud platform and services
• Provide oversight, guidance, and assistance during the partner's sales process to ensure mutual success
• Represent Alteryx at partner events, and work with partners to develop integrated solutions, demos, joint blog posts and whitepapers



Posted by Komal Samudrala
Hyderabad, Bengaluru (Bangalore), Pune · 3 - 5 yrs · ₹10L - ₹15L / yr
Skills: ETL, T-SQL, Azure Data Factory, Data Warehouse (DWH), Informatica, +2 more

Data Engineer

 

ESSENTIAL DUTIES AND RESPONSIBILITIES

  • Data Warehouse Architecture;
  • Data Model Design;
  • Data pipeline maintenance/testing;
  • Machine learning algorithm deployment;
  • Managing data and meta-data;
  • Setting up data-access tools;
  • Maintain System Reliability;
  • Other related duties as required.

JOB REQUIREMENTS

  • Good understanding of data warehouse models, including data marts and data lakes;
  • Strong skillset in T-SQL, DAX, and Power Query;
  • Strong skillset in Azure Data Factory, Azure Synapse, and Power BI;
  • Knowledge or experience in handling security-sensitive data;
  • Good understanding of ETL fundamentals and building efficient data pipelines (see the sketch after this list);
  • Prior experience developing, integrating, maintaining, monitoring, and performance tuning of ETL, data pipeline, API, data extract, and ad hoc queries;
  • Strong organizational and time management skills;
  • Strong interpersonal and communication skills (both oral and written) and the ability to work well with employees at all levels of the organization;
  • Able to work independently and collaboratively in a team environment.
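
As one concrete illustration of efficient pipelines on this stack, here is a minimal sketch of a set-based T-SQL upsert driven from Python; it assumes pyodbc and a hypothetical ODBC DSN, and the table names are made up.

```python
# Hypothetical set-based upsert into a dimension table via pyodbc.
import pyodbc

conn = pyodbc.connect("DSN=azure_sql_dw")  # hypothetical DSN
merge_sql = """
MERGE dbo.dim_product AS tgt
USING dbo.stg_product AS src
    ON tgt.product_id = src.product_id
WHEN MATCHED THEN
    UPDATE SET tgt.name = src.name
WHEN NOT MATCHED THEN
    INSERT (product_id, name) VALUES (src.product_id, src.name);
"""
with conn:                   # pyodbc commits the transaction on success
    conn.execute(merge_sql)
conn.close()
```

One MERGE over a staged batch is typically far cheaper than row-by-row inserts from the client.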

EDUCATION, EXPERIENCE AND/OR CREDENTIALS

  • BA/BS in Computer Science or similar technical discipline (or equivalent experience);
  • 3+ years in a Data Engineering or Data Warehousing role;
  • 3+ years of python coding experience;
  • Cloud-certified Azure Data Engineer preferred.

 


At PayU
Posted by Vishakha Sonde
Remote, Bengaluru (Bangalore) · 2 - 5 yrs · ₹5L - ₹20L / yr
Skills: Python, ETL, Data engineering, Informatica, SQL, +2 more

Role: Data Engineer  
Company: PayU

Location: Bangalore/ Mumbai

Experience: 2-5 yrs


About Company:

PayU is the payments and fintech business of Prosus, a global consumer internet group and one of the largest technology investors in the world. Operating and investing globally in markets with long-term growth potential, Prosus builds leading consumer internet companies that empower people and enrich communities.

The leading online payment service provider in 36 countries, PayU is dedicated to creating a fast, simple and efficient payment process for merchants and buyers. Focused on empowering people through financial services and creating a world without financial borders where everyone can prosper, PayU is one of the biggest investors in the fintech space globally, with investments totalling $700 million to date. PayU also specializes in credit products and services for emerging markets across the globe. We are dedicated to removing risks to merchants, allowing consumers to use credit in ways that suit them and enabling a greater number of global citizens to access credit services.

Our local operations in Asia, Central and Eastern Europe, Latin America, the Middle East, Africa and South East Asia enable us to combine the expertise of high growth companies with our own unique local knowledge and technology to ensure that our customers have access to the best financial services.

India is the biggest market for PayU globally and the company has already invested $400 million in this region in last 4 years. PayU in its next phase of growth is developing a full regional fintech ecosystem providing multiple digital financial services in one integrated experience. We are going to do this through 3 mechanisms: build, co-build/partner; select strategic investments. 

PayU supports over 350,000+ merchants and millions of consumers making payments online with over 250 payment methods and 1,800+ payment specialists. The markets in which PayU operates represent a potential consumer base of nearly 2.3 billion people and a huge growth potential for merchants. 

Job responsibilities:

  • Design infrastructure for data, especially for but not limited to consumption in machine learning applications 
  • Define database architecture needed to combine and link data, and ensure integrity across different sources 
  • Ensure performance of data systems for machine learning, spanning customer-facing web and mobile applications built on cutting-edge open-source frameworks, highly available RESTful services, and back-end Java-based systems 
  • Work with large, fast, complex data sets to solve difficult, non-routine analysis problems, applying advanced data handling techniques if needed 
  • Build data pipelines, includes implementing, testing, and maintaining infrastructural components related to the data engineering stack.
  • Work closely with Data Engineers, ML Engineers and SREs to gather data engineering requirements to prototype, develop, validate and deploy data science and machine learning solutions

Requirements to be successful in this role: 

  • Strong knowledge and experience in Python, Pandas, Data wrangling, ETL processes, statistics, data visualisation, Data Modelling and Informatica.
  • Strong experience with scalable compute solutions such as in Kafka, Snowflake
  • Strong experience with workflow management libraries and tools such as Airflow, AWS Step Functions etc. 
  • Strong experience with data engineering practices (i.e. data ingestion pipelines and ETL) 
  • A good understanding of machine learning methods, algorithms, pipelines, testing practices and frameworks 
  • (Preferred) MEng/MSc/PhD degree in computer science, engineering, mathematics, physics, or equivalent (preference: DS/AI) 
  • Experience with designing and implementing tools that support sharing of data, code, practices across organizations at scale 

Posted by Dhwani Rambhia
Bengaluru (Bangalore) · 2 - 5 yrs · ₹5L - ₹20L / yr
Skills: Data Science, Machine Learning (ML), Natural Language Processing (NLP), Computer Vision, Data Analytics, +18 more

DATA SCIENTIST-MACHINE LEARNING                           

GormalOne LLP. Mumbai IN

 

Job Description

GormalOne is a social impact Agri tech enterprise focused on farmer-centric projects. Our vision is to make farming highly profitable for the smallest farmer, thereby ensuring India's "nutrition security". Our mission is driven by the use of advanced technology. Our technology will be highly user-friendly for the majority of farmers, who are digitally naive. We are looking for people who are keen to use their skills to transform farmers' lives. You will join a highly energized and competent team that is working on advanced global technologies such as OCR, facial recognition, and AI-led disease prediction, amongst others.

 

GormalOne is looking for a machine learning engineer to join its team. This collaborative yet dynamic role is suited for candidates who enjoy the challenge of building, testing, and deploying end-to-end ML pipelines and incorporating MLOps best practices across different technology stacks supporting a variety of use cases. We seek candidates who are curious not only about furthering their own knowledge of MLOps best practices through hands-on experience but can simultaneously help uplift the knowledge of their colleagues.

 

Location: Bangalore

 

Roles & Responsibilities

  • Individual contributor
  • Developing and maintaining an end-to-end data science project
  • Deploying scalable applications on different platforms
  • Ability to analyze and enhance the efficiency of existing products

 

What are we looking for?

  • 3 to 5 Years of experience as a Data Scientist
  • Skilled in Data Analysis, EDA, Model Building, and Analysis.
  • Basic coding skills in Python
  • Decent knowledge of Statistics
  • Creating pipelines for ETL and ML models (see the sketch after this list).
  • Experience in the operationalization of ML models
  • Good exposure to Deep Learning, ANN, DNN, CNN, RNN, and LSTM.
  • Hands-on experience in Keras, PyTorch or Tensorflow
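
As a minimal illustration of creating a combined ETL-and-model pipeline, here is a scikit-learn sketch in which cleaning steps and the estimator travel as one object; the data and features are made up.

```python
# Hypothetical pipeline: impute -> scale -> classify, trained as one object.
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # ETL-style cleaning step
    ("scale", StandardScaler()),
    ("model", LogisticRegression()),
])

X = [[1.0, 2.0], [3.0, float("nan")], [5.0, 6.0], [7.0, 8.0]]
y = [0, 0, 1, 1]
pipe.fit(X, y)  # the whole chain can be pickled/served as a single artifact
print(pipe.predict([[2.0, 3.0]]))
```

Bundling preprocessing with the estimator like this is a common first step toward operationalizing a model.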

 

 

Basic Qualifications

  • B.Tech/BE in Computer Science or Information Technology
  • Certification in AI, ML, or Data Science is preferred.
  • Master/Ph.D. in a relevant field is preferred.

 

 

Preferred Requirements

  • Experience in tools and packages like TensorFlow, MLflow, Airflow
  • Experience in object detection techniques like YOLO
  • Exposure to cloud technologies
  • Operationalization of ML models
  • Good understanding of and exposure to MLOps

 

 

Kindly note: Salary shall be commensurate with qualifications and experience

 

 

 

 


Bengaluru (Bangalore), Mumbai · 5 - 8 yrs · ₹5L - ₹15L / yr
Skills: ETL, Informatica, Data Warehouse (DWH), SQL, Python, +7 more

The Nitty-Gritties

Location: Bengaluru/Mumbai

About the Role:

Freight Tiger is growing exponentially, and technology is at the centre of it. Our Engineers love solving complex industry problems by building modular and scalable solutions using cutting-edge technology. Your peers will be an exceptional group of Software Engineers, Quality Assurance Engineers, DevOps Engineers, and Infrastructure and Solution Architects.

This role is responsible for developing data pipelines and data engineering components to support strategic initiatives and ongoing business processes. This role works with leads, analysts, and data scientists to understand requirements, develop technical solutions, and ensure the reliability and performance of the data engineering solutions.

This role provides an opportunity to directly impact business outcomes for sales, underwriting, claims and operations functions across multiple use cases by providing them data for their analytical modelling needs.

Key Responsibilities

  • Create and maintain a data pipeline.
  • Build and deploy ETL infrastructure for optimal data delivery.
  • Work with various product, design and executive teams to troubleshoot data-related issues.
  • Create tools for data analysts and scientists to help them build and optimise the product.
  • Implement systems and processes for data access controls and guarantees.
  • Distil the knowledge from experts in the field outside the org and optimise internal data systems.




Preferred Qualifications/Skills

  • Should have 5+ years of relevant experience.
  • Strong analytical skills.
  • Degree in Computer Science, Statistics, Informatics, Information Systems.
  • Strong project management and organisational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • SQL guru with hands-on experience on various databases.
  • NoSQL databases like Cassandra and MongoDB.
  • Experience with Snowflake and Redshift.
  • Experience with tools like Airflow and Hevo.
  • Experience with Hadoop, Spark, Kafka, and Flink.
  • Programming experience in Python, Java, and Scala.

Posted by Jyoti Kaushik
Noida, Bengaluru (Bangalore), Pune, Hyderabad · 4 - 7 yrs · ₹4L - ₹16L / yr
Skills: ETL, SQL, Data Warehouse (DWH), Informatica, Data Warehousing, +2 more

We are looking for a Senior Data Engineer to join the Customer Innovation team, who will be responsible for acquiring, transforming, and integrating customer data onto our Data Activation Platform from customers’ clinical, claims, and other data sources. You will work closely with customers to build data and analytics solutions to support their business needs, and be the engine that powers the partnership that we build with them by delivering high-fidelity data assets.

In this role, you will work closely with our Product Managers, Data Scientists, and Software Engineers to build the solution architecture that will support customer objectives. You'll work with some of the brightest minds in the industry, work with one of the richest healthcare data sets in the world, use cutting-edge technology, and see your efforts affect products and people on a regular basis. The ideal candidate is someone who:

  • Has healthcare experience and is passionate about helping heal people,
  • Loves working with data,
  • Has an obsessive focus on data quality,
  • Is comfortable with ambiguity and making decisions based on available data and reasonable assumptions,
  • Has strong data interrogation and analysis skills,
  • Defaults to written communication and delivers clean documentation, and,
  • Enjoys working with customers and problem solving for them.

A day in the life at Innovaccer:

  • Define the end-to-end solution architecture for projects by mapping customers’ business and technical requirements against the suite of Innovaccer products and Solutions.
  • Measure and communicate impact to our customers.
  • Enable customers to activate data themselves using SQL, BI tools, or APIs to answer the questions they have at speed.

What You Need:

  • 4+ years of experience in a Data Engineering role, a Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
  • 4+ years of experience working with relational databases like Snowflake, Redshift, or Postgres.
  • Intermediate to advanced level SQL programming skills.
  • Data Analytics and Visualization (using tools like PowerBI)
  • The ability to engage with both the business and technical teams of a client - to document and explain technical problems or concepts in a clear and concise way.
  • Ability to work in a fast-paced and agile environment.
  • Easily adapt and learn new things whether it’s a new library, framework, process, or visual design concept.

What we offer:

  • Industry certifications: We want you to be a subject matter expert in what you do. So, whether it’s our product or our domain, we’ll help you dive in and get certified.
  • Quarterly rewards and recognition programs: We foster learning and encourage people to take risks. We recognize and reward your hard work.
  • Health benefits: We cover health insurance for you and your loved ones.
  • Sabbatical policy: We encourage people to take time off and rejuvenate, learn new skills, and pursue their interests so they can generate new ideas with Innovaccer.
  • Pet-friendly office and open floor plan: No boring cubicles.

Posted by Lokesh Manikappa
Bengaluru (Bangalore) · 5 - 12 yrs · ₹15L - ₹35L / yr
Skills: ETL, Informatica, Data Warehouse (DWH), Data modeling, Spark, +5 more

Job Description

The applicant must have a minimum of 5 years of hands-on IT experience, working on a full software lifecycle in Agile mode.

Good to have: experience in data modeling and/or systems architecture.
Responsibilities will include technical analysis, design, development and enhancements.

You will participate in all/most of the following activities:
- Working with business analysts and other project leads to understand requirements.
- Modeling and implementing database schemas in DB2 UDB or other relational databases.
- Designing, developing and maintaining data processing using Python, DB2, Greenplum, Autosys and other technologies

 

Skills/Expertise Required:

Work experience in developing large-volume databases (DB2/Greenplum/Oracle/Sybase).

Good experience in writing stored procedures, integrating database processing, and tuning and optimizing database queries.

Strong knowledge of table partitions, high-performance loading and data processing.
Good to have hands-on experience working with Perl or Python.
Hands-on development using the Spark/KDB/Greenplum platform will be a strong plus.
Designing, developing, maintaining and supporting Data Extract, Transform and Load (ETL) software using Informatica, shell scripts, DB2 UDB and Autosys.
Coming up with system architecture/redesign proposals for greater efficiency and ease of maintenance, and developing software to turn proposals into implementations.

Need to work with business analysts and other project leads to understand requirements.
Strong collaboration and communication skills.


Bengaluru (Bangalore), Mangalore · 5 - 9 yrs · Best in industry
Skills: Spotfire, QlikView, Tableau, Power BI, Data Visualization, +5 more
CodeCraft Technologies is a multi-award-winning creative engineering company offering design and technology solutions on mobile, web, and cloud platforms.

We are looking for a Senior Data Analyst to join our team.

Responsibilities:

• Design, develop and maintain scaled, automated, user-friendly systems, statistical analysis and prediction, reports, dashboards, etc. that will support the needs of the business.
• Apply deep analytic and business intelligence skills to extract meaningful insights and learning from large and complicated data sets, using the appropriate statistical forecasting and machine learning models.
• Be hands-on with ETL to build data pipelines to support automated reporting.
• Ingest data from various sources (e.g. Google Ad Manager, LiveIntent, various SSPs, etc.). Integrate directly wherever possible/necessary.
• Integrate the platform with other operational systems such as billing and Sales (Salesforce).
• Serve as liaison between the business and technical teams to achieve the goal of providing actionable insights into current business performance, and ad hoc investigations to support future improvements or innovations.
• This will require data gathering and manipulation, problem solving, and communication of insights and recommendations.
• Build various data visualizations to tell the story of business trends, patterns, and outliers through rich visualizations.
• Build dashboards for various stakeholders (e.g. Sr. Leadership team, Media Sales team, Ops team, Advertisers, Finance & Accounting, etc.).
• Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.

Required Skillset:
• Extensive experience in SQL, data modelling, and data preparation required.
• Experience working with DataStage and Kafka is a plus.
• Overall knowledge of MDM and Data Governance processes preferred.
• Experience with MicroStrategy, PowerBI or related tools like Tableau is a must.
• Experience working with databases like Teradata, Oracle and SQL server is required.
• Experience working with Azure cloud databases is a plus.
• Experience with Agile methodology preferred.
• Excellent problem solving, analytical, and troubleshooting skills.
• Great teamwork and interpersonal skills.
• Takes initiative and tackles challenges with enthusiasm.

Product-based company
Agency job via Zyvka Global Services by Ridhima Sharma
Bengaluru (Bangalore) · 3 - 12 yrs · ₹5L - ₹30L / yr
Skills: Spark, Hadoop, Big Data, Data engineering, PySpark, +6 more

Responsibilities:

  • Should act as a technical resource for the Data Science team and be involved in creating and implementing current and future Analytics projects like data lake design, data warehouse design, etc.
  • Analysis and design of ETL solutions to store/fetch data from multiple systems like Google Analytics, CleverTap, CRM systems etc.
  • Developing and maintaining data pipelines for real time analytics as well as batch analytics use cases.
  • Collaborate with data scientists and actively work in the feature engineering and data preparation phase of model building
  • Collaborate with product development and dev ops teams in implementing the data collection and aggregation solutions
  • Ensure quality and consistency of the data in Data warehouse and follow best data governance practices
  • Analyse large amounts of information to discover trends and patterns
  • Mine and analyse data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.

Requirements

  • Bachelor’s or Master’s in a highly numerate discipline such as Engineering, Science or Economics
  • 2-6 years of proven experience working as a Data Engineer, preferably in an ecommerce/web-based or consumer technologies company
  • Hands-on experience of working with different big data tools like Hadoop, Spark, Flink, Kafka and so on
  • Good understanding of the AWS ecosystem for big data analytics
  • Hands-on experience in creating data pipelines, either using tools or by independently writing scripts
  • Hands-on experience in scripting languages like Python, Scala, Unix shell scripting and so on
  • Strong problem-solving skills with an emphasis on product development
  • Experience using business intelligence tools, e.g. Tableau or Power BI, would be an added advantage (not mandatory)

Posted by Newali Hazarika
Bengaluru (Bangalore) · 4 - 9 yrs · ₹15L - ₹35L / yr
Skills: ETL, Informatica, Data Warehouse (DWH), Data engineering, Oracle, +7 more

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

The technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc. 

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume the data from multiple sources

  • Writing a data transformation layer using dbt to transform millions of rows for the data warehouse (see the sketch after this list).

  • Implement Data warehouse entities with common re-usable data model designs with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)
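
For orientation, here is a hedged sketch of a dbt transformation model. dbt layers are usually SQL models; the Python form below (supported in dbt 1.3+ on warehouses such as Snowflake via Snowpark) is used only because this page's examples are in Python. The upstream model `stg_orders` and the column are hypothetical.

```python
# models/orders_clean.py -- hypothetical dbt Python model (dbt >= 1.3,
# Snowflake/Snowpark assumed); dbt calls model() and persists the result.
def model(dbt, session):
    dbt.config(materialized="table")   # materialise as a warehouse table
    orders = dbt.ref("stg_orders")     # upstream staging model (hypothetical)
    df = orders.to_pandas()            # Snowpark DataFrame -> pandas
    df["AMOUNT"] = df["AMOUNT"].fillna(0)  # simple cleaning transform
    return df                          # dbt writes this back as the model
```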

 

What To Bring

  • 5+ years of software development experience, a startup experience is a plus.

  • Past experience of working with Airflow and DBT is preferred

  • 5+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experienced with the formulation of ideas; building proof-of-concept (POC) and converting them to production-ready projects

  • Experience building and deploying applications on on-premise and AWS or Google Cloud cloud-based infrastructure

  • Basic understanding of Kubernetes & Docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.

 


Posted by Baiju Sukumaran
Vadodara, Bengaluru (Bangalore), Ahmedabad, Pune, Kolkata, Hyderabad · 6 - 8 yrs · Best in industry
Skills: Data Warehousing, Microsoft Windows Azure, ETL, Relational Database (RDBMS), SQL Server Integration Services (SSIS), +7 more
Technical Skills
Mandatory (Minimum 4 years of working experience)
• 3+ years of experience leading data warehouse implementation with technical architectures, ETL/ELT, reporting/analytic tools and scripting (end-to-end implementation)
• Experienced in Microsoft Azure (Azure SQL Managed Instance, Data Factory, Azure Synapse, Azure Monitoring, Azure DevOps, Event Hubs, Azure AD Security)
• Deep experience in using BI tools such as Power BI/Tableau, QlikView/SAP BO, etc.
• Experienced in ETL tools such as SSIS, Talend/Informatica/Pentaho
• Expertise in using RDBMSes like Oracle and SQL Server as source or target, and online analytical processing (OLAP)
• Experienced in SQL/T-SQL/DML/DDL statements, stored procedures, functions, triggers, indexes, cursors
• Expertise in building and organizing advanced DAX calculations and SSAS cubes
• Experience in data/dimensional modelling, analysis, design, testing, development, and implementation
• Experienced in advanced data warehouse concepts using structured, semi-structured and unstructured data
• Experienced with real-time ingestion, change data capture, real-time & batch processing
• Good knowledge of metadata management and data governance
• Great problem-solving skills, with a strong bias for quality and design excellence
• Experienced in developing dashboards with a focus on usability, performance, flexibility, testability, and standardization
• Familiarity with development in cloud environments like AWS/Azure/Google

Good To Have (1+ years of working experience)
• Experience working with Snowflake, Amazon Redshift

Soft Skills
• Good verbal and written communication skills
• Ability to collaborate and work effectively in a team
• Excellent analytical and logical skills

Posted by Baiju Sukumaran
Remote, Vadodara, Bengaluru (Bangalore), Pune, Ahmedabad · 7 - 9 yrs · Best in industry
Skills: Spotfire, QlikView, Tableau, Power BI, Data Visualization, +6 more
Key Responsibilities
• Ability to interpret and map business, functional and non-functional requirements to technical specifications
• Interact with diverse stakeholders like clients, project managers/scrum masters, business analysts, testing and other cross-functional teams as part of Business Intelligence projects
• Develop solutions following established technical design, application development standards and quality processes in projects, delivering efficient, reusable and reliable code with complete ownership
• Assess the impacts on technical design because of changes in functional requirements
• Develop the full life cycle of a BI project (data movement and visualization), which includes requirements analysis, platform selection, architecture, application design and development, testing and deployment
• Provide architectural leadership with a strong emphasis on data architecture around ETL and governance
• Troubleshoot highly complex technical problems in OLAP/OLTP/DW-based environments
• Provide support specific to application bugs or issues within defined SLAs
• Proactively identify and communicate technical risks, issues, and challenges with mitigations
• Perform independent code reviews and guide junior team members in corrections

Must Have Technical Skills:
• 3+ years of experience leading data warehouse implementation with technical architectures, ETL/ELT, reporting/analytic tools and scripting (end-to-end implementation)
• Experienced in Microsoft Azure (Azure SQL Managed Instance, Data Factory, Azure Synapse, Azure Monitoring, Azure DevOps, Event Hubs, Azure AD Security)
• Deep experience in using BI tools such as Power BI/Tableau, QlikView/SAP BO, etc.
• Experienced in ETL tools such as SSIS, Talend/Informatica/Pentaho
• Expertise in using RDBMSes like Oracle and SQL Server as source or target, and online analytical processing (OLAP)
• Experienced in SQL/T-SQL/DML/DDL statements, stored procedures, functions, triggers, indexes, cursors
• Expertise in building and organizing advanced DAX calculations and SSAS cubes
• Experience in data/dimensional modelling, analysis, design, testing, development, and implementation
• Experienced in advanced data warehouse concepts using structured, semi-structured and unstructured data
• Experienced with real-time ingestion, change data capture, real-time & batch processing
• Good knowledge of metadata management and data governance
• Great problem-solving skills, with a strong bias for quality and design excellence
• Experienced in developing dashboards with a focus on usability, performance, flexibility, testability, and standardization
• Familiarity with development in cloud environments like AWS/Azure/Google

Agency job via wrackle by Naveen Taalanki
Bengaluru (Bangalore) · 1 - 8 yrs · ₹10L - ₹40L / yr
Skills: Python, ETL, Jenkins, CI/CD, pandas, +6 more
Roles & Responsibilities
Expectations of the role
This role reports to the Technical Lead (Support). You will be expected to resolve bugs in the platform that are identified by customers and internal teams. This role will progress towards SDE-2 in 12-15 months, where the developer will work on solving complex problems around scale and building out new features.
 
What will you do?
  • Fix issues with plugins for our Python-based ETL pipelines (see the plugin-pattern sketch after this list)
  • Help with automation of standard workflow
  • Deliver Python microservices for provisioning and managing cloud infrastructure
  • Responsible for any refactoring of code
  • Effectively manage challenges associated with handling large volumes of data working to tight deadlines
  • Manage expectations with internal stakeholders and context-switch in a fast-paced environment
  • Thrive in an environment that uses AWS and Elasticsearch extensively
  • Keep abreast of technology and contribute to the engineering strategy
  • Champion best development practices and provide mentorship to others
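
For context, here is a minimal sketch of the plugin pattern that Python ETL pipelines of this kind often use: transforms register under a name and the pipeline resolves them from configuration. All names here are hypothetical, not the employer's actual internals.

```python
# Hypothetical plugin registry for ETL transform steps.
from typing import Callable, Dict, List

Row = dict
PLUGINS: Dict[str, Callable[[List[Row]], List[Row]]] = {}


def register(name: str):
    def decorator(fn):
        PLUGINS[name] = fn  # map plugin name -> transform callable
        return fn
    return decorator


@register("drop_nulls")
def drop_nulls(rows):
    return [r for r in rows if all(v is not None for v in r.values())]


def run_pipeline(rows, steps):
    for step in steps:  # apply the configured plugins in order
        rows = PLUGINS[step](rows)
    return rows


print(run_pipeline([{"a": 1}, {"a": None}], ["drop_nulls"]))  # [{'a': 1}]
```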
What are we looking for?
  • First and foremost you are a Python developer, experienced with the Python Data stack
  • You love and care about data
  • Your code is an artistic manifesto reflecting how elegant you are in what you do
  • You feel sparks of joy when a new abstraction or pattern arises from your code
  • You follow the DRY (Don’t Repeat Yourself) and KISS (Keep It Short and Simple) principles
  • You are a continuous learner
  • You have a natural willingness to automate tasks
  • You have critical thinking and an eye for detail
  • Excellent ability and experience of working to tight deadlines
  • Sharp analytical and problem-solving skills
  • Strong sense of ownership and accountability for your work and delivery
  • Excellent written and oral communication skills
  • Mature collaboration and mentoring abilities
  • We are keen to know your digital footprint (community talks, blog posts, certifications, courses you have participated in or are keen to take, your personal projects, and any contributions to open-source communities)
Nice to have:
  • Delivering complex software, ideally in a FinTech setting
  • Experience with CI/CD tools such as Jenkins, CircleCI
  • Experience with code versioning (git / mercurial / subversion)

Bengaluru (Bangalore), Hyderabad, Pune, Indore, Gurugram, Noida · 10 - 17 yrs · ₹25L - ₹50L / yr
Skills: Product Management, Big Data, Data Warehouse (DWH), ETL
Hi All,
Greetings! We are looking for a Product Manager for our data modernization product. We need a resource with good knowledge of Big Data/DWH, strong stakeholder management, and presentation skills.

Top Multinational Fintech Startup
Agency job via wrackle by Naveen Taalanki
Bengaluru (Bangalore) · 4 - 7 yrs · ₹20L - ₹30L / yr
Skills: Spark, Hadoop, Big Data, Data engineering, PySpark, +5 more
Roles & Responsibilities
What will you do?
  • Deliver plugins for our Python-based ETL pipelines
  • Deliver Python microservices for provisioning and managing cloud infrastructure
  • Implement algorithms to analyse large data sets
  • Draft design documents that translate requirements into code
  • Effectively manage challenges associated with handling large volumes of data working to tight deadlines
  • Manage expectations with internal stakeholders and context-switch in a fast-paced environment
  • Thrive in an environment that uses AWS and Elasticsearch extensively
  • Keep abreast of technology and contribute to the engineering strategy
  • Champion best development practices and provide mentorship to others
What are we looking for?
  • First and foremost you are a Python developer, experienced with the Python Data stack
  • You love and care about data
  • Your code is an artistic manifesto reflecting how elegant you are in what you do
  • You feel sparks of joy when a new abstraction or pattern arises from your code
  • You follow the DRY (Don’t Repeat Yourself) and KISS (Keep It Short and Simple) principles
  • You are a continuous learner
  • You have a natural willingness to automate tasks
  • You have critical thinking and an eye for detail
  • Excellent ability and experience of working to tight deadlines
  • Sharp analytical and problem-solving skills
  • Strong sense of ownership and accountability for your work and delivery
  • Excellent written and oral communication skills
  • Mature collaboration and mentoring abilities
  • We are keen to know your digital footprint (community talks, blog posts, certifications, courses you have participated in or are keen to take, your personal projects, and any contributions to open-source communities)
Nice to have:
  • Delivering complex software, ideally in a FinTech setting
  • Experience with CI/CD tools such as Jenkins, CircleCI
  • Experience with code versioning (git / mercurial / subversion)

Product Engineering Based MNC
Agency job via Exploro Solutions by Ramya Ramchandran
Bengaluru (Bangalore) · 5 - 8 yrs · ₹12L - ₹25L / yr
Skills: Scala, Spark, Apache Spark, ETL, SQL

  • Strong experience in SQL and relational databases
  • Good programming experience in Scala & Spark
  • Good experience in ETL batch data pipeline development and migration/upgrading (see the PySpark sketch after this list)
  • Python – good to have
  • AWS – good to have
  • Knowledgeable in the areas of Big Data/Hadoop/S3/Hive; nice to have experience with ETL frameworks (e.g. Airflow, Flume, Oozie)
  • Ability to work independently, take ownership, and strong troubleshooting/debugging skills
  • Good communication and collaboration skills
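
The posting asks for Scala, but a minimal PySpark sketch (Python is used for all examples on this page) shows the same batch-ETL shape: read raw files, validate and aggregate, write curated output. The S3 paths and columns are hypothetical.

```python
# Hypothetical batch ETL: raw CSV -> validated daily aggregate -> Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_batch_etl").getOrCreate()

orders = spark.read.option("header", True).csv("s3://bucket/raw/orders/")
daily = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull())         # basic validation
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"))  # batch aggregation
)
daily.write.mode("overwrite").parquet("s3://bucket/curated/daily_orders/")
```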

Posted by Abhilash P
Bengaluru (Bangalore), Coimbatore · 4 - 10 yrs · ₹12L - ₹18L / yr
Skills: Spotfire, QlikView, Tableau, Power BI, Data Visualization, +6 more
The Data Analyst will be responsible for supporting the data and reporting needs of the Brokerage. This role will involve interacting with internal stakeholders for business problem formulation, requirements gathering, identifying relevant data sources or sub-processes in which to deploy new KPIs, communicating new data extract requirements, analyzing and synthesizing data from multiple sources, and ultimately producing high-quality insights that demonstrate a full narrative. This role requires a significant amount of collaboration. The successful person will be highly adaptable and able to prioritize multiple critical projects.

Why work with us?

We're helping Canadian families finance their dream homes, and our people are at the heart of everything we do. We are the fastest-growing mortgage brokerage and amongst the Top 20 brokerages in Canada. We are founded on innovation and driven by technology. This translates into a process and service that is consistent, reliable, and scalable. We provide a state-of-the-art employment facility, cutting-edge technology, and a proven sales process with continuous training and support.

Responsibilities:

 

  • Applies scripting/programming skills to assemble various types of source data (unstructured, semi-structured, and structured) into well-prepared datasets with multiple levels of granularities (e.g., demographics, customers, products, transactions).
  • Lead the development of standard and customized reporting, dashboards, and analysis of information
  • Lead the development of tools, methodologies, and statistical
  • Provide hands-on development and support in creating and launching various tools and reporting
  • Develops analytical solutions and makes recommendations based on an understanding of the business strategy and stakeholder
  • Works with various data owners to discover and select available data from internal sources to fulfill analytical needs
  • Summarizes statistical findings and draws conclusions, presents actionable business recommendations. Presents findings & recommendations in a simple, clear way to drive action.
  • Uses the appropriate algorithms to discover
  • Works independently on a range of complex tasks, which may include unique

 

 

Qualifications, Skills & Competencies:

 

  • Post Secondary Degree – Computer Science, Information Technology or other relevant degrees with curriculum related to data structures and analysis

  • Minimum 5 years of experience as an analyst
  • Minimum 5 years of knowledge of business intelligence tools and programming languages
  • Advance skills in data analysis and profiling, data mapping, data modeling, data lakes, and analytics
  • Data Analytics: AWS Quicksight and Redshift
  • Data Migration: solid in SQL and ETL
  • Scripting and Integration: REST APIs, GraphQL, Nodejs, AWS Lambda/API Gateway
  • Experience working with data mining and performing quantitative analysis
  • Experience with Machine Learning algorithms and associated data sets
  • Business acumen; results-oriented
  • Proactive/takes initiative/self-starter
  • Excellent written and oral communication skills
  • Ability to create, coordinate and facilitate presentations
  • Time management, highly organized
  • Collaboration and Team Engagement
  • Analytical and Problem Solving
  • Data-driven/Metrics Driven


Read more

software and consultancy company

Agency job
via Exploro Solutions by ramya ramchandran
icon
Bengaluru (Bangalore), Chennai
icon
6 - 8 yrs
icon
₹12L - ₹30L / yr
Amazon Web Services (AWS)
ETL
Informatica
Data Warehouse (DWH)
SQL

Primary Skills

6 to 8 years of relevant work experience in ETL tools

Good knowledge of AWS cloud databases such as Aurora, along with the surrounding ecosystem and tools (e.g., AWS DMS)

Experience migrating databases to the AWS cloud is mandatory

Sound knowledge of SQL and procedural languages

Solid experience writing complex SQL queries and optimizing SQL query performance

Knowledge of data ingestion patterns: one-off feeds, change data capture (CDC), and incremental batch loads (a minimal sketch of the incremental pattern follows below)
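A minimal sketch of the incremental-batch pattern, assuming the source table carries an updated_at column and the target persists a watermark between runs. It uses sqlite3 so it is self-contained, but the same watermark-driven SELECT applies to Aurora/Postgres/MySQL sources; all table and column names are hypothetical, and the source database is assumed to already contain an orders table.

    import sqlite3

    src = sqlite3.connect("source.db")   # assumed to contain an orders table
    tgt = sqlite3.connect("target.db")

    tgt.execute("CREATE TABLE IF NOT EXISTS watermark (table_name TEXT PRIMARY KEY, last_ts TEXT)")
    tgt.execute("CREATE TABLE IF NOT EXISTS orders (order_id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT)")

    # 1. Read the high-water mark left by the last successful run.
    row = tgt.execute("SELECT last_ts FROM watermark WHERE table_name = 'orders'").fetchone()
    last_ts = row[0] if row else "1970-01-01T00:00:00"

    # 2. Extract only rows changed since the watermark (the incremental batch).
    changed = src.execute(
        "SELECT order_id, amount, updated_at FROM orders WHERE updated_at > ?",
        (last_ts,),
    ).fetchall()

    # 3. Upsert into the target and advance the watermark in one transaction.
    with tgt:
        tgt.executemany(
            "INSERT INTO orders (order_id, amount, updated_at) VALUES (?, ?, ?) "
            "ON CONFLICT(order_id) DO UPDATE SET amount=excluded.amount, updated_at=excluded.updated_at",
            changed,
        )
        if changed:
            # ISO-8601 timestamps compare correctly as strings.
            new_ts = max(r[2] for r in changed)
            tgt.execute(
                "INSERT INTO watermark (table_name, last_ts) VALUES ('orders', ?) "
                "ON CONFLICT(table_name) DO UPDATE SET last_ts=excluded.last_ts",
                (new_ts,),
            )

For true change data capture (deletes as well as updates), a log-based tool such as AWS DMS would replace the SELECT step.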

 

Additional Skills:

Experience in Unix/Linux systems and writing shell scripts would be nice to have

Java knowledge would be an added advantage

Knowledge of Spark with Python (PySpark) for building ETL pipelines on the cloud is preferred

Read more
DP
Posted by akanksha rajput
icon
Bengaluru (Bangalore)
icon
4 - 8 yrs
icon
₹20L - ₹30L / yr
ETL
Informatica
Data Warehouse (DWH)
Python
pandas

About us

SteelEye is the only regulatory compliance technology and data analytics firm that offers transaction reporting, record keeping, trade reconstruction, best execution and data insight in one comprehensive solution. The firm’s scalable secure data storage platform offers encryption at rest and in flight and best-in-class analytics to help financial firms meet regulatory obligations and gain competitive advantage.

The company has a highly experienced management team and a strong board, who have decades of technology and management experience and worked in senior positions at many leading international financial businesses. We are a young company that shares a commitment to learning, being smart, working hard and being honest in all we do and striving to do that better each day. We value all our colleagues equally and everyone should feel able to speak up, propose an idea, point out a mistake and feel safe, happy and be themselves at work.

Being part of a start-up can be equally exciting as it is challenging. You will be part of the SteelEye team not just because of your talent but also because of your entrepreneurial flair, which we thrive on at SteelEye. This means we want you to be curious, contribute, ask questions and share ideas. We encourage you to get involved in helping shape our business.

What will you do?

  • Deliver plugins for our Python-based ETL pipelines (an illustrative plugin-pattern sketch follows this list).
  • Deliver python services for provisioning and managing cloud infrastructure.
  • Design, Develop, Unit Test, and Support code in production.
  • Deal with challenges associated with large volumes of data.
  • Manage expectations with internal stakeholders and context switch between multiple deliverables as priorities change.
  • Thrive in an environment that uses AWS and Elasticsearch extensively.
  • Keep abreast of technology and contribute to the evolution of the product.
  • Champion best practices and provide mentorship.
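SteelEye's actual plugin interface is not described in this posting, so the following is only a hedged sketch of one common way to structure plugins for a Python ETL pipeline: a decorator-based registry of record transforms (all names are hypothetical).

    from typing import Callable, Dict, Iterable, List

    TRANSFORMS: Dict[str, Callable[[dict], dict]] = {}

    def register(name: str):
        # Decorator that registers a transform plugin under a name.
        def wrap(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
            TRANSFORMS[name] = fn
            return fn
        return wrap

    @register("normalise_currency")
    def normalise_currency(record: dict) -> dict:
        record["currency"] = record.get("currency", "GBP").upper()
        return record

    def run_pipeline(records: Iterable[dict], steps: List[str]) -> List[dict]:
        # Apply the configured plugins to each record, in order.
        out = []
        for rec in records:
            for step in steps:
                rec = TRANSFORMS[step](rec)
            out.append(rec)
        return out

    print(run_pipeline([{"trade_id": 1, "currency": "gbp"}], ["normalise_currency"]))

The appeal of this pattern is that new transforms ship as self-contained plugins without touching the pipeline runner.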

What we're looking for

  • Python 3.
  • Python libraries used for data (such as pandas, numpy).
  • AWS.
  • Elasticsearch.
  • Performance tuning.
  • Object Oriented Design and Modelling.
  • Delivering complex software, ideally in a FinTech setting.
  • CI/CD tools.
  • Knowledge of design patterns.
  • Sharp analytical and problem-solving skills.
  • Strong sense of ownership.
  • Demonstrable desire to learn and grow.
  • Excellent written and oral communication skills.
  • Mature collaboration and mentoring abilities.

What will you get?

  • This is an individual contributor role. So, if you are someone who loves to code and solve complex problems and build amazing products and not worry about anything else, this is the role for you.
  • You will have the chance to learn from the best in the business who have worked across the world and are technology geeks.
  • Company that always appreciates ownership and initiative. If you are someone who is full of ideas, this role is for you.
Read more
icon
Bengaluru (Bangalore)
icon
1 - 4 yrs
icon
₹7L - ₹12L / yr
SQL Server Integration Services (SSIS)
SQL
ETL
Informatica
Data Warehouse (DWH)
+4 more

About Company:

Working with a multitude of clients populating the FTSE and Fortune 500s, Audit Partnership is a people focused organization with a strong belief in our employees. We hire the best people to provide the best services to our clients.

APL offers profit recovery services to organizations of all sizes across a number of sectors. APL was borne out of a desire to offer an alternative from the stagnant service provision on offer in the profit recovery industry.

Every year we recover millions of pounds for our clients and also work closely with them, sharing our audit findings to minimize future losses. Our dedicated and highly experienced audit teams utilize progressive and dynamic financial service solutions and industry-leading technology to achieve maximum success.

We provide dynamic work environments focused on delivering data-driven solutions at a rapidly increased pace over traditional development. Be a part of our passionate and motivated team who are excited to use the latest in software technologies within financial services.

Headquartered in the UK, we have expanded from a small team in 2002 to a market leading organization serving clients across the globe while keeping our clients at the heart of all decisions we make.


Job description:

We are looking for a high-potential, enthusiastic SQL Data Engineer with a strong desire to build a career in data analysis, database design and application solutions. Reporting directly to our UK based Technology team, you will provide support to our global operation in the delivery of data analysis, conversion, and application development to our core audit functions.

Duties will include assisting with data loads, using T-SQL to analyse data, front-end code changes, data housekeeping, data administration, and supporting the Data Services team as a whole. Your contribution will grow in line with your experience and skills, becoming increasingly involved in the core service functions and client delivery. We are looking for a self-starter with a deep commitment to the highest standards of quality and customer service. This is a fantastic career opportunity, working for a leading international financial services organisation serving the world’s largest organisations.

 

What we are looking for:

  • 1-2 years of previous experience in a similar role
  • Data analysis and conversion skills using Microsoft SQL Server is essential
  • An understanding of relational database design and build
  • Schema design, normalising data, indexing, query performance analysis
  • Ability to analyse complex data to identify patterns and detect anomalies (see the sketch after this list)
  • Assisting with ETL design and implementation projects
  • Knowledge or experience in one or more of the key technologies below would be preferable:
    • Microsoft SQL Server (SQL Server Management Studio, Stored Procedure writing etc)
    • T-SQL
    • Programming languages (C#, VB, Python etc)
    • Use of Python to manipulate and import data
    • Experience of ETL/automation advantageous but not essential (SSIS/Prefect/Azure)
  • A self-starter who can drive projects with minimal guidance
  • Meeting stakeholders to agree system requirements
  • Someone who is enthusiastic and eager to learn
  • Very good command of English and excellent communication skills
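As a hedged illustration of the anomaly-detection skill called out above, the sketch below uses Python with pyodbc to run a T-SQL window-function query that flags possible duplicate payments, a typical profit-recovery check; the server, credentials, table, and columns are hypothetical rather than APL's actual schema.

    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=example-server;DATABASE=audit;UID=user;PWD=password"
    )

    # A window function turns duplicate detection into one set-based query.
    sql = """
    SELECT invoice_no, supplier_id, amount,
           COUNT(*) OVER (PARTITION BY supplier_id, invoice_no, amount) AS dup_count
    FROM dbo.payments
    """
    for row in conn.cursor().execute(sql):
        if row.dup_count > 1:
            print(f"Possible duplicate: {row.invoice_no} / {row.supplier_id} / {row.amount}")
    conn.close()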

 

Perks & Benefits:

  • A fantastic work life balance
  • Competitive compensation and benefits
  • Exposure of working with Fortune 500 organization
  • Expert guidance and nurture from global leaders
  • Opportunities for career and personal advancement with our continued global growth strategy
  • Industry leading training programs
  • A working environment that is exciting, fun and engaging

 

Read more
DP
Posted by Rajesh C
icon
Bengaluru (Bangalore)
icon
5 - 7 yrs
icon
₹10L - ₹20L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+5 more

Data Analyst

Job Description

 

Summary

Are you passionate about handling large & complex data problems, want to make an impact and have the desire to work on ground-breaking big data technologies? Then we are looking for you.

 

At Amagi, great ideas have a way of becoming great products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. Would you like to work in a fast-paced environment where your technical abilities will be challenged on a day-to-day basis? If so, Amagi’s Data Engineering and Business Intelligence team is looking for passionate, detail-oriented, technically savvy, energetic team members who like to think outside the box.

 

Amagi’s Data warehouse team deals with petabytes of data catering to a wide variety of real-time, near real-time and batch analytical solutions. These solutions are an integral part of business functions such as Sales/Revenue, Operations, Finance, Marketing and Engineering, enabling critical business decisions. Designing, developing, scaling and running these big data technologies using native technologies of AWS and GCP is a core part of our daily job.

 

Key Qualifications

  • Experience in building highly cost optimised data analytics solutions
  • Experience in designing and building dimensional data models to improve accessibility, efficiency and quality of data
  • Experience (hands on) in building high quality ETL applications, data pipelines and analytics solutions ensuring data privacy and regulatory compliance.
  • Experience in working with AWS or GCP
  • Experience with relational and NoSQL databases
  • Experience in full-stack web development (preferably Python)
  • Expertise with data visualisation systems such as Tableau and QuickSight
  • Proficiency in writing advanced SQL queries, with expertise in performance tuning and handling large data volumes
  • Familiarity with ML/AI technologies is a plus
  • Demonstrate strong understanding of development processes and agile methodologies
  • Strong analytical and communication skills. Should be self-driven, highly motivated and ability to learn quickly

 

Description

Data Analytics is at the core of our work, and you will have the opportunity to:

 

  • Design data-warehousing solutions on Amazon S3 with Athena, Redshift, GCP Bigtable, etc. (a minimal Athena sketch follows this list)
  • Lead quick prototypes by integrating data from multiple sources
  • Do advanced business analytics through ad-hoc SQL queries
  • Work on Sales/Finance reporting solutions using Tableau, HTML5, and React applications
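To make the Athena responsibility concrete, here is a hedged boto3 sketch that runs an ad-hoc SQL query over S3-backed data and prints the result rows; the region, database, table, and output bucket are hypothetical.

    import time
    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    qid = athena.start_query_execution(
        QueryString="SELECT region, SUM(revenue) AS revenue FROM sales GROUP BY region",
        QueryExecutionContext={"Database": "analytics"},
        ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
    )["QueryExecutionId"]

    # Poll until the query finishes (fine for ad-hoc work; production jobs
    # would use an orchestrator's sensor instead of a busy loop).
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
        for row in rows:  # note: the first row is the column header
            print([col.get("VarCharValue") for col in row["Data"]])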

 

We build amazing experiences and create depth in knowledge for our internal teams and our leadership. Our team is a friendly bunch of people that help each other grow and have a passion for technology, R&D, modern tools and data science.

 

Our work relies on a deep understanding of the company's needs and an ability to go through vast amounts of internal data such as sales, KPIs, forecasts, inventory, etc. Key expectations of this role include data analytics, building data lakes, and end-to-end reporting solutions. If you have a passion for cost-optimised analytics and data engineering and are eager to learn advanced data analytics at a large scale, this might just be the job for you.

 

Education & Experience

A bachelor’s/master’s degree in Computer Science with 5 to 7 years of experience; previous experience in data engineering is a plus.

 

Read more
icon
Bengaluru (Bangalore)
icon
4 - 8 yrs
icon
₹9L - ₹14L / yr
Data Warehouse (DWH)
Informatica
ETL
CI/CD
SQL

 

Role: Talend Production Support Consultant

 

Brief Job Description:  

  • Be involved in release deployment and monitoring of the ETL pipelines.
  • Closely work with the development team and business team to provide operational support.
  • Candidate should have good knowledge and hands on experience on below tools/technologies:

Talend (Talend Studio, TAC, TMC), SAP BODS, SQL, Hive, and Azure (Azure fundamentals, ADB, ADF)

  • Hands on experience in CI/CD is an added advantage.

As discussed, please provide your LinkedIn profile URL and a valid ID proof.

 

Please also confirm that you will relocate to Bangalore if required.

Read more
DP
Posted by Rajesh C
icon
Bengaluru (Bangalore), Chennai
icon
12 - 15 yrs
icon
₹50L - ₹60L / yr
Data Science
Machine Learning (ML)
ETL
Data Warehouse (DWH)
Amazon Web Services (AWS)
+5 more
Job Title: Data Architect
Job Location: Chennai

Job Summary
The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a
Data Architecture strategy across various Data Lake platforms. You will help develop
reference architecture and roadmaps to build highly available, scalable and distributed
data platforms using cloud based solutions to process high volume, high velocity and
wide variety of structured and unstructured data. This role is also responsible for driving
innovation, prototyping, and recommending solutions. Above all, you will influence how
users interact with Conde Nast’s industry-leading journalism.
Primary Responsibilities
Data Architect is responsible for
• Demonstrated technology and personal leadership experience in architecting,
designing, and building highly scalable solutions and products.
• Enterprise scale expertise in data management best practices such as data integration,
data security, data warehousing, metadata management and data quality.
• Extensive knowledge and experience in architecting modern data integration
frameworks, highly scalable distributed systems using open source and emerging data
architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is
highly desirable.
• Expert ability to evaluate, prototype and recommend data solutions and vendor
technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory
databases.
• Experience with DevOps, Continuous Integration and Continuous Delivery technologies
is desirable.
• This role requires 15+ years of data solution architecture, design and development
delivery experience.
• Solid experience in Agile methodologies (Kanban and SCRUM)
Required Skills
• Very Strong Experience in building Large Scale High Performance Data Platforms.
• Passionate about technology and delivering solutions for difficult and intricate
problems. Current on relational databases and NoSQL databases on the cloud.
• Proven leadership skills; demonstrated ability to mentor, influence, and partner with
cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational databases, NoSQL, ETL (such as Informatica, DataStage, etc.)/ELT,
and data integration technologies.
• Experience in any one of Object Oriented Programming (Java, Scala, Python) and
Spark.
• Creative view of markets and technologies combined with a passion to create the
future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions and Data
Lake concepts is mandatory.
• Good understanding of emerging technologies and its applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS etc.
• Understanding of Hadoop Architecture and Hive SQL
• Knowledge of at least one workflow orchestration tool
• Understanding of Agile framework and delivery

Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure in Workflow Orchestration like Airflow is a plus
● Exposure in any one of the NoSQL database would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a
plus
● Understanding of Digital web events, ad streams, context models

About Condé Nast

CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social
platforms - in other words, a staggering amount of user data. Condé Nast made the right
move to invest heavily in understanding this data and formed a whole new Data team
entirely dedicated to data processing, engineering, analytics, and visualization. This team
helps drive engagement, fuel process innovation, further content enrichment, and increase
market revenue. The Data team aimed to create a company culture where data was the
common language and facilitate an environment where insights shared in real-time could
improve performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The
team at Condé Nast Chennai works extensively with data to amplify its brands' digital
capabilities and boost online revenue. We are broadly divided into four groups, Data
Intelligence, Data Engineering, Data Science, and Operations (including Product and
Marketing Ops, Client Services) along with Data Strategy and monetization. The teams built
capabilities and products to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse
forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the
extraordinary. We are a media company for the future, with a remarkable past. We are
Condé Nast, and It Starts Here.
Read more

With a global provider of Business Process Management.

Agency job
via Jobdost by Mamatha A
icon
Bengaluru (Bangalore), Pune, Delhi, Gurugram, Nashik, Vizag
icon
3 - 5 yrs
icon
₹8L - ₹12L / yr
Oracle
Business Intelligence (BI)
PowerBI
Oracle Warehouse Builder
Informatica
+3 more
Oracle BI developer with 6+ years of experience working on Oracle warehouse design, development and
testing
Good knowledge of Informatica ETL, Oracle Analytics Server
Analytical ability to design warehouse as per user requirements mainly in Finance and HR domain
Good skills to analyze existing ETL, dashboard to understand the logic and do enhancements as per
requirements
Good verbal and written communication skills
Qualifications
Master or Bachelor degree in Engineering/Computer Science /Information Technology
Additional information
Excellent verbal and written communication skills
Read more

Leading Sales Enabler

Agency job
via Qrata by Blessy Fernandes
icon
Bengaluru (Bangalore)
icon
5 - 10 yrs
icon
₹25L - ₹40L / yr
ETL
Spark
Python
Amazon Redshift
  • 5+ years of experience in a Data Engineer role.
  • Proficiency in Linux.
  • Must have SQL knowledge and experience working with relational databases, query authoring (SQL), as well as familiarity with databases including MySQL, Mongo, Cassandra, and Athena.
  • Must have experience with Python/Scala.
  • Must have experience with Big Data technologies like Apache Spark.
  • Must have experience with Apache Airflow (a minimal DAG sketch follows this list).
  • Experience with data pipeline and ETL tools like AWS Glue.
  • Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
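As a minimal illustration of the Airflow requirement, the sketch below defines a two-task daily DAG; the task bodies are placeholders, not a real pipeline.

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull data from the source system")

    def load():
        print("write data to the warehouse")

    with DAG(
        dag_id="example_daily_etl",
        start_date=datetime(2022, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_load = PythonOperator(task_id="load", python_callable=load)
        t_extract >> t_load  # extract runs before load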
Read more
icon
Bengaluru (Bangalore)
icon
4 - 7 yrs
icon
₹6L - ₹30L / yr
Informatica
ETL
SQL
Linux/Unix
Oracle
+1 more
  • Experience implementing large-scale ETL processes using Informatica PowerCenter.
  • Design high-level ETL process and data flow from the source system to target databases.
  • Strong experience with Oracle databases and strong SQL.
  • Develop & unit test Informatica ETL processes for optimal performance utilizing best practices.
  • Performance tune Informatica ETL mappings and report queries.
  • Develop database objects like Stored Procedures, Functions, Packages, and Triggers using SQL and PL/SQL.
  • Hands-on Experience in Unix.
  • Experience in Informatica Cloud (IICS).
  • Work with appropriate leads and review high-level ETL design, source to target data mapping document, and be the point of contact for any ETL-related questions.
  • Good understanding of project life cycle, especially tasks within the ETL phase.
  • Ability to work independently and multi-task to meet critical deadlines in a rapidly changing environment.
  • Excellent communication and presentation skills.
  • Experience working effectively in onsite/offshore delivery models.
Read more

Our Client company is into Computer Software. (EC1)

Agency job
via Multi Recruit by Manjunath Multirecruit
icon
Bengaluru (Bangalore)
icon
3 - 8 yrs
icon
₹13L - ₹22L / yr
ETL
IDQ
  • Participate in planning, implementation of solutions, and transformation programs from legacy system to a cloud-based system
  • Work with the team on Analysis, High level and low-level design for solutions using ETL or ELT based solutions and DB services in RDS
  • Work closely with the architect and engineers to design systems that effectively reflect business needs, security requirements, and service level requirements
  • Own deliverables related to design and implementation
  • Own Sprint tasks and drive the team towards the goal while understanding the change and release process defined by the organization.
  • Excellent communication skills, particularly those relating to complex findings and presenting them to ensure audience appeal at various levels of the organization
  • Ability to integrate research and best practices into problem avoidance and continuous improvement
  • Must be able to perform as an effective member in a team-oriented environment, maintain a positive attitude, and achieve desired results while working with minimal supervision


Basic Qualifications:

  • Minimum of 5+ years of technical work experience in the implementation of complex, large scale, enterprise-wide projects including analysis, design, core development, and delivery
  • Minimum of 3+ years of experience with expertise in Informatica ETL, Informatica Power Center, and Informatica Data Quality
  • Experience with Informatica MDM tool is good to have
  • Should be able to understand the scope of the work and ask for clarifications
  • Should have advanced SQL skills, including complex PL/SQL coding
  • Knowledge of Agile is a plus
  • Well-versed with SOAP web services and REST APIs.
  • Hands-on development experience in Java would be a plus.

 

 

 

Read more

Leading Sales Platform

Agency job
via Qrata by Blessy Fernandes
icon
Bengaluru (Bangalore)
icon
5 - 10 yrs
icon
₹30L - ₹45L / yr
Big Data
ETL
Spark
Data engineering
Data governance
+4 more
  • Work with product managers and development leads to create testing strategies
  • Develop and scale an automated data validation framework (see the sketch below)
  • Build and monitor key metrics of data health across the entire Big Data pipeline
  • Early alerting and escalation process to quickly identify and remedy quality issues before something ever goes ‘live’ in front of the customer
  • Build/refine tools and processes for quick root-cause diagnostics
  • Contribute to the creation of quality assurance standards, policies, and procedures to influence the DQ mind-set across the company

Required skills and experience:
  • Solid experience working in Big Data ETL environments with Spark and Java/Scala/Python
  • Strong experience with AWS cloud technologies (EC2, EMR, S3, Kinesis, etc.)
  • Experience building monitoring/alerting frameworks with tools like New Relic and escalations with Slack/email/dashboard integrations, etc.
  • Executive-level communication, prioritization, and team leadership skills
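A hedged sketch of what one check in such a data validation framework might look like, using pandas; the thresholds and column names are illustrative.

    from typing import List
    import pandas as pd

    def validate(df: pd.DataFrame) -> List[str]:
        # Return a list of failed checks; empty means the batch is healthy.
        failures = []
        if len(df) == 0:
            failures.append("row count is zero")
        null_rate = df["customer_id"].isna().mean()
        if null_rate > 0.01:
            failures.append(f"customer_id null rate too high: {null_rate:.2%}")
        if df["order_id"].duplicated().any():
            failures.append("duplicate order_id values found")
        return failures

    df = pd.DataFrame({"order_id": [1, 2, 2], "customer_id": [10, None, 12]})
    issues = validate(df)
    # In production, any failure would trigger the alerting/escalation path
    # (Slack/email/dashboard) before the data ever goes live.
    print(issues or "all checks passed")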
Read more

Leading StartUp Focused On Employee Growth

Agency job
via Qrata by Blessy Fernandes
icon
Bengaluru (Bangalore)
icon
2 - 6 yrs
icon
₹25L - ₹45L / yr
Data engineering
Data Analytics
Big Data
Apache Spark
airflow
+8 more
● 2+ years of experience in a Data Engineer role.
● Proficiency in Linux.
● Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
● Must have SQL knowledge and experience working with relational databases, query authoring (SQL), as well as familiarity with databases including MySQL, Mongo, Cassandra, and Athena.
● Must have experience with Python/Scala.
● Must have experience with Big Data technologies like Apache Spark.
● Must have experience with Apache Airflow.
● Experience with data pipelines and ETL tools like AWS Glue.
Read more
icon
Bengaluru (Bangalore)
icon
4 - 8 yrs
icon
₹20L - ₹35L / yr
Data engineering
Data Engineer
Big Data
Big Data Engineer
Python
+10 more

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

Technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, reactive programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume the data from multiple sources

  • Writing a data transformation layer using DBT to transform millions of records into warehouse-ready models.

  • Implement Data warehouse entities with common re-usable data model designs with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)

 

What To Bring

  • 3+ years of software development experience; startup experience is a plus.

  • Past experience of working with Airflow and DBT is preferred

  • 2+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experienced with the formulation of ideas; building proof-of-concept (POC) and converting them to production-ready projects

  • Experience building and deploying applications on on-premise, AWS, or Google Cloud infrastructure

  • Basic understanding of Kubernetes and Docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.

 

 

 

Read more

world’s fastest growing consumer internet company

Agency job
via Hunt & Badge Consulting Pvt Ltd by Chandramohan Subramanian
icon
Bengaluru (Bangalore)
icon
5 - 8 yrs
icon
₹20L - ₹35L / yr
Big Data
Data engineering
Big Data Engineering
Data Engineer
ETL
+5 more

Data Engineer JD:

  • Designing, developing, constructing, installing, testing, and maintaining the complete data management and processing systems.
  • Building a highly scalable, robust, fault-tolerant, and secure user data platform adhering to data protection laws.
  • Taking care of the complete ETL (Extract, Transform & Load) process.
  • Ensuring the architecture is planned in such a way that it meets all the business requirements.
  • Exploring new ways of using existing data to provide more insights out of it.
  • Proposing ways to improve data quality, reliability, and efficiency of the whole system.
  • Creating data models to reduce system complexity and hence increase efficiency and reduce cost.
  • Introducing new data management tools and technologies into the existing system to make it more efficient.
  • Setting up monitoring and alarming on data pipeline jobs to detect failures and anomalies.

What do we expect from you?

  • BS/MS in Computer Science or equivalent experience
  • 5 years of recent experience in Big Data engineering.
  • Good experience working with Hadoop and Big Data technologies like HDFS, Pig, Hive, Zookeeper, Storm, Spark, Airflow, and NoSQL systems
  • Excellent programming and debugging skills in Java or Python.
  • Apache Spark, Python; hands-on experience in deploying ML models
  • Has worked on streaming and real-time pipelines
  • Experience with Apache Kafka, or has worked with any of Spark Streaming, Flume, or Storm (a minimal Kafka consumer sketch follows this list)
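As a hedged illustration of the streaming requirement, the sketch below consumes JSON events with the kafka-python client; the broker address, topic, and consumer group are hypothetical.

    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "user-events",
        bootstrap_servers=["localhost:9092"],
        group_id="etl-consumers",
        auto_offset_reset="earliest",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    for message in consumer:
        event = message.value
        # Downstream: validate, enrich, and land the event into the platform.
        print(f"partition={message.partition} offset={message.offset} event={event}")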

Focus Area:

  • R1: Data Structures & Algorithms
  • R2: Problem Solving + Coding
  • R3: Design (LLD)
 

Read more
DP
Posted by Biby Mathew
icon
Bengaluru (Bangalore), Thiruvananthapuram (Trivandrum), Cochin
icon
10 - 15 yrs
icon
₹15L - ₹35L / yr
Architecture
Data architecture
Data Architect
Microsoft SQL Server
Performance Testing
+5 more
Responsibilities
• Lead the Development Team.
• Strong in SQL Coding and performance tuning.
• Strong in Data warehousing concepts and design.
• Strong in ETL process and implementation.
• Participate in database design and architecture to support application development
activities.
• Experience in writing and reviewing complex functions, stored procedures, and custom
scripts to support application development.
• Tuning application queries by analysing execution plans.
• Experience in data migration between different DB systems.
• Expertise in troubleshooting day-to-day production issues in the DB.
• Oversee progress of development team to ensure consistency with initial design
Skills and Qualifications
• 8+ years of professional experience with at least 2 years as an architect.
• Deep knowledge in SQL Server is a must. Knowledge in MySQL and Vertica would be an
asset.
• Ability to work in a very dynamic environment & ready to support production
environment.
• Organised, self-motivated, team player, actions and result oriented.
• Ability to successfully work under tight timelines.
• Good verbal and written communication skills.
• Ability to meet production targets and milestones.
Read more
icon
Bengaluru (Bangalore)
icon
3 - 8 yrs
icon
₹6L - ₹14L / yr
SQL Server Integration Services (SSIS)
SSIS
SQL Server Reporting Services (SSRS)
SSRS
Microsoft SQL Server
+4 more

Primary Responsibilities:

  • We need strong SQL database development skills using MS SQL Server.
  • Strong skills in SQL Server Integration Services (SSIS) for ETL development.
  • Strong experience in full life-cycle database development projects using SQL Server.
  • Experience in designing and implementing complex logical and physical data models.
  • Exposure to web services and web technologies (JavaScript, jQuery, CSS, HTML).
  • Knowledge of other high-level languages (PERL, Python) will be an added advantage
  • Nice to have SQL certification.

Good to have:

  • Bachelor’s degree or a minimum of 3+ years of formal industry/professional experience in software development – Healthcare background preferred.
Read more

AI enabled SAAS organisation

Agency job
via Rize @ People Konnect Pvt. Ltd. by Kalindi Maheshwari
icon
Bengaluru (Bangalore)
icon
1 - 8 yrs
icon
₹5L - ₹40L / yr
Data engineering
Data Engineer
AWS Lambda
Microservices
ETL
+8 more
Required Skills & Experience:
• 2+ years of experience in data engineering & strong understanding of data engineering principles using big data technologies
• Excellent programming skills in Python is mandatory
• Expertise in relational databases (MSSQL/MySQL/Postgres) and in SQL. Exposure to NoSQL databases such as Cassandra or MongoDB will be a plus.
• Exposure to deploying ETL pipelines with tools such as Airflow, Docker containers, and Lambda functions
• Experience with AWS cloud services such as AWS CLI, Glue, Kinesis, etc.
• Experience using Tableau for data visualization is a plus
• Ability to demonstrate a portfolio of projects (GitHub, papers, etc.) is a plus
• Motivated, can-do attitude and desire to make a change is a must
• Excellent communication skills
Read more

Our Client company is into Computer Software. (EC1)

Agency job
via Multi Recruit by Fiona RKS
icon
Bengaluru (Bangalore)
icon
3 - 5 yrs
icon
₹12L - ₹15L / yr
ETL
Snowflake
Data engineering
SQL
+1 more
  • Create and maintain optimal data pipeline architecture
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Author data services using a variety of programming languages
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Snowflake Cloud Datawarehouse as well as SQL and Azure ‘big data’ technologies (a minimal Snowflake load sketch follows this list)
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and Azure regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
  • Work in an Agile environment with Scrum teams.
  • Ensure data quality and help in achieving data governance.
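To make the Snowflake loading step concrete, here is a hedged sketch using the snowflake-connector-python package and an ELT-style COPY INTO from a stage; the account, credentials, stage, and table names are hypothetical.

    import snowflake.connector

    conn = snowflake.connector.connect(
        user="etl_user", password="***", account="example-account",
        warehouse="ETL_WH", database="ANALYTICS", schema="PUBLIC",
    )
    cur = conn.cursor()

    # ELT pattern: land raw files in a stage, then let Snowflake do the load.
    cur.execute("""
        COPY INTO raw_orders
        FROM @raw_stage/orders/
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    """)
    print(cur.fetchall())  # one result row per loaded file
    cur.close()
    conn.close()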

Basic Qualifications

  • 3+ years of experience in a Data Engineer or Software Engineer role
  • Undergraduate degree required (Graduate degree preferred) in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.
  • Experience using the following software/tools:
  • Experience with “Snowflake Cloud Datawarehouse”
  • Experience with Azure cloud services: ADLS, ADF, ADLA, AAS
  • Experience with data pipeline and workflow management tools
  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
  • Understanding of Datawarehouse (DWH) systems, and migration from DWH to data lakes/Snowflake
  • Understanding of ELT and ETL patterns and when to use each. Understanding of data models and transforming data into the models
  • Strong analytic skills related to working with unstructured datasets
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management
  • Experience supporting and working with cross-functional teams in a dynamic environment.
Read more
DP
Posted by Jaya Harjai
icon
Bengaluru (Bangalore)
icon
3 - 4 yrs
icon
₹18L - ₹35L / yr
Python
SQL
Data engineering
Big Data
Data Warehouse (DWH)
+3 more

Responsibilities:

  • Design, construct, install, test and maintain data pipeline and data management systems.
  • Ensure that all systems meet the business/company requirements as well as industry practices.
  • Integrate up-and-coming data management and software engineering technologies into existing data structures.
  • Build processes for data mining, data modeling, and data production.
  • Create custom software components and analytics applications.
  • Collaborate with members of your team (e.g., Data Architects, the Software team, Data Scientists) on the project's goals.
  • Recommend different ways to constantly improve data reliability and quality.

 

Requirements:

  • Experience in a related field with real-world skills and testimonials from former employers.
  • Familiarity with data warehouses like Redshift, BigQuery, and Athena.
  • Familiarity with data processing systems like Flink, Spark, and Storm.
  • Proficiency in Python and SQL. Possible work experience and proof of technical expertise.
  • You may also consider a Master's degree in computer engineering or science in order to fine-tune your skills while on the job. (Although a Master's isn't required, it is always appreciated).
  • Intellectual curiosity to find new and unusual ways of how to solve data management issues.
  • Ability to approach data organization challenges while keeping an eye on what's important.
  • Basic data science knowledge is a must; you should understand a bit of analytics.
Read more
DP
Posted by Rajendra Dasigari
icon
Bengaluru (Bangalore)
icon
2 - 7 yrs
icon
₹6L - ₹12L / yr
ETL
Data Warehouse (DWH)
Apache Hive
Informatica
Data engineering
+5 more
1. Create and maintain optimal data pipeline architecture
2. Assemble large, complex data sets that meet business requirements
3. Identify, design, and implement internal process improvements
4. Optimize data delivery and re-design infrastructure for greater scalability
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies
6. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
7. Work with internal and external stakeholders to assist with data-related technical issues and support data infrastructure needs
8. Create data tools for analytics and data scientist team members
 
Skills Required:
 
1. Working knowledge of ETL on any cloud (Azure / AWS / GCP)
2. Proficient in Python (Programming / Scripting)
3. Good understanding of any of the data warehousing concepts (Snowflake / AWS Redshift / Azure Synapse Analytics / Google Big Query / Hive)
4. In-depth understanding of principles of database structure
5.  Good understanding of any of the ETL technologies (Informatica PowerCenter / AWS Glue / Data Factory / SSIS / Spark / Matillion / Talend / Azure)
6. Proficient in SQL (query solving)
7. Knowledge of change management / version control tools (VSS, Azure DevOps, TFS, GitHub, Bitbucket) and CI/CD (Jenkins)
Read more

IT solutions specialized in Apps Lifecycle management. (MG1)

Agency job
via Multi Recruit by Ayub Pasha
icon
Bengaluru (Bangalore)
icon
5 - 6 yrs
icon
₹8L - ₹10L / yr
Data migration
Data Warehouse (DWH)
ETL
SQL
PostgreSQL
+4 more
  • Excellent working knowledge of data warehousing / data migration activities using an ETL tool.
  • Strong Data Integration, PostgreSQL/Oracle Database skills, Shell Scripting, Python programming, and development know-how.
  • Hands-on experience in working with and generating XML documents.
  • Good analytical and business process understanding capability.
  • Familiarized with Data Models, Source-Target Data Mapping, Transactional, and Master Data concepts.
  • Well-experienced in High level/Detailed design, Performance tuning of ETL jobs.
  • Very good communication skills, interpersonal skills, stakeholder management skills, self-motivated, quick learner, team player.
  • Exposure to After Sales Business Domain is highly preferred.
  • Experience using HP ALM, Jira for ticketing.
  • Experience in release management

 

Read more
DP
Posted by Anand Pandey
icon
Bengaluru (Bangalore)
icon
1 - 2 yrs
icon
₹5L - ₹7L / yr
Business Analysis
Windows Azure
PySpark
SQL
Data Warehouse (DWH)
+4 more
RESPONSIBILITIES & OWNERSHIP: THINGS THE ROLE CAN'T MISS
  • Setting KPIs, monitoring key trends, and helping stakeholders by generating insights from the data delivered.
  • Understanding user behaviour and performing root-cause analysis of changes in data trends across different verticals.
  • Get answers to business questions, identify areas of improvement, and identify opportunities for growth.
  • Work on ad-hoc requests for data and analysis.
  • Work with Cross functional Teams as when required to automate reports and create informative dashboards based on problem statements.

WHO COULD BE A GREAT FIT:
Functional Experience
  • 1-2 years of experience working in Analytics as a Business or Data Analyst.
  • Analytical mind with a problem-solving aptitude.
  • Familiarity with Microsoft Azure and AWS, PySpark, Python, Databricks, and Metabase; understanding of APIs, data warehouses, ETL, etc.
  • Proficient in writing Complex Queries in SQL.
  • Experience in Performing hands-on analysis on data and across multiple datasets and databases primarily using Excel, Google Sheets and R.
  • Ability to work across cross-functional teams proactively.
Read more

at 1CH

DP
Posted by Sathish Sukumar
icon
Chennai, Bengaluru (Bangalore), Hyderabad, NCR (Delhi | Gurgaon | Noida), Mumbai, Pune
icon
4 - 15 yrs
icon
₹10L - ₹25L / yr
Data engineering
Data engineer
ETL
SSIS
ADF
+3 more
  • Expertise in designing and implementing enterprise scale database (OLTP) and Data warehouse solutions.
  • Hands-on experience in implementing Azure SQL Database, Azure SQL Data Warehouse (Azure Synapse Analytics), and big data processing using Azure Databricks and Azure HDInsight.
  • Expert in writing T-SQL programming for complex stored procedures, functions, views and query optimization.
  • Should be aware of Database development for both on-premise and SAAS Applications using SQL Server and PostgreSQL.
  • Experience in ETL and ELT implementations using Azure Data Factory V2 and SSIS.
  • Experience and expertise in building machine learning models using logistic and linear regression, decision tree, and random forest algorithms.
  • PolyBase queries for exporting and importing data into Azure Data Lake.
  • Building data models both tabular and multidimensional using SQL Server data tools.
  • Writing data preparation, cleaning and processing steps using Python, SCALA, and R.
  • Programming experience using python libraries NumPy, Pandas and Matplotlib.
  • Implementing NoSQL databases and writing queries using Cypher.
  • Designing end user visualizations using Power BI, QlikView and Tableau.
  • Experience working with all versions of SQL Server 2005/2008/2008R2/2012/2014/2016/2017/2019
  • Experience using the expression languages MDX and DAX.
  • Experience in migrating on-premise SQL server database to Microsoft Azure.
  • Hands on experience in using Azure blob storage, Azure Data Lake Storage Gen1 and Azure Data Lake Storage Gen2.
  • Performance tuning complex SQL queries, hands on experience using SQL Extended events.
  • Data modeling using Power BI for Adhoc reporting.
  • Raw data load automation using T-SQL and SSIS
  • Expert in migrating existing on-premise database to SQL Azure.
  • Experience in using U-SQL for Azure Data Lake Analytics.
  • Hands on experience in generating SSRS reports using MDX.
  • Experience in designing predictive models using Python and SQL Server.
  • Developing machine learning models using Azure Databricks and SQL Server
Read more

Curl Analytics

Agency job
via wrackle by Naveen Taalanki
icon
Bengaluru (Bangalore)
icon
5 - 10 yrs
icon
₹15L - ₹30L / yr
ETL
Big Data
Data engineering
Apache Kafka
PySpark
+11 more
What you will do
  • Bring in industry best practices around creating and maintaining robust data pipelines for complex data projects with/without AI component
    • programmatically ingesting data from several static and real-time sources (incl. web scraping)
    • rendering results through dynamic interfaces incl. web / mobile / dashboard with the ability to log usage and granular user feedback
    • performance tuning and optimal implementation of complex Python scripts (using SPARK), SQL (using stored procedures, HIVE), and NoSQL queries in a production environment
  • Industrialize ML / DL solutions and deploy and manage production services; proactively handle data issues arising on live apps
  • Perform ETL on large and complex datasets for AI applications - work closely with data scientists on performance optimization of large-scale ML/DL model training
  • Build data tools to facilitate fast data cleaning and statistical analysis
  • Ensure data architecture is secure and compliant
  • Resolve issues escalated from Business and Functional areas on data quality, accuracy, and availability
  • Work closely with APAC CDO and coordinate with a fully decentralized team across different locations in APAC and global HQ (Paris).

You should be

  • Expert in structured and unstructured data in traditional and Big Data environments – Oracle / SQL Server, MongoDB, Hive / Pig, BigQuery, and Spark
  • Have excellent knowledge of Python programming both in traditional and distributed models (PySpark)
  • Expert in shell scripting and writing schedulers
  • Hands-on experience with Cloud - deploying complex data solutions in hybrid cloud / on-premise environment both for data extraction/storage and computation
  • Hands-on experience in deploying production apps using large volumes of data with state-of-the-art technologies like Dockers, Kubernetes, and Kafka
  • Strong knowledge of data security best practices
  • 5+ years experience in a data engineering role
  • Science / Engineering graduate from a Tier-1 university in the country
  • And most importantly, you must be a passionate coder who really cares about building apps that can help people do things better, smarter, and faster even when they sleep
Read more
icon
Remote, Bengaluru (Bangalore), Mysore
icon
7 - 10 yrs
icon
₹10L - ₹20L / yr
Oracle
database
PL/SQL
Database migration
Argus
+3 more
  • 7 years of hands-on experience in database development
  • Very strong and hands-on in Oracle Database and PL/SQL development
  • Hands on experience in designing solutions and developing for data migration projects using Oracle PL/SQL
  • Experience with Oracle Argus Safety, ArisG and other Safety/Clinical systems
  • Working experience in development of ETL process, DB Design and Data Structures
  • Excellent knowledge of relational databases: tables, views, constraints, indexes (B-tree, bitmap, and function-based), object types, stored procedures, functions, packages and triggers, dynamic SQL, SET TRANSACTION, and PL/SQL cursor variables with REF CURSOR
  • Excellent written and verbal communication skills
Read more