ETL Developer

Posted by Karthik Padmanabhan
Remote only
- yrs
₹1L - ₹20L / yr
Informatica PowerCenter
Job description

If you are an outstanding ETL Developer with a passion for technology who is looking forward to being part of a great development organization, we would love to hear from you. We offer technology consultancy services to our Fortune 500 customers, with a primary focus on digital technologies. Our customers are looking for top-tier talent in the industry and are willing to compensate based on your skill and expertise. In most cases, the nature of our engagement is contract-based. If you are looking for the next big step in your career, we would be glad to partner with you.


Below is the job description for your review.

Extensive hands-on experience in designing and developing ETL packages using SSIS

Extensive experience in performance tuning of SSIS packages

In-depth knowledge of data warehousing concepts and ETL systems, and of relational databases like SQL Server 2012/2014.

About PriceSenz
20-100 employees
Similar jobs
Founded 2020  •  Services  •  20-100 employees  •  Bootstrapped
Stored Procedures
Windows Azure
Data Warehouse (DWH)
Remote only
0 - 3 yrs
₹5L - ₹18L / yr

Who are we?


We are incubators of high-quality, dedicated software engineering teams for our clients. We work with product organizations to help them scale or modernize their legacy technology solutions. We work with startups to help them operationalize their ideas efficiently. Incubyte strives to find people who are passionate about coding, learning, and growing along with us. We work with a limited number of clients at a time on dedicated, long-term commitments, with the aim of bringing a product mindset into services.


What we are looking for


We’re looking to hire software craftspeople. People who are proud of the way they work and the code they write. People who believe in and are evangelists of extreme programming principles. High-quality, motivated, and passionate people make great teams. We strongly believe in being a DevOps organization, where developers own the entire release cycle and thus get to work not only with programming languages but also with infrastructure technologies in the cloud.


What you’ll be doing


First, you will be writing tests. You’ll be writing self-explanatory, clean code. Your code will produce the same, predictable results, over and over again. You’ll be making frequent, small releases. You’ll be working in pairs. You’ll be doing peer code reviews.
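The test-first habit described above can be sketched with Python's built-in unittest module. The `slugify` function and its behaviour are purely illustrative, not something specified in the posting:

```python
import unittest

def slugify(title):
    """Lower-case a job title and join its words with hyphens."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # In a test-first workflow, these assertions are written before
    # slugify exists, and the implementation grows until they pass.
    def test_basic_title(self):
        self.assertEqual(slugify("ETL Developer"), "etl-developer")

    def test_extra_whitespace(self):
        self.assertEqual(slugify("  Data   Engineer "), "data-engineer")
```

Run with `python -m unittest`; in the red-green-refactor cycle, both tests would first be watched to fail before the one-line implementation is written.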


You will work in a product team. Building products and rapidly rolling out new features and fixes.


You will be responsible for all aspects of development – from understanding requirements, writing stories, analyzing the technical approach to writing test cases, development, deployment, and fixes. You will own the entire stack from the front end to the back end to the infrastructure and DevOps pipelines. And, most importantly, you’ll be making a pledge that you’ll never stop learning!


Skills you need in order to succeed in this role

Most Important: Integrity of character, diligence and the commitment to do your best

  • Technologies:
    • Azure Data Factory
    • MongoDB
    • SSIS/Apache NiFi (Good to have)
    • Python/Java
    • SOAP/REST Web Services
    • Stored Procedures
    • SQL
    • Test Driven Development
  • Experience with:
    • Data warehousing and data lake initiatives on the Azure cloud
    • Cloud DevOps solutions and cloud data and application migration
    • Database concepts and optimization of complex queries
    • Database versioning, backups, restores and migration, and automation of the same
    • Data security and integrity
Job posted by
Lifi Lawrance
Founded 2016  •  Products & Services  •  20-100 employees  •  Raised funding
Data Science
Machine Learning (ML)
Remote only
3 - 5 yrs
₹10L - ₹18L / yr



Who We Are

At ThinkBumblebee, we are committed to understanding the growth that comes from businesses making better use of their data. Through our dedication to the latest tech stack and engineering methods, we help companies put their customers at the centre of the business.


We're looking for a motivated, analytically-minded candidate who wants to be part of a start-up style environment. Ideally, you have a broad skill set and are willing to take on (or teach yourself to take on) any challenge.


  • Understand and solve complex business problems with sound analytical prowess and help the business with impactful insights for decision-making
  • Make sure any roadblocks in implementation are brought to the notice of the relevant stakeholders so that project timelines are not affected
  • Document every aspect of the project in a standard way, for future reference
  • Articulate technical complexities to senior leadership in a simple and easy manner.


  • Understand the business problem and translate that to a data-driven analytical/statistical problem; Own the solution building process
  • Create appropriate datasets and develop statistical data models
  • Translate complex statistical analysis over large datasets into insights and actions
  • Analyse results and present to stakeholders
  • Communicate the insights using business-friendly presentations
  • Help and mentor other Associate Data Scientists
  • Build production-ready project pipelines
  • Build dashboards for easy consumption of solutions


  • Work with stakeholders to understand their business problems, translate those problems into data-driven analytical solutions which can best address those business problems
  • Use data science/analytics prowess to provide answers to key business questions. Could be in any domain (e.g. Retail, Media, OTT)
  • Summarize insights and recommendations to be presented back to the business
  • Use innovative methods to continuously improve the quality of statistical models


  • Building an efficient model that helps the business stakeholder to measure outcomes of their decisions
  • High-quality, production-ready code written within the given timelines
  • Following processes and adhering to documentation goals
  • High quality presentations
  • Questions from business stakeholders answered satisfactorily within an agreed time


  • Minimum 3 years in a data science role, with experience building end-to-end solutions as well as implementing or operationalizing them
  • Must have done EDA extensively for a minimum of 3 years
  • Expertise strongly desired in building statistical and machine learning algorithms for:
    • Regression
    • Time series
    • Ensemble learning
    • Bayesian stats
    • Classification
    • Clustering
    • NLP
    • Anomaly Detection
  • Hands-on experience in Bayesian Stats would be preferred
  • Exposure to optimization and simulation techniques (good to have)


  • Proven skills in translating statistics into insights. Sound knowledge in statistical inference and hypothesis testing
  • English (Fluent)
  • Microsoft Office (mandatory)
  • Expert in Python (mandatory)
  • Advanced Excel (mandatory)
  • SQL (at least one database)
  • Reporting Tool (Any Tool – Must be open to learning tools based on the requirement)


  • B Tech (or equivalent) in any branch or degree in Statistics, Applied Statistics, Economics, Econometrics, Operations Research or any other quantitative fields

Mandatory Requirements

  • Strong written and verbal English skills
  • Strong analytical, logic and quantitative ability
  • Desire to solve business problems in any domain through analytical/data-driven solution
  • Ability to generate business insights from statistical model outputs
  • Microsoft Office skills (Excel, PowerPoint, Word)
  • Python
Job posted by
Founded 2016  •  Products & Services  •  20-100 employees  •  Raised funding
Data Science
Supervised learning
Unsupervised learning
Linear regression
Data Visualization
Statistical Modeling
Object Oriented Programming (OOPs)
Remote only
2 - 5 yrs
₹6L - ₹15L / yr

At ThinkBumbleBee Analytics, we derive insights that allow our clients to make scientific decisions. We believe in demanding more from the fields of Mathematics, Computer Science and Business Logic. Combine these and we show our clients a 360-degree view of their business.

The Person

  • Articulate
  • High Energy
  • Passion to learn
  • High sense of ownership
  • Ability to work in a fast-paced and deadline driven environment
  • Loves technology
  • Highly skilled at Data Interpretation
  • Problem solver
  • Ability to narrate a story to business stakeholders
  • Ability to generate insights and turn them into actions and decisions

Skills to work in a challenging, complex project environment

  • Naturally curious, with a passion for understanding consumer behavior
  • A high level of motivation, passion and a strong sense of ownership
  • Excellent communication skills, needed to manage an incredibly diverse slate of work, clients, and team personalities
  • Flexibility to work on multiple projects in a deadline-driven, fast-paced environment
  • Ability to work with ambiguity and manage chaos

Technical Skills

  1. Solid understanding and experience in statistical methods and programming
  2. Experience in both supervised (Random Forest, A/B Tests, Linear Regression, Classification) and unsupervised learning (clustering, PCA, LDA)
  3. Experience in any Cloud Platform (AWS/GCP/Azure)
  4. High level of proficiency in the following: 
A. SQL Databases:
  1. Adapt SQL to query other relational databases
  2. Understanding ER Diagram
  3. Write Production Level Code (Unit Testing, Modularity, etc)
  4. Write efficient SQL queries
B. Knowledge and hands-on experience in these:
  1. Build Python (Pandas) code using variables, relational operators, logical operators, loops, and functions
  2. OOP
  3. Multiprocessing libraries
  4. Dask
  5. Visualisation - Seaborn, Matplotlib

C. Implementation of statistics using Python:

  1. Descriptive statistics using Python code - calculating mean, median, mode, standard deviation, and percentiles; and identifying outliers
  2. Use Python code to test hypotheses, calculate correlations and predict a continuous variable using regression
  3. Sampling Procedures
  4. Inferential Statistics
  5. Parametric and Non-Parametric Tests
  6. Non-Linear Regression
  7. Validate regression assumptions
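As a rough illustration of the descriptive-statistics items above, here is a sketch using only Python's standard library (the data values are invented for the example):

```python
import statistics

data = [12, 15, 14, 15, 18, 20, 15, 14, 95]  # 95 is an obvious outlier

# Basic descriptive statistics.
mean = statistics.mean(data)
median = statistics.median(data)
mode = statistics.mode(data)
stdev = statistics.stdev(data)

# Percentiles: statistics.quantiles with n=4 returns the quartiles Q1, Q2, Q3.
q1, q2, q3 = statistics.quantiles(data, n=4)

# Flag outliers with the common 1.5 * IQR rule.
iqr = q3 - q1
outliers = [x for x in data if x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr]

print(mean, median, mode, outliers)
```

In practice the same quantities are usually computed with pandas or NumPy over a DataFrame column; the stdlib version just keeps the sketch dependency-free.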
Role includes:
1. Gather, process, analyse and extract conclusions out of complex, high-volume, high-dimensional data coming from varying sources
2. Generate insights and recommendations for clients
3. Tweak existing models based on changes in requirements, data sources, data types, etc.
4. Make critical code fixes within the given SLA
5. Write APIs
6. Interact proactively with colleagues and counterparts
7. Take full ownership of the product at its various development and deployment stages.


  • BE/BTech in Computer Science/Statistics
  • Should have a minimum 3 years’ experience in Data Science 
Only immediate joiners should apply.
Job posted by
Isha Awasthy
Founded 2012  •  Products & Services  •  100-1000 employees  •  Profitable
Big Data
Hibernate (Java)
Apache Kafka
Real time media streaming
Apache Hive
Apache HBase
Remote, Pune
3 - 8 yrs
₹4L - ₹15L / yr
Job Title/Designation:
Mid / Senior Big Data Engineer
Job Description:
Role: Big Data Engineer
Number of open positions: 5
Location: Pune
At Clairvoyant, we're building a thriving big data practice to help enterprises enable and accelerate the adoption of big data and cloud services. In the big data space, we lead and serve as innovators, troubleshooters, and enablers. The big data practice at Clairvoyant focuses on solving our customers' business problems by delivering products designed with best-in-class engineering practices and a commitment to keeping the total cost of ownership to a minimum.
Must Have:
  • 4-10 years of experience in software development.
  • At least 2 years of relevant work experience on large scale Data applications.
  • Strong coding experience in Java is mandatory
  • Good aptitude, strong problem solving abilities, and analytical skills, ability to take ownership as appropriate
  • Should be able to code, debug, performance-tune, and deploy apps to production.
  • Should have good working experience on:
    • Hadoop ecosystem (HDFS, Hive, Yarn, file formats like Avro/Parquet)
    • Kafka
    • J2EE frameworks (Spring/Hibernate/REST)
    • Spark Streaming or any other streaming technology
  • Ability to work on the sprint stories to completion along with Unit test case coverage.
  • Experience working in Agile Methodology
  • Excellent communication and coordination skills
  • Knowledge of (and preferably hands-on experience with) UNIX environments and various continuous integration tools.
  • Must be able to integrate quickly into the team and work independently towards team goals
Role & Responsibilities:
  • Take the complete responsibility of the sprint stories' execution
  • Be accountable for the delivery of the tasks in the defined timelines with good quality.
  • Follow the processes for project execution and delivery.
  • Follow agile methodology
  • Work with the team lead closely and contribute to the smooth delivery of the project.
  • Understand/define the architecture and discuss its pros and cons with the team
  • Take part in brainstorming sessions and suggest improvements to the architecture/design.
  • Work with other team leads to get the architecture/design reviewed.
  • Work with the clients and counterparts (in the US) of the project.
  • Keep all the stakeholders updated about the project/task status/risks/issues if there are any.
Education: BE/B.Tech from reputed institute.
Experience: 4 to 9 years
Keywords: java, scala, spark, software development, hadoop, hive
Locations: Pune
Job posted by
Taruna Roy
Founded 2015  •  Products & Services  •  employees  •  Profitable
Windows Azure
Data engineering
Remote, Bengaluru (Bangalore), Hyderabad
3 - 9 yrs
₹8L - ₹20L / yr
Must-Have Skills:
  • Good experience in PySpark, including DataFrame core functions and Spark SQL
  • Good experience in SQL DBs - able to write queries of fair complexity
  • Excellent experience in Big Data programming for data transformation and aggregations
  • Good at ELT architecture: business rules processing and data extraction from a Data Lake into data streams for business consumption
  • Good customer communication
  • Good analytical skills
Technology Skills (Good to Have):
  • Building and operationalizing large scale enterprise data solutions and applications using one or more of AZURE data and analytics services in combination with custom solutions - Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsights, Databricks, CosmosDB, EventHub/IOTHub.
  • Experience in migrating on-premise data warehouses to data platforms on AZURE cloud. 
  • Designing and implementing data engineering, ingestion, and transformation functions
  • Azure Synapse or Azure SQL data warehouse
  • Spark on Azure (available in HDInsight and Databricks)
Good to Have: 
  • Experience with Azure Analysis Services
  • Experience in Power BI
  • Experience with third-party solutions like Attunity/StreamSets, Informatica
  • Experience with PreSales activities (Responding to RFPs, Executing Quick POCs)
  • Capacity Planning and Performance Tuning on Azure Stack and Spark.
Job posted by
Evelyn Charles
Founded 2018  •  Products & Services  •  100-1000 employees  •  Profitable
Data modeling
Chennai, Bengaluru (Bangalore)
2 - 4 yrs
Best in industry
We are looking for a developer to design and deliver strategic data-centric insights leveraging the next generation analytics and BI technologies. We want someone who is data-centric and insight-centric, less report centric. We are looking for someone wishing to make an impact by enabling innovation and growth; someone with passion for what they do and a vision for the future.

  • Be the analytical expert in Kaleidofin, managing ambiguous problems by using data to execute sophisticated quantitative modeling and deliver actionable insights.
  • Develop comprehensive skills including project management, business judgment, analytical problem solving and technical depth.
  • Become an expert on data and trends, both internal and external to Kaleidofin.
  • Communicate key state of the business metrics and develop dashboards to enable teams to understand business metrics independently.
  • Collaborate with stakeholders across teams to drive data analysis for key business questions, communicate insights and drive the planning process with company executives.
  • Automate scheduling and distribution of reports and support auditing and value realization.
  • Partner with enterprise architects to define and ensure proposed Business Intelligence solutions adhere to an enterprise reference architecture.
  • Design robust data-centric solutions and architecture that incorporates technology and strong BI solutions to scale up and eliminate repetitive tasks.
  • Experience leading development efforts through all phases of SDLC.
  • 2+ years "hands-on" experience designing Analytics and Business Intelligence solutions.
  • Experience with Quicksight, PowerBI, Tableau and Qlik is a plus.
  • Hands on experience in SQL, data management, and scripting (preferably Python).
  • Strong data visualisation design skills, data modeling and inference skills.
  • Hands-on and experience in managing small teams.
  • Financial services experience preferred, but not mandatory.
  • Strong knowledge of architectural principles, tools, frameworks, and best practices.
  • Excellent communication and presentation skills to communicate and collaborate with all levels of the organisation.
  • Candidates with a notice period of less than 30 days are preferred.
Job posted by
Poornima B
Data Science
Machine Learning (ML)
Deep Learning
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹20L / yr
Work-days: Sunday through Thursday
Work shift: Day time

  • Strong problem-solving skills with an emphasis on product development.
  • Experience using statistical computer languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
  • Experience in building ML pipelines with Apache Spark, Python
  • Proficiency in implementing the end-to-end Data Science life cycle
  • Experience in model fine-tuning and advanced grid search techniques
  • Experience working with and creating data architectures.
  • Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
  • Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.
  • Excellent written and verbal communication skills for coordinating across teams.
  • A drive to learn and master new technologies and techniques.
  • Assess the effectiveness and accuracy of new data sources and data-gathering techniques.
  • Develop custom data models and algorithms to apply to data sets.
  • Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting, and other business outcomes.
  • Develop the company's A/B testing framework and test model quality.
  • Coordinate with different functional teams to implement models and monitor outcomes.
  • Develop processes and tools to monitor and analyze model performance and data accuracy.
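The A/B testing and model-quality point above can be illustrated with a minimal permutation test in pure Python. The conversion figures and the function are invented for this sketch, not part of the role description:

```python
import random
import statistics

def permutation_test(a, b, n_rounds=10_000, seed=42):
    """Two-sample permutation test on the difference of means.

    Returns the fraction of label shufflings whose mean difference is at
    least as extreme as the observed one (an estimate of the p-value).
    """
    rng = random.Random(seed)
    observed = abs(statistics.fmean(a) - statistics.fmean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_rounds):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        diff = abs(statistics.fmean(perm_a) - statistics.fmean(perm_b))
        if diff >= observed:
            hits += 1
    return hits / n_rounds

# Hypothetical conversion outcomes (1 = converted) for variants A and B.
variant_a = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0] * 10   # 30% conversion rate
variant_b = [1, 1, 0, 1, 0, 1, 0, 1, 1, 0] * 10   # 60% conversion rate
p_value = permutation_test(variant_a, variant_b)
print(f"estimated p-value: {p_value:.4f}")
```

A production A/B framework would add assignment, logging, and power analysis on top; the permutation test is just one simple, assumption-light way to judge whether an observed lift is real.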

Key skills:
● Strong knowledge in Data Science pipelines with Python
● Object-oriented programming
● A/B testing framework and model fine-tuning
● Proficiency in using the scikit-learn, NumPy, and pandas packages in Python
Nice to have:
● Ability to work with containerized solutions: Docker/Compose/Swarm/Kubernetes
● Unit testing, Test-driven development practice
● DevOps, Continuous integration/ continuous deployment experience
● Agile development environment experience, familiarity with SCRUM
● Deep learning knowledge
Job posted by
Priyanka U
at A Chemical & Purifier Company headquartered in the US.
Agency job
SQL Azure
Azure data factory
Bengaluru (Bangalore)
4 - 9 yrs
₹15L - ₹18L / yr
  • Create and maintain optimal data pipeline architecture,
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Author data services using a variety of programming languages
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Azure ‘big data’ technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centres and Azure regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
  • Work in an Agile environment with Scrum teams.
  • Ensure data quality and help in achieving data governance.

Basic Qualifications
  • 2+ years of experience in a Data Engineer role
  • Undergraduate degree required (Graduate degree preferred) in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.
  • Experience using the following software/tools:
  • Experience with big data tools: Hadoop, Spark, Kafka, etc.
  • Experience with relational SQL and NoSQL databases
  • Experience with data pipeline and workflow management tools
  • Experience with Azure cloud services: ADLS, ADF, ADLA, AAS
  • Experience with stream-processing systems: Storm, Spark-Streaming, etc.
  • Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
  • Understanding of ELT and ETL patterns and when to use each. Understanding of data models and transforming data into the models
  • Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets
  • Strong analytic skills related to working with unstructured datasets
  • Build processes supporting data transformation, data structures, metadata, dependency, and workload management
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
  • Experience supporting and working with cross-functional teams in a dynamic environment
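As a small illustration of the SQL query-authoring expectation above, here is a sketch against an in-memory SQLite database; the table and rows are invented for the example:

```python
import sqlite3

# Build a throwaway in-memory database with a small events table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, event TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(1, "purchase", 30.0), (1, "purchase", 20.0),
     (2, "refund", -10.0), (2, "purchase", 50.0)],
)

# A typical aggregation query: net amount per user, highest first.
rows = conn.execute(
    """
    SELECT user_id, SUM(amount) AS net
    FROM events
    GROUP BY user_id
    ORDER BY net DESC
    """
).fetchall()
print(rows)  # -> [(1, 50.0), (2, 40.0)]
conn.close()
```

The same GROUP BY / ORDER BY pattern carries over directly to SQL Server, Azure SQL, or Synapse; only the connection layer changes.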
Job posted by
Fiona RKS
at Our client company is into Analytics. (RF1)
Agency job
Amazon Web Services (AWS)
Data Engineer
Big Data
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹14L / yr
  • We are looking for a Data Engineer with 3-5 years' experience in Python, SQL, AWS (EC2, S3, Elastic Beanstalk, API Gateway), and Java.
  • The applicant must be able to perform Data Mapping (data type conversion, schema harmonization) using Python, SQL, and Java.
  • The applicant must be familiar with and have programmed ETL interfaces (OAUTH, REST API, ODBC) using the same languages.
  • The company is looking for someone who shows an eagerness to learn and who asks concise questions when communicating with teammates.
Job posted by
Ragul Ragul
Founded 2017  •  Products & Services  •  20-100 employees  •  Raised funding
Apache Kafka
Big Data
Distributed computing
Amazon Web Services (AWS)
Bengaluru (Bangalore)
- yrs
₹15L - ₹28L / yr
Job Description
We are looking for a Data Engineer who will be responsible for collecting, storing, processing, and analyzing huge data sets coming from different sources.

  • Working with Big Data tools and frameworks to provide requested capabilities
  • Identifying development needs in order to improve and streamline operations
  • Developing and managing BI solutions
  • Implementing ETL processes and data warehousing
  • Monitoring performance and managing infrastructure

  • Proficient understanding of distributed computing principles
  • Proficiency with Hadoop and Spark
  • Experience with building stream-processing systems, using solutions such as Kafka and Spark Streaming
  • Good knowledge of data querying tools: SQL and Hive
  • Knowledge of various ETL techniques and frameworks
  • Experience with Python/Java/Scala (at least one)
  • Experience with cloud services such as AWS or GCP
  • Experience with NoSQL databases such as DynamoDB and MongoDB will be an advantage
  • Excellent written and verbal communication skills
Job posted by
Keerthana k