Data Engineer

at MNC

Agency job
3 - 6 yrs
₹10L - ₹15L / yr (ESOP available)
Bengaluru (Bangalore), Hyderabad
Skills
Big Data
Spark
ETL
Apache
Hadoop
Data engineering
Amazon Web Services (AWS)
Desired Skill, Experience, Qualifications, and Certifications:
• 5+ years’ experience developing and maintaining modern ingestion pipelines using technologies such as Spark, Apache NiFi, etc.
• 2+ years’ experience with Healthcare Payors (focusing on Membership, Enrollment, Eligibility, Claims, Clinical)
• Hands-on experience with AWS Cloud and its native components such as S3, Athena, Redshift, and Jupyter Notebooks
• Strong in Spark Scala and Python pipelines (ETL & Streaming)
• Strong experience with metadata management tools such as AWS Glue
• Strong experience coding in languages such as Java and Python
• Experience designing ETL and streaming pipelines in Spark Scala / Python
• Good experience in requirements gathering, design, and development
• Experience working with cross-functional teams to meet strategic goals
• Experience in high-volume data environments
• Critical thinking and excellent verbal and written communication skills
• Strong problem-solving and analytical abilities; able to work and deliver independently
• Good to have: AWS Developer certification, Scala coding experience, Postman / API experience, and experience with Apache Airflow or similar schedulers
• Nice to have: experience with healthcare messaging standards such as HL7, CCDA, EDI, 834, 835, 837
• Good communication skills
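Roles like this revolve around the extract–transform–load pattern. As a rough, stack-agnostic illustration (not tied to Spark or NiFi specifically), here is a minimal standard-library Python sketch of that shape; the member feed and table name are hypothetical, and a production pipeline would read from real sources such as S3 or Kafka:

```python
import csv
import io
import sqlite3

# Hypothetical raw feed for illustration; a real pipeline would read from
# S3, Kafka, a vendor SFTP drop, etc.
RAW_CSV = """member_id,plan,premium
M001,gold,450
M002,silver,300
M003,gold,abc
"""

def extract(raw):
    """Read raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    """Cast types and drop rows whose premium is not numeric."""
    clean = []
    for row in rows:
        try:
            clean.append((row["member_id"], row["plan"], float(row["premium"])))
        except ValueError:
            pass  # a real pipeline would quarantine the bad row, not discard it
    return clean

def load(rows, conn):
    """Write cleaned rows into the target table."""
    conn.execute("CREATE TABLE members (member_id TEXT, plan TEXT, premium REAL)")
    conn.executemany("INSERT INTO members VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
loaded = transform(extract(RAW_CSV))
load(loaded, conn)
row_count = conn.execute("SELECT COUNT(*) FROM members").fetchone()[0]
print(row_count)  # 2 (the malformed M003 row is dropped)
```

The same extract/transform/load split carries over directly to Spark or NiFi jobs; only the I/O and execution engine change.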


Similar jobs

Mumbai
5 - 10 yrs
₹8L - ₹20L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm
+6 more


Data Scientist – Delivery & New Frontiers Manager 

Job Description:   

We are seeking a highly skilled and motivated data scientist to join our Data Science team. The successful candidate will play a pivotal role in our data-driven initiatives and be responsible for designing, developing, and deploying data science solutions that drive business value for stakeholders. This role involves mapping business problems to formal data science solutions, working with a wide range of structured and unstructured data, architecture design, creating sophisticated models, setting up operations for the data science product with support from the MLOps team, and facilitating business workshops. In a nutshell, this person will represent data science and provide expertise across the full project cycle. Expectations of the successful candidate will be above those of a typical data scientist: beyond technical expertise, problem solving in complex set-ups will be key to success in this role.

Responsibilities: 

  • Collaborate with cross-functional teams, including software engineers, product managers, and business stakeholders, to understand business needs and identify data science opportunities.
  • Map complex business problems to data science problems and design data science solutions using the GCP / Azure Databricks platform.
  • Collect, clean, and preprocess large datasets from various internal and external sources.
  • Streamline the data science process, working with the Data Engineering and Technology teams.
  • Manage multiple analytics projects within a function to deliver end-to-end data science solutions, create insights, and identify patterns.
  • Develop and maintain data pipelines and infrastructure to support the data science projects.
  • Communicate findings and recommendations to stakeholders through data visualizations and presentations.
  • Stay up to date with the latest data science trends and technologies, specifically within the GCP ecosystem.

 

Education / Certifications:  

Bachelor’s or Master’s in Computer Science, Engineering, Computational Statistics, or Mathematics.

Job specific requirements:  

  • 5+ years of deep data science experience
  • Strong knowledge of machine learning and statistical modeling techniques in a cloud-based environment such as GCP, Azure, or AWS
  • Experience with programming languages such as Python, R, and Spark
  • Experience with data visualization tools such as Tableau, Power BI, and D3.js
  • Strong understanding of data structures, algorithms, and software design principles
  • Experience with GCP platforms and services such as BigQuery, Cloud ML Engine, and Cloud Storage
  • Experience configuring and setting up version control for code, data, and machine learning models using GitHub
  • Self-driven; able to work with cross-functional teams in a fast-paced environment and adapt to changing business needs
  • Strong analytical and problem-solving skills
  • Excellent verbal and written communication skills
  • Working knowledge of application architecture, data security, and compliance


Bengaluru (Bangalore)
5 - 9 yrs
₹10L - ₹18L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm
+10 more

Requirements

Experience

  • 5+ years of professional experience implementing MLOps frameworks to scale up ML in production.
  • Hands-on experience with Kubernetes, Kubeflow, MLflow, SageMaker, and other ML model experiment management tools, including training, inference, and evaluation.
  • Experience in ML model serving (TorchServe, TensorFlow Serving, NVIDIA Triton Inference Server, etc.)
  • Proficiency with ML model training frameworks (PyTorch, PyTorch Lightning, TensorFlow, etc.).
  • Experience with GPU computing for data and model-training parallelism.
  • Solid software engineering skills in developing systems for production.
  • Strong expertise in Python.
  • Experience building end-to-end data systems as an ML Engineer, Platform Engineer, or equivalent.
  • Experience working with cloud data processing technologies (S3, ECR, Lambda, AWS, Spark, Dask, Elasticsearch, Presto, SQL, etc.).
  • Geospatial / remote sensing experience is a plus.
Mumbai
3 - 8 yrs
₹7L - ₹10L / yr
Python
Object Oriented Programming (OOPs)
ETL
PowerBI
Tableau
+1 more
Professional Skillset:
  • Professional experience in Python (mandatory)
  • Basic knowledge of any BI tool (Microsoft Power BI, Tableau, etc.) and experience in R will be an added advantage
  • Proficient in Excel
  • Good verbal and written communication skills

Key Responsibilities:
  • Analyze data trends and provide intelligent business insights; monitor operational and business metrics
  • Take complete ownership of the business excellence dashboard and prepare reports for senior management stating trends, patterns, and predictions using relevant data
  • Review, validate, and analyse data points and implement new data analysis methodologies
  • Perform data profiling to identify and understand anomalies
  • Perform analysis to assess the quality and meaning of data
  • Develop policies and procedures for the collection and analysis of data
  • Analyse existing processes with the help of data and propose process changes and/or lead process re-engineering initiatives
  • Use BI tools (Microsoft Power BI / Tableau) to develop and manage BI solutions
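The data profiling and anomaly-identification duties above can be illustrated with a minimal sketch. This uses a simple z-score rule over made-up daily order counts; both the threshold and the data are illustrative assumptions, not part of the job description:

```python
import statistics

def flag_anomalies(values, threshold=2.0):
    """Return values more than `threshold` sample standard deviations from the mean."""
    mean = statistics.mean(values)
    spread = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / spread > threshold]

# Made-up daily order counts; 400 is the planted outlier.
daily_orders = [102, 98, 105, 99, 101, 97, 400]
outliers = flag_anomalies(daily_orders)
print(outliers)  # [400]
```

In practice a BI tool or a more robust method (e.g. median-based rules) would be layered on top, but the profiling idea is the same: quantify how far each point sits from the batch's typical behaviour.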
Mumbai
10 - 15 yrs
₹8L - ₹15L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+6 more

Experience: Min 10 years

Location: Mumbai

Salary: Negotiable

Skills: Power BI, Tableau, QlikView

Solution Architect / Technology Lead – Data Analytics

Role

We are looking for a Business Intelligence lead (BI Lead) with hands-on experience in BI tools (Tableau, SAP Business Objects, Financial and Accounting modules, Power BI), SAP integration, and database knowledge including one or more of Azure Synapse / Data Factory, SQL Server, Oracle, and cloud-based databases such as Snowflake. Good knowledge of AI/ML and Python is also expected.

  • You will be expected to work closely with our business users. Development will be performed using an Agile methodology based on Scrum (time boxing, daily scrum meetings, retrospectives, etc.) and XP (continuous integration, refactoring, unit testing, etc.) best practices. Candidates must therefore be able to work collaboratively, demonstrate good ownership and leadership, and work well in teams.

Responsibilities:
  • Design, development, and support of multiple/hybrid data sources and a data visualization framework using Power BI, Tableau, SAP Business Objects, etc., and using ETL tools, scripting, Python scripting, etc.
  • Implementing DevOps techniques and practices like Continuous Integration, Continuous Deployment, Test Automation, Build Automation, and Test-Driven Development to enable the rapid delivery of working code, utilizing tools like Git.

Primary Skills

Requirements

  • 10+ years working as a hands-on developer in Information Technology across Database, ETL, and BI (SAP Business Objects, integration with SAP Financial and Accounting modules, Tableau, Power BI), plus prior team management experience
  • Tableau / Power BI integration with SAP and knowledge of SAP modules related to finance is a must
  • 3+ years of hands-on development experience in Data Warehousing and Data Processing
  • 3+ years of database development experience with a solid understanding of core database concepts, relational database design, SQL, and performance tuning
  • 3+ years of hands-on development experience with Tableau
  • 3+ years of Power BI experience, including parameterized reports and publishing them on the Power BI Service
  • Excellent understanding of, and practical experience delivering under, an Agile methodology
  • Ability to work with business users to provide technical support
  • Ability to get involved in all stages of the project lifecycle, including analysis, design, development, and testing

Good-to-have Skills
  • Experience with other visualization and reporting tools, such as SAP Business Objects.

 

CarWale
at CarWale
5 recruiters
Vanita Acharya
Posted by Vanita Acharya
Navi Mumbai, Mumbai
3 - 5 yrs
₹10L - ₹15L / yr
Data Science
Data Scientist
R Programming
Python
Machine Learning (ML)
+1 more

About CarWale: CarWale's mission is to bring delight to car buying. We offer a bouquet of reliable tools and services to help car consumers decide on buying the right car, at the right price, and from the right partner. CarWale has always strived to serve car buyers and owners in the most comprehensive and convenient way possible. We provide a platform where car buyers and owners can research, buy, sell, and come together to discuss and talk about their cars. We aim to empower Indian consumers to make informed car buying and ownership decisions with exhaustive and unbiased information on cars through our expert reviews, owner reviews, detailed specifications, and comparisons. We understand that a car is by and large the second-most expensive asset a consumer associates their lifestyle with! Together with CarTrade and BikeWale, we are the market leaders in the personal mobility media space.

About the Team: We are a bunch of enthusiastic analysts assisting all business functions with their data needs. We deal with huge but diverse datasets to find relationships, patterns, and meaningful insights. Our goal is to help drive growth across the organization by creating a data-driven culture.

We are looking for an experienced Data Scientist who likes to explore opportunities and know their way around data to build world class solutions making a real impact on the business. 

 

Skills / Requirements –

  • 3-5 years of experience working on Data Science projects
  • Experience doing statistical modelling of big data sets
  • Expert in Python and R, with deep knowledge of ML packages
  • Expert in fetching data with SQL
  • Ability to present and explain data to management
  • Knowledge of AWS would be beneficial
  • Demonstrated structural and analytical thinking
  • Ability to structure and execute data science projects end to end

 

Education –

Bachelor’s degree in a quantitative field (Maths, Statistics, Computer Science); a Master’s degree is preferred.

 

LiftOff Software India
at LiftOff Software India
2 recruiters
Hameeda Haider
Posted by Hameeda Haider
Remote, Bengaluru (Bangalore)
5 - 8 yrs
₹1L - ₹30L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark

Why LiftOff? 

 

We at LiftOff specialize in product creation; our main forte lies in helping entrepreneurs realize their dreams. We have helped businesses and entrepreneurs launch more than 70 products.

Many on the team are serial entrepreneurs with a history of successful exits.

 

As a Data Engineer, you will work directly with our founders and alongside our engineers on a variety of software projects covering various languages, frameworks, and application architectures.

 

About the Role

 

If you’re driven by the passion to build something great from scratch, a desire to innovate, and a commitment to achieve excellence in your craft, LiftOff is a great place for you.


  • Architect, design, and configure the data ingestion pipeline for data received from third-party vendors
  • Configure data loading for ease and flexibility in adding new data sources, as well as refreshing previously loaded data
  • Design and implement a consumer graph that provides an efficient means to query the data via email, phone, and address information (using any one of the fields or a combination)
  • Expose the consumer graph / search capability for consumption by our middleware APIs, which will surface it in the portal
  • Design / review the current client-specific data storage, which is kept as a copy of the consumer master data for easier retrieval and querying in subsequent usage
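The consumer-graph lookup described above (resolving a consumer by email, phone, or address, alone or in combination) can be sketched as an in-memory inverted index. The `ConsumerIndex` class and all identifiers below are hypothetical illustrations; a real implementation would sit on a graph or search store rather than Python dicts:

```python
from collections import defaultdict

class ConsumerIndex:
    """Toy in-memory stand-in for the consumer graph: resolve consumers
    by email, phone, or address, alone or in combination."""

    def __init__(self):
        self._ids_by_key = defaultdict(set)  # identifier -> consumer ids
        self._records = {}                   # consumer id -> attributes

    def add(self, consumer_id, email=None, phone=None, address=None):
        self._records[consumer_id] = {"email": email, "phone": phone, "address": address}
        for key in (email, phone, address):
            if key:
                self._ids_by_key[key].add(consumer_id)

    def lookup(self, *keys):
        """Consumer ids matching ALL supplied identifiers."""
        hits = [self._ids_by_key[k] for k in keys if k]
        return set.intersection(*hits) if hits else set()

idx = ConsumerIndex()
idx.add("c1", email="a@example.com", phone="555-0101")
idx.add("c2", email="b@example.com", phone="555-0101")  # shared household phone

shared = idx.lookup("555-0101")                     # both consumers share the phone
narrowed = idx.lookup("a@example.com", "555-0101")  # intersection narrows to c1
```

Intersecting per-identifier sets is what lets a query use "any one of the fields or a combination": each extra identifier only narrows the candidate set.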


Please note that this is a Consultant role.

Candidates who are open to freelancing / part-time work can apply.

EnterpriseMinds
at EnterpriseMinds
2 recruiters
Rani Galipalli
Posted by Rani Galipalli
Remote only
4 - 8 yrs
₹8L - ₹25L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more

Job Description

 

  1. Solid technical skills with a proven and successful history of working with data at scale and empowering organizations through data
  2. Big data processing frameworks: Spark, Scala, Hadoop, Hive, Kafka, and EMR with Python
  3. Advanced hands-on architecture and administration experience on big data platforms

 

IT MNC
Agency job
via Apical Mind by Madhusudan Patade
Bengaluru (Bangalore), Hyderabad, Noida, Chennai, NCR (Delhi | Gurgaon | Noida)
3 - 12 yrs
₹15L - ₹40L / yr
Presto
Hadoop
presto
SQL

Experience: 3 – 12 yrs

Budget: Open

Location: PAN India (Noida / Bengaluru / Hyderabad / Chennai)


Presto Developer (4)

 

Understanding of a distributed SQL query engine running on Hadoop

Design and develop core components for Presto 

Contribute to the ongoing Presto development by implementing new features, bug fixes, and other improvements 

Develop new and extend existing Presto connectors to various data sources 

Lead complex and technically challenging projects from concept to completion 

Write tests and contribute to ongoing automation infrastructure development 

Run and analyze software performance metrics 

Collaborate with teams globally across multiple time zones and operate in an Agile development environment 

Hands-on experience with, and interest in, Hadoop

Fragma Data Systems
at Fragma Data Systems
8 recruiters
Evelyn Charles
Posted by Evelyn Charles
Remote, Bengaluru (Bangalore), Hyderabad
0 - 1 yrs
₹3L - ₹3.5L / yr
SQL
Data engineering
Data Engineer
Python
Big Data
+1 more
Strong programmer with expertise in Python and SQL

● Hands-on work experience in SQL / PL/SQL
● Expertise in at least one popular Python framework (like Django, Flask, or Pyramid)
● Knowledge of object-relational mapping (ORM)
● Familiarity with front-end technologies (like JavaScript and HTML5)
● Willingness to learn and upgrade to Big Data and cloud technologies like PySpark, Azure, etc.
● Team spirit
● Good problem-solving skills
● Writes effective, scalable code
NeenOpal Intelligent Solutions Private Limited
Pavel Gupta
Posted by Pavel Gupta
Remote, Bengaluru (Bangalore)
2 - 5 yrs
₹6L - ₹12L / yr
ETL
Python
Amazon Web Services (AWS)
SQL
PostgreSQL

We are actively seeking a Senior Data Engineer experienced in building data pipelines and integrations from third-party data sources by writing custom automated ETL jobs in Python. The role will work in partnership with other members of the Business Analytics team to support the development and implementation of new and existing data warehouse solutions for our clients. This includes designing database import/export processes used to generate client data warehouse deliverables.

 

Requirements
  • 2+ years’ experience as an ETL developer, with strong data architecture knowledge around data warehousing concepts, SQL development and optimization, and operational support models.
  • Experience using Python to automate ETL / data processing jobs.
  • Design and develop ETL and data processing solutions using data integration tools, Python scripts, and AWS / Azure / on-premise environments.
  • Experience with (or willingness to learn) AWS Glue / AWS Data Pipeline / Azure Data Factory for data integration.
  • Develop and create transformation queries, views, and stored procedures for ETL processes and process automation.
  • Document data mappings, data dictionaries, processes, programs, and solutions per established data governance standards.
  • Work with the data analytics team to assess and troubleshoot potential data quality issues at key intake points, such as validating control totals at intake and again after transformation, and transparently build lessons learned into future data quality assessments.
  • Solid experience with data modeling, business logic, and RESTful APIs.
  • Solid experience in the Linux environment.
  • Experience with NoSQL / PostgreSQL preferred.
  • Experience working with databases such as MySQL, NoSQL, and Postgres, and enterprise-level connectivity experience (such as connecting over TLS and through proxies).
  • Experience with NGINX and SSL.
  • Performance-tune data processes and SQL queries, and recommend and implement data process optimization and query tuning techniques.
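The control-total check mentioned in the requirements (comparing totals at intake and again after transformation) can be sketched in a few lines of plain Python. The claim records and field names below are made up for illustration:

```python
def control_totals(rows, amount_field):
    """Record count plus summed amount for a batch of rows."""
    return len(rows), round(sum(r[amount_field] for r in rows), 2)

# Made-up claim records standing in for a real intake file.
intake = [
    {"claim_id": 1, "amount": 120.50},
    {"claim_id": 2, "amount": 75.25},
    {"claim_id": 3, "amount": 0.00},
]

# A transformation step that must not drop rows or alter amounts.
transformed = [dict(r, amount_cents=int(round(r["amount"] * 100))) for r in intake]

before = control_totals(intake, "amount")
after = control_totals(transformed, "amount")
assert before == after, f"control totals diverged: {before} != {after}"
print(before)  # (3, 195.75)
```

Comparing the (count, sum) pair before and after each stage is a cheap, effective guard against silently dropped or mutated rows in an ETL run.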