Data Engineer

at Capgemini

Agency job
via Nu-Pie
Remote, Bengaluru (Bangalore)
4 - 8 yrs
₹4L - ₹16L / yr (ESOP available)
Full time
Skills
Big Data
Hadoop
Data engineering
data engineer
Google Cloud Platform (GCP)
Data Warehouse (DWH)
ETL
Systems Development Life Cycle (SDLC)
Java
Scala
Python
SQL
Scripting
Teradata
HiveQL
Pig
Spark
Apache Kafka
Windows Azure
Job Description
Job Title: Data Engineer
Tech Job Family: DACI
• Bachelor's Degree in Engineering, Computer Science, CIS, or related field (or equivalent work experience in a related field)
• 2 years of experience in Data, BI or Platform Engineering, Data Warehousing/ETL, or Software Engineering
• 1 year of experience working on projects involving the implementation of solutions that apply a systems development life cycle (SDLC)
Preferred Qualifications:
• Master's Degree in Computer Science, CIS, or related field
• 2 years of IT experience developing and implementing business systems within an organization
• 4 years of experience working with defect or incident tracking software
• 4 years of experience with technical documentation in a software development environment
• 2 years of experience working with an IT Infrastructure Library (ITIL) framework
• 2 years of experience leading teams, with or without direct reports
• Experience with application and integration middleware
• Experience with database technologies
Data Engineering
• 2 years of experience in Hadoop or any cloud big data components (specific to the Data Engineering role)
• Expertise in Java/Scala/Python, SQL, Scripting, Teradata, Hadoop (Sqoop, Hive, Pig, MapReduce), Spark (Spark Streaming, MLlib), Kafka, or equivalent cloud big data components (specific to the Data Engineering role)
BI Engineering
• Expertise in MicroStrategy/Power BI/SQL, Scripting, Teradata or equivalent RDBMS, Hadoop (OLAP on Hadoop), Dashboard development, Mobile development (specific to the BI Engineering role)
Platform Engineering
• 2 years of experience in Hadoop, NoSQL, RDBMS, or any cloud big data components, Teradata, MicroStrategy (specific to the Platform Engineering role)
• Expertise in Python, SQL, Scripting, Teradata, Hadoop utilities like Sqoop, Hive, Pig, MapReduce, Spark, Ambari, Ranger, Kafka, or equivalent cloud big data components (specific to the Platform Engineering role)
Lowe’s is an equal opportunity employer and administers all personnel practices without regard to race, color, religion, sex, age, national origin, disability, sexual orientation, gender identity or expression, marital status, veteran status, genetics or any other category protected under applicable law.

About Capgemini

With more than 180,000 people in over 40 countries, Capgemini is one of the world's foremost providers of consulting, technology and outsourcing services.
Founded
1967
Type
Services
Size
100-1000 employees
Stage
Profitable

Similar jobs

Data Scientist (Risk)/Sr. Data Scientist (Risk)

at Rupifi

Founded 2020  •  Product  •  20-100 employees  •  Raised funding
Risk Management
Risk assessment
Problem solving
Python
R Language
SQL
Machine Learning (ML)
Deep Learning
Artificial Intelligence (AI)
Big Data
icon
Bengaluru (Bangalore)
icon
3 - 9 yrs
icon
₹10L - ₹55L / yr

As a part of the Data Science & Analytics team at Rupifi, you will play a significant role in helping define the business/product vision and deliver it from the ground up, working with passionate, high-performing individuals in a very fast-paced environment.

You will work closely with Data Scientists & Analysts, Engineers, Designers, Product Managers, Ops Managers and Business Leaders, and help the team make informed data-driven decisions and deliver high business impact.

Expectations:
1. Use statistical and machine learning techniques to create scalable risk management systems
2. Design, develop and evaluate highly innovative models for risk management
3. Establish scalable, efficient and automated processes for model development, model
validation and model implementation
4. Analyse data to better understand potential risks, concerns and outcomes of decisions
5. Aggregate data from multiple sources to provide a comprehensive assessment
6. Create reports, presentations and process documents to display impactful results
7. Collaborate with other team members to effectively analyze and present data
8. Develop insights and data visualizations to solve complex problems and communicate ideas to stakeholders

Tech skills:
● Hands-on experience in Python/R & SQL
● Hands-on experience in machine learning and deep learning (e.g., gradient boosting machines, XGBoost, neural networks) as well as classic statistical modeling techniques and their assumptions (a minimal sketch of a gradient-boosted risk model follows this list)
● Experience in handling complex and large data sources
● Experience in modeling techniques in the fintech/banking domain
● Experience in working on Big data and distributed computing
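For illustration only, a minimal sketch of the kind of gradient-boosted default-risk model referenced above, assuming a tabular loan dataset with a binary default label; the file name, feature columns, and target are hypothetical, not Rupifi's actual data.

```python
# Hedged sketch: gradient-boosted probability-of-default model on a
# hypothetical loan dataset (file name and column names are illustrative).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("loans.csv")  # hypothetical dataset
X = df[["credit_utilisation", "dpd_history", "monthly_gmv"]]  # example features
y = df["defaulted"]  # binary target: 1 = defaulted, 0 = repaid

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = GradientBoostingClassifier(n_estimators=200, max_depth=3)
model.fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]  # predicted probability of default
print("AUC:", roc_auc_score(y_test, scores))
```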

Preferred Qualifications:
● Bachelor's/Master's degree in Maths, Data Science, Computer Science, Engineering, Statistics, Economics, or a similar quantitative field
● 4 to 10 years of modeling experience in the fintech/banking domain in fields like collections, underwriting, customer management, etc.
● Strong analytical skills with good problem-solving ability
● Strong presentation and communication skills
● Experience in working on advanced machine learning techniques
● Quantitative and analytical skills with a demonstrated ability to understand new analytical concepts
Job posted by
Richa Tiwari

Quality Analyst (Coding Team)

at iMocha

Founded 2012  •  Product  •  100-500 employees  •  Raised funding
C
C++
Java
Spring Boot
Software Testing (QA)
.NET
Manual testing
STLC
Pune
0 - 2 yrs
₹2L - ₹5L / yr

Roles and Responsibilities:
- Verify, review, and rectify questions end to end in the creation cycle, across all difficulty levels and multiple programming languages of coding questions.
- Review, validate, and correct the test cases that belong to a question, making the necessary changes.
- Document and report quality parameters and suggest continuous improvements.
- Help the team write or generate code stubs wherever necessary for a coding question in one of the programming languages like C, C++, Java, and Python. (A code stub is partial starter code that helps candidates begin; a short illustrative example follows this list.)
- Identify and rectify technical errors in coding questions and ensure that questions meet iMocha guidelines.
- Work with the Product Manager to research the latest technologies, trends, and assessments in coding.
- Bring an innovative approach to the ever-changing world of programming languages and framework-based technologies like ReactJS, Angular, Spring Boot, and .NET.
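As an illustration, a minimal Python code stub of the kind described above; the function name, signature, and test harness are hypothetical, not taken from the iMocha question bank.

```python
# Hypothetical code stub for a coding question: the candidate receives this
# starter snippet and only fills in the body of the function.
def reverse_string(s: str) -> str:
    # TODO (candidate): return the reversed string
    pass


if __name__ == "__main__":
    # Simple harness shipped with the stub so the candidate can try sample input.
    print(reverse_string(input().strip()))
```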
Job Requirements/Qualifications:
- 0-3 years of experience writing code in C, C++, C#, Java, or Python
- Good to have: knowledge of manual QA and the QA lifecycle
- Ability to understand algorithms and data structures.
- Candidates with exposure to ReactJS, Java Spring Boot, or AI/ML will also be a good fit.
- Analytical and problem-solving skills for working through complex problems.
- Experience on any competitive coding platform is an added advantage.
- Passion for technology.
- Degree related to Computer Science: MCA, B.E., B.Tech, or B.Sc

Job posted by
Sweta Sharma

Data Scientist

at IT-Startup In Chennai

Data Science
Data Scientist
R Programming
Python
Machine Learning (ML)
TensorFlow
Keras
pandas
Statistical Modeling
Pattern recognition
Time series
Deep Learning
Implementation
Optimization
Chennai
3 - 5 yrs
₹12L - ₹20L / yr
  • 3+ years of experience in the practical implementation and deployment of ML-based systems preferred.
  • BE/B Tech or M Tech (preferred) in CS/Engineering with strong mathematical/statistical background
  • Strong mathematical and analytical skills, especially statistical and ML techniques, with familiarity with different supervised and unsupervised learning algorithms
  • Implementation experiences and deep knowledge of Classification, Time Series Analysis, Pattern Recognition, Reinforcement Learning, Deep Learning, Dynamic Programming and Optimisation
  • Experience in working on modeling graph structures related to spatiotemporal systems
  • Programming skills in Python
  • Experience in developing and deploying on cloud platforms (AWS, Google Cloud, or Azure)
  • Good verbal and written communication skills
  • Familiarity with well-known ML frameworks such as Pandas, Keras, TensorFlow
Job posted by
Jayaraj E

Data Scientist

at Information Solution Provider Company

Agency job
via Jobdost
SQL
Hadoop
Spark
Machine Learning (ML)
Data Science
Algorithms
Python
Big Data
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 7 yrs
₹10L - ₹15L / yr

Job Description:

The data science team is responsible for solving business problems with complex data. Data complexity can be characterized in terms of volume, dimensionality, and multiple touchpoints/sources. We understand the data, ask fundamental first-principles questions, and apply our analytical and machine learning skills to solve the problem in the best way possible.

 

Our ideal candidate

The role is client-facing, so good communication skills are a must.

The candidate should have the ability to communicate complex models and analysis in a clear and precise manner. 

 

The candidate would be responsible for:

  • Comprehending business problems properly - what to predict, how to build DV, what value addition he/she is bringing to the client, etc.
  • Understanding and analyzing large, complex, multi-dimensional datasets and build features relevant for business
  • Understanding the math behind algorithms and choosing one over another
  • Understanding approaches like stacking and ensembling, and applying them correctly to increase accuracy (a brief sketch follows this list)
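As a short, hedged sketch of the stacking approach mentioned above, using scikit-learn on a synthetic dataset; the base models and meta-learner are illustrative choices, not a prescribed setup.

```python
# Minimal stacking ensemble: two base models feed a logistic-regression
# meta-learner (data is synthetic, purely for illustration).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svc", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(),  # combines the base-model predictions
    cv=5,  # out-of-fold predictions are used to train the meta-learner
)

print("CV accuracy:", cross_val_score(stack, X, y, cv=3).mean())
```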

Desired technical requirements

  • Proficiency with Python and the ability to write production-ready code.
  • Experience in PySpark, machine learning, and deep learning
  • Big data experience (e.g., familiarity with Spark and Hadoop) is highly preferred
  • Familiarity with SQL or other database technologies.
Job posted by
Sathish Kumar

Big Data

at NoBroker

Founded 2014  •  Products & Services  •  100-1000 employees  •  Raised funding
Java
Spark
PySpark
Data engineering
Big Data
Hadoop
Selenium
Bengaluru (Bangalore)
1 - 3 yrs
₹6L - ₹8L / yr
  • Build, set up, and maintain some of the best data pipelines and MPP frameworks for our datasets
  • Translate complex business requirements into scalable technical solutions that meet data design standards; bring a strong understanding of analytics needs and the proactiveness to build generic solutions that improve efficiency
  • Build dashboards using self-service tools on Kibana and perform data analysis to support business verticals
  • Collaborate with multiple cross-functional teams and work
Job posted by
noor aqsa

Big Data Engineer

at StatusNeo

Founded 2020  •  Products & Services  •  100-1000 employees  •  Profitable
Big Data
Data engineering
Scala
Apache Hive
Hadoop
Python
Linux/Unix
dbms
Hyderabad, Bengaluru (Bangalore), Gurugram
3 - 7 yrs
₹2L - ₹20L / yr

Big Data JD:

 

Data Engineer – SQL, RDBMS, pySpark/Scala, Python, Hive, Hadoop, Unix

 

Data engineering services required:

  • Build data products and processes alongside the core engineering and technology team
  • Collaborate with senior data scientists to curate, wrangle, and prepare data for use in their advanced analytical models
  • Integrate data from a variety of sources, ensuring that it adheres to data quality and accessibility standards
  • Modify and improve data engineering processes to handle ever larger, more complex, and more varied data sources and pipelines
  • Use Hadoop architecture and HDFS commands to design and optimize data queries at scale (a brief PySpark sketch follows this list)
  • Evaluate and experiment with novel data engineering tools and advise information technology leads and partners about new capabilities to determine optimal solutions for particular technical problems or designated use cases
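For illustration, a minimal PySpark sketch of the kind of Hive/HDFS query work described above; the database, table, column, and output path names are hypothetical.

```python
# Hedged sketch: aggregate a Hive table and write the result back to HDFS
# (all names and paths below are placeholders).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-order-aggregate")
    .enableHiveSupport()  # read tables registered in the Hive metastore
    .getOrCreate()
)

orders = spark.table("sales_db.orders")  # hypothetical Hive table

daily = (
    orders.filter(F.col("order_date") >= "2022-01-01")
          .groupBy("order_date")
          .agg(F.sum("amount").alias("total_amount"),
               F.countDistinct("customer_id").alias("customers"))
)

# Persist the aggregate as Parquet on HDFS (path is illustrative).
daily.write.mode("overwrite").parquet("hdfs:///warehouse/daily_order_summary")
```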

Big data engineering skills:

  • Demonstrated ability to perform the engineering necessary to acquire, ingest, cleanse, integrate, and structure massive volumes of data from multiple sources and systems into enterprise analytics platforms
  • Proven ability to design and optimize queries to build scalable, modular, efficient data pipelines
  • Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets
  • Proven experience delivering production-ready data engineering solutions, including requirements definition, architecture selection, prototype development, debugging, unit testing, deployment, support, and maintenance
  • Ability to operate with a variety of data engineering tools and technologies; vendor-agnostic candidates preferred

Domain and industry knowledge:

  • Strong collaboration and communication skills to work within and across technology teams and business units
  • Demonstrates the curiosity, interpersonal abilities, and organizational skills necessary to serve as a consulting partner, including the ability to uncover, understand, and assess the needs of various business stakeholders
  • Experience with problem discovery, solution design, and insight delivery that involves frequent interaction, education, engagement, and evangelism with senior executives
  • The ideal candidate will have extensive experience with the creation and delivery of advanced analytics solutions for healthcare payers or insurance companies, including anomaly detection, provider optimization, studies of sources of fraud, waste, and abuse, and analysis of clinical and economic outcomes of treatment and wellness programs involving medical or pharmacy claims data, electronic medical record data, or other health data
  • Experience with healthcare providers, pharma, or life sciences is a plus

 

Job posted by
Alex P

Senior Project Manager

at Fragma Data Systems

Founded 2015  •  Products & Services  •  employees  •  Profitable
Project Management
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
Data Analytics
SQL
Banking
Remote, Bengaluru (Bangalore)
5 - 10 yrs
₹10L - ₹15L / yr
  • Gathering project requirements from customers and supporting their requests.
  • Creating project estimates and scoping the solution based on clients’ requirements.
  • Delivery on key project milestones in line with the project plan and budget.
  • Establishing individual project plans and working with the team in prioritizing production schedules.
  • Communication of milestones with the team and to clients via scheduled work-in-progress meetings
  • Designing and documenting product requirements.
  • Possess good analytical skills and be detail-oriented
  • Be familiar with Microsoft applications and have a working knowledge of MS Excel
  • Knowledge of MIS Reports & Dashboards
  • Maintaining strong customer relationships with a positive, can-do attitude
Job posted by
Evelyn Charles

Data Engineer

at SpringML India Private Limited

Founded 2015  •  Services  •  100-1000 employees  •  Profitable
Big Data
Hadoop
Apache Spark
Spark
Data Structures
Data engineering
SQL
Kafka
Hyderabad
4 - 11 yrs
₹8L - ₹20L / yr

SpringML is looking to hire a top-notch Senior Data Engineer who is passionate about working with data and using the latest distributed frameworks to process large datasets. In this role, your primary responsibility will be to design and build data pipelines. You will be focused on helping client projects with data integration, data prep, and implementing machine learning on datasets. You will work on some of the latest technologies, collaborate with partners on early wins, take a consultative approach with clients, interact daily with executive leadership, and help build a great company. Chosen team members will be part of the core team and play a critical role in scaling up our emerging practice.

RESPONSIBILITIES:

 

  • Ability to work as a member of a team assigned to design and implement data integration solutions.
  • Build data pipelines using standard frameworks in Hadoop, Apache Beam, and other open-source solutions (a brief Apache Beam sketch follows this list).
  • Learn quickly – ability to understand and rapidly comprehend new areas – functional and technical – and apply detailed and critical thinking to customer solutions.
  • Propose design solutions and recommend best practices for large-scale data analysis
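A hedged sketch of a small batch pipeline in Apache Beam's Python SDK, one of the frameworks named above; the input and output paths and the record format are placeholders.

```python
# Minimal Apache Beam batch pipeline: read CSV lines, sum amounts per user,
# write the totals out (paths and schema are illustrative only).
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_line(line):
    # Each input line is assumed to look like "user_id,amount".
    user_id, amount = line.split(",")
    return user_id, float(amount)


with beam.Pipeline(options=PipelineOptions()) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("input/transactions.csv")
        | "Parse" >> beam.Map(parse_line)
        | "SumPerUser" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda user, total: f"{user},{total}")
        | "Write" >> beam.io.WriteToText("output/user_totals")
    )
```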

 

SKILLS:

 

  • B.Tech degree in computer science, mathematics, or other relevant fields.
  • 4+ years of experience in ETL, data warehousing, visualization, and building data pipelines.
  • Strong programming skills – experience and expertise in one of the following: Java, Python, Scala, C.
  • Proficient in big data/distributed computing frameworks such as Apache Spark and Kafka.
  • Experience with Agile implementation methodologies
Job posted by
Kayal Vizhi

Scala Spark Engineer

at Skanda Projects

Founded 2010  •  Services  •  100-1000 employees  •  Profitable
Scala
Apache Spark
Big Data
Bengaluru (Bangalore)
2 - 8 yrs
₹6L - ₹25L / yr
Preferred skills:
  • Minimum 3 years of experience in software development
  • Strong experience in Spark Scala development
  • Strong experience with AWS cloud platform services
  • Good knowledge of and exposure to Amazon EMR and EC2
  • Good with databases like DynamoDB and Snowflake
Job posted by
Nagraj Kumar

Senior Machine Learning Engineer

at AthenasOwl

Founded 2017  •  Product  •  100-500 employees  •  Raised funding
Deep Learning
Natural Language Processing (NLP)
Machine Learning (ML)
Computer vision
Python
Data Structures
Mumbai
3 - 7 yrs
₹10L - ₹20L / yr

Company Profile and Job Description  

About us:  

AthenasOwl (AO) is our “AI for Media” solution that helps content creators and broadcasters create and curate smarter content. We launched the product in 2017 as an AI-powered suite meant for the media and entertainment industry. Clients use AthenasOwl's context-adapted technology for redesigning content, making better targeting decisions, automating hours of post-production work, and monetizing massive content libraries.

For more details visit: www.athenasowl.tv   

  

Role:   

Senior Machine Learning Engineer  

Experience Level:   

4-6 years of experience

Work location:   

Mumbai (Malad W)   

  

Responsibilities:   

  • Develop cutting edge machine learning solutions at scale to solve computer vision problems in the domain of media, entertainment and sports
  • Collaborate with media houses and broadcasters across the globe to solve niche problems in the field of post-production, archiving and viewership
  • Manage a team of highly motivated engineers to deliver high-impact solutions quickly and at scale

 

 

The ideal candidate should have:   

  • Strong programming skills in any one or more programming languages like Python and C/C++
  • Sound fundamentals of data structures, algorithms and object-oriented programming
  • Hands-on experience with any one popular deep learning framework like TensorFlow, PyTorch, etc.
  • Experience in implementing Deep Learning Solutions (Computer Vision, NLP etc.)
  • Ability to quickly learn and communicate the latest findings in AI research
  • Creative thinking for leveraging machine learning to build end-to-end intelligent software systems
  • A pleasantly forceful personality and charismatic communication style
  • Someone who will raise the average effectiveness of the team and has demonstrated exceptional abilities in some area of their life. In short, we are looking for a “Difference Maker”

 

Job posted by
Ericsson Fernandes