Data Engineer

at Top Management Consulting Company

Gurugram, Bengaluru (Bangalore)
2 - 9 yrs
Best in industry
Full time
Skills
Python
SQL
Amazon Web Services (AWS)
Microsoft Windows Azure
Google Cloud Platform (GCP)
Greetings!!

We are looking for a technically driven "Data Engineer" for one of our premium clients.

COMPANY DESCRIPTION:
This Company is a global management consulting firm. We are the trusted advisor to the world's leading businesses, governments, and institutions. We work with leading organizations across the private, public and social sectors. 

Qualifications
• Bachelor's degree in computer science or a related field; a Master's degree is a plus
• 3+ years of relevant work experience
• Meaningful experience with at least two of the following technologies: Python, Scala, Java
• Strong, proven experience with distributed processing frameworks (Spark, Hadoop, EMR) and with SQL is expected (see the short PySpark sketch after this list)
• Commercial client-facing project experience is helpful, including working in close-knit teams
• Ability to work across structured, semi-structured, and unstructured data, extracting information and
identifying linkages across disparate data sets
• Proven ability to communicate complex solutions clearly
• Understanding of information security principles to ensure compliant handling and management of client data
• Experience with and interest in cloud platforms such as AWS, Azure, Google Cloud Platform, or Databricks
• Extraordinary attention to detail
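
For a flavour of the distributed processing and SQL work this role expects, here is a minimal PySpark sketch. The bucket paths and column names are hypothetical placeholders, not client data:

```python
# Minimal PySpark sketch: link disparate datasets and query them with Spark SQL.
# File paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("client-data-linkage").getOrCreate()

# Structured warehouse export plus semi-structured event logs.
customers = spark.read.parquet("s3://client-bucket/customers/")
events = spark.read.json("s3://client-bucket/events/")

# Link the disparate sets on a shared key and aggregate.
linked = (
    customers.join(events, on="customer_id", how="inner")
    .groupBy("customer_id", "segment")
    .agg(F.count("*").alias("event_count"))
)

# The same logic is often expressed in Spark SQL.
linked.createOrReplaceTempView("linked_events")
top = spark.sql("""
    SELECT segment, SUM(event_count) AS total_events
    FROM linked_events
    GROUP BY segment
    ORDER BY total_events DESC
""")
top.show()
```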

Similar jobs

Senior Business Intelligence Engineer

at Innovative Startup

Agency job
via Qrata
Business Intelligence (BI)
Tableau
CleverTap
Python
Analytics
Remote only
3 - 6 yrs
₹18L - ₹28L / yr
Bachelor's degree in a quantitative field (e.g., Mathematics, Statistics, Computer Science)
Have 2 to 6 years of experience working in a similar role in a startup environment
SQL and Excel have no secrets for you
You love visualizing data with Tableau
Any experience with product analytics tools (Mixpanel, Clevertap) is a plus
You solve math puzzles for fun
A strong analytical mindset with a problem-solving attitude
Comfortable with being critical and speaking your mind
You can easily switch between coding (R or Python) and having a business discussion
Be a team player who thrives in a fast-paced and constantly changing environment
Job posted by
Blessy Fernandes

Cloud Data Engineer

at Intuitive Technology Partners

OLTP
data ops
cloud data
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Windows Azure
PySpark
ETL
Scala
CI/CD
Data-flow analysis
Remote only
9 - 20 yrs
Best in industry

THE ROLE: Sr. Cloud Data Infrastructure Engineer

As a Sr. Cloud Data Infrastructure Engineer with Intuitive, you will be responsible for building new data pipelines and converting legacy pipelines to modern cloud environments, supporting the analytics and data science initiatives across our enterprise customers. You will work closely with SMEs in Data Engineering and Cloud Engineering to create solutions and extend Intuitive's DataOps Engineering projects and initiatives. This is a central, critical role: establishing the DataOps/DataX data logistics and management for building data pipelines, enforcing best practices, owning the construction of complex and performant data lake environments, and working closely with Cloud Infrastructure Architects and DevSecOps automation teams. The Sr. Cloud Data Infrastructure Engineer is the main point of contact for everything related to data lake formation and data at scale. In this role, we expect our DataOps leaders to be obsessed with data and with providing insights that help our end customers.

ROLES & RESPONSIBILITIES:

  • Design, develop, implement, and tune large-scale distributed systems and pipelines that process large volumes of data, focusing on scalability, low latency, and fault tolerance in every system built.
  • Develop scalable and reusable frameworks for ingesting large data sets from multiple sources (a minimal sketch follows this list).
  • Own modern data orchestration engineering: query tuning, performance tuning, troubleshooting, and debugging of big data solutions.
  • Provide technical leadership, foster a team environment, and give mentorship and feedback to technical resources.
  • Apply a deep understanding of ETL/ELT design methodologies, patterns, personas, strategy, and tactics for complex data transformations.
  • Process and transform data using technologies such as Spark and cloud services.
  • Understand the current data engineering pipelines built on legacy SAS tools and convert them to modern pipelines.
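
A minimal sketch of the kind of reusable, config-driven ingestion framework the second bullet describes. The source registry, formats, and paths are hypothetical, not Intuitive's actual design:

```python
# Minimal sketch of a reusable, config-driven ingestion framework.
# Source names, formats, and paths are hypothetical.
from pyspark.sql import SparkSession, DataFrame

spark = SparkSession.builder.appName("dataops-ingest").getOrCreate()

# Each source is described by configuration, not bespoke code.
SOURCES = {
    "orders":    {"format": "json",    "path": "s3://raw/orders/"},
    "customers": {"format": "parquet", "path": "s3://raw/customers/"},
}

def ingest(source_name: str, target_root: str = "s3://lake/bronze/") -> DataFrame:
    """Read one configured source and land it in the bronze layer."""
    cfg = SOURCES[source_name]
    df = spark.read.format(cfg["format"]).load(cfg["path"])
    df.write.mode("overwrite").parquet(f"{target_root}{source_name}/")
    return df

for name in SOURCES:
    ingest(name)
```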

 

Data Infrastructure Engineer Strategy Objectives: End-to-End Strategy

Define how data is acquired, stored, processed, distributed, and consumed. Collaborate and share responsibility across disciplines as partners in delivery, progressing our maturity model in the end-to-end data practice.

  • Understanding of and experience with modern cloud data orchestration and engineering for one or more of the following cloud providers: AWS, Azure, GCP.
  • Leading multiple engagements to design and develop data logistics patterns that support data solutions, using data modeling techniques (such as file-based, normalized or denormalized, star schemas, schema-on-read, Vault data models, graphs) for mixed workloads such as OLTP, OLAP, and streaming, in any format (structured, semi-structured, unstructured).
  • Applying leadership and proven experience in architecting and designing data implementation patterns and engineered solutions using native cloud capabilities that span data ingestion & integration (ingress and egress), data storage (raw & cleansed), data prep & processing, master & reference data management, data virtualization & semantic layer, and data consumption & visualization.
  • Implementing cloud data solutions in the context of business applications, cost optimization, the client's strategic needs, and future growth goals as they relate to becoming a 'data-driven' organization.
  • Applying and creating leading practices that support highly available, scalable, process- and storage-intensive solution architectures for data integration/migration, analytics and insights, AI, and ML requirements.
  • Applying leadership and review to create high-quality, detailed documentation related to cloud data engineering.
  • Understanding of one or more of the following is a big plus: CI/CD, cloud DevOps, containers (Kubernetes/Docker, etc.), Python/PySpark/JavaScript.
  • Implementing cloud data orchestration and data integration patterns (AWS Glue, Azure Data Factory, Event Hub, Databricks, etc.) and storage and processing (Redshift, Azure Synapse, BigQuery, Snowflake).
  • A certification in one of the following is a big plus: AWS/Azure/GCP data engineering or migration.

 

 

KEY REQUIREMENTS:

  • 10+ years’ experience as a data engineer.
  • Must have 5+ years implementing data engineering solutions with multiple cloud providers and toolsets.
  • This is a hands-on role building data pipelines using cloud-native and partner solutions; hands-on technical experience with data at scale is required.
  • Must have deep expertise in one of the programming languages used for data processing (Python, Scala), with experience using Python, PySpark, Hadoop, Hive, and/or Spark to write data pipelines and data processing layers.
  • Must have worked with multiple database technologies and patterns, with good SQL experience writing complex SQL transformations.
  • Performance tuning of Spark SQL running on S3/data lake/Delta Lake storage, and strong knowledge of Databricks and cluster configurations (a tuning sketch follows this list).
  • Nice to have: Databricks administration, including the security and infrastructure features of Databricks.
  • Experience with development tools for CI/CD, unit and integration testing, automation, and orchestration.
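
A small, hedged example of the kind of Spark SQL tuning on Delta Lake the requirements mention. The table names and configuration values are illustrative only; the right settings always depend on the cluster and the data:

```python
# Illustrative Spark SQL tuning on Delta Lake tables; table names and
# configuration values are examples only, not recommended settings.
# Assumes the Delta Lake package is available on the cluster.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

# Reduce shuffle pressure for a mid-sized job (value depends on cluster size).
spark.conf.set("spark.sql.shuffle.partitions", "200")

facts = spark.read.format("delta").load("s3://lake/silver/transactions/")
dims = spark.read.format("delta").load("s3://lake/silver/merchants/")

# Partition pruning: filter on the partition column before the join.
recent = facts.where("event_date >= '2023-01-01'")

# Broadcast the small dimension table to avoid a shuffle join.
joined = recent.join(broadcast(dims), "merchant_id")

# Cache only if the result is reused several times downstream.
joined.cache()
joined.groupBy("merchant_id").count().show()
```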
Job posted by
Shalu Jain

Data Architect

at Searce Inc

Founded 2004  •  Products & Services  •  100-1000 employees  •  Profitable
Big Data
Hadoop
Spark
Apache Hive
ETL
Apache Kafka
Data architecture
Google Cloud Platform (GCP)
Python
Java
Scala
Data engineering
Mumbai
5 - 9 yrs
₹15L - ₹22L / yr
JD of Data Architect
As a Data Architect, you work with business leads, analysts, and data scientists to understand the business domain, and you manage data engineers to build data products that empower better decision making. You are passionate about the data quality of our business metrics and about building solutions flexible enough to scale to broader business questions.
If you love to solve problems using your skills, then come join Team Searce. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.

What You’ll Do
● Understand the business problem and translate these to data services and engineering
outcomes
● Explore new technologies and learn new techniques to solve business problems
creatively
● Collaborate with many teams - engineering and business, to build better data products
● Manage team and handle delivery of 2-3 projects

What We’re Looking For
● 4-6 years of experience, with:
○ Hands-on experience in at least one programming language (Python, Java, Scala)
○ Understanding of SQL (a must)
○ Big data (Hadoop, Hive, Yarn, Sqoop)
○ MPP platforms (Spark, Presto)
○ Data-pipeline & scheduler tools (Oozie, Airflow, NiFi) — see the Airflow sketch after this list
○ Streaming engines (Kafka, Storm, Spark Streaming)
○ Any relational database or DW experience
○ Any ETL tool experience
● Hands-on experience in pipeline design, ETL and application development
● Hands-on experience in cloud platforms like AWS, GCP etc.
● Good communication skills and strong analytical skills
● Experience in team handling and project delivery
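
For the scheduler-tool bullet above, a minimal Airflow DAG sketch. The DAG id, task names, and the extract/load functions are hypothetical:

```python
# Minimal Airflow DAG sketch; task logic and names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source system")

def load():
    print("write transformed data to the warehouse")

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",  # run once per day
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # extract runs before load
```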
Job posted by
Reena Bandekar

Data Analyst

at Extramarks Education India Pvt Ltd

Founded 2007  •  Product  •  1000-5000 employees  •  Profitable
Tableau
PowerBI
Data Analytics
SQL
Python
Noida, Delhi, Gurugram, Ghaziabad, Faridabad
3 - 5 yrs
₹8L - ₹10L / yr

Required Experience

· 3+ years of relevant technical experience in a data analyst role

· Intermediate to expert skills in SQL and basic statistics

· Experience with advanced SQL

· Python programming is an added advantage

· Strong problem-solving and structuring skills

· Experience automating connections to various data sources and representing the data through dashboards

· Excellent with numbers; able to communicate data points through various reports/templates

· Ability to communicate effectively within and outside the Data Analytics team

· Proactively takes up work responsibilities and handles ad-hoc requests as needed

· Ability and desire to take ownership of and initiative for analysis, from requirements clarification to deliverables

· Strong technical communication skills, both written and verbal

· Ability to understand and articulate the "big picture" and simplify complex ideas

· Ability to identify and learn applicable new techniques independently as needed

· Must have worked with various databases (relational and non-relational) and ETL processes

· Must have experience in handling large data volumes and adhering to optimization and performance standards

· Should have the ability to analyse the data and present relationship views of it from different angles

· Must have excellent communication skills (written and oral)

· Knowledge of data science is an added advantage

Required Skills

MYSQL, Advanced Excel, Tableau, Reporting and dashboards, MS office, VBA, Analytical skills

Preferred Experience

· Strong understanding of relational databases (MySQL, etc.)

· Prior experience working remotely full-time

· Prior experience working with advanced SQL

· Experience with one or more BI tools, such as Superset, Tableau, etc.

· A high level of logical and mathematical problem-solving ability

Job posted by
Prachi Sharma

Senior Product Analyst

at AYM Marketing Management

Founded 2016  •  Products & Services  •  20-100 employees  •  Profitable
SQL server
PowerBI
Spotfire
Qlikview
Tableau
Data Visualization
Data Analytics
Python
Data architecture
Mobile applications
ETL
Teamwork
Analytical Skills
Problem solving
Corporate Communications
Google Analytics
Remote only
2 - 8 yrs
₹10L - ₹25L / yr

Senior Product Analyst

Pampers Start Up Team

India / Remote Working

 

 

Team Description

Our internal team focuses on App Development, with Data a growing area within the structure. We have a clear vision and strategy spanning App Development, Data, Testing, Solutions, and Operations. The data team sits across the UK and India, whilst other teams sit across Dubai, Lebanon, Karachi, and various cities in India.

 

Role Description

In this role you will use a range of tools and technologies, working primarily on data design, data governance, reporting, and analytics for the Pampers App.

 

This is a unique opportunity for an ambitious candidate to join a growing business where they will get exposure to a diverse set of assignments, can contribute fully to the growth of the business and where there are no limits to career progression and reward.

 

Responsibilities

● Be the data steward and drive governance, with a full understanding of all the data that flows through the apps to all systems

● Work with the campaign team to apply data fixes when campaigns have issues

● Investigate and troubleshoot issues with the product and campaigns, giving clear root-cause and impact analysis

● Document data, create data dictionaries, and be the “go to” person for understanding how data flows

● Build dashboards and reports using Amplitude and Power BI and present them to the key stakeholders

● Carry out ad-hoc data investigations into issues with the app, querying data in BigQuery/SQL/Cosmos DB, and present the findings (see the BigQuery sketch after this list)

● Translate analytics into a clear PowerPoint deck with actionable insights

● Write up clear documentation on processes

● Innovate with new processes or ways of providing analytics and reporting

● Help the data lead find new ways of adding value
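
For the ad-hoc-investigation bullet, a small sketch of querying BigQuery from Python with the official client library. The project, dataset, table, and columns are made up:

```python
# Small sketch of an ad-hoc investigation against BigQuery.
# Project, dataset, table, and columns are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="pampers-app-analytics")

sql = """
    SELECT event_name, COUNT(*) AS event_count
    FROM `pampers-app-analytics.app_events.events`
    WHERE DATE(event_timestamp) = CURRENT_DATE()
    GROUP BY event_name
    ORDER BY event_count DESC
    LIMIT 20
"""

# Run the query and print the most frequent events today.
for row in client.query(sql).result():
    print(row.event_name, row.event_count)
```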

 

 

Requirements

● Bachelor’s degree and a minimum of four years’ experience in an analytical role, preferably in product analytics with consumer app data

● Strong SQL Server and Power BI skills are required

● You have experience with most or all of these tools: SQL Server, Python, Power BI, BigQuery

● Understanding of mobile app data (events, CTAs, screen views, etc.)

● Knowledge of data architecture and ETL

● Experience in analyzing customer behavior and providing insightful recommendations

● Self-starter, with a keen interest in technology and highly motivated towards success

● Must be proactive and prepared to contribute in meetings

● Must show initiative and desire to learn business subjects

● Able to work independently and provide updates to management

● Strong analytical and problem-solving capabilities with meticulous attention to detail

● Excellent problem-solving skills; proven teamwork and communication skills

● Experience working in a fast paced “start-up like” environment

 

Desirable

  • Knowledge of mobile analytical tools (Segment, Amplitude, Adjust, Braze and Google Analytics)
  • Knowledge of loyalty data
Job posted by
Stephen FitzGerald

Sr. Software Engineer

at Cliffai

Founded 2017  •  Product  •  20-100 employees  •  Profitable
ETL
Informatica
Data Warehouse (DWH)
Python
SQL
NOSQL Databases
Object Oriented Programming (OOPs)
RabbitMQ
API
Indore
2 - 4 yrs
₹7L - ₹12L / yr


We are looking for

A Senior Software Development Engineer (SDE2) who will be instrumental in the design and development of our backend technology, which manages our exhaustive data pipelines and AI models. Simplifying complexity and building technology that is robust and scalable is your North Star. You'll work closely alongside our CTO, machine learning engineers, frontend engineers, and the wider technical team to build new capabilities, focused on speed and reliability.

You'll own your work, to build, test and iterate quickly, with direct guidance from our CTO.

Please note: You must have proven industry experience greater than 2 years.

Your work includes

  • Own and manage the whole engineering infrastructure that supports the Greendeck platform.
  • Create highly scalable, highly robust, and highly available Python microservices.
  • Design the architecture to stream data at huge scale across multiple services.
  • Create and manage data pipelines using tools like Kafka and Celery.
  • Deploy serverless functions to process and manage data.
  • Work with a variety of databases and storage systems to store and strategically manage data.
  • Write connectors to collect data from various third-party services, data stores, and APIs.

 

 

Skills/ Requirements

  • Strong experience in Python, creating scripts, apps, or services
  • Strong automation and scripting skills
  • Knowledge of at least one SQL and one NoSQL database
  • Experience working with messaging systems like Kafka and RabbitMQ
  • Good knowledge of dataframes and data manipulation
  • Have used and deployed apps using FastAPI, Flask, or similar tech (a minimal sketch follows this list)
  • Knowledge of the CI/CD paradigm
  • Basic knowledge of Docker
  • Knowledge of creating and using REST APIs
  • Good knowledge of OOP fundamentals
  • (Optional) Knowledge of Celery/Airflow
  • (Optional) Knowledge of Lambda/serverless
  • (Optional) Have connected apps using OAuth
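
A minimal FastAPI sketch in the spirit of the stack above; the service name, endpoints, and fields are hypothetical, not the actual platform API:

```python
# Minimal FastAPI microservice sketch; endpoints and fields are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="greendeck-demo")

class PriceCheck(BaseModel):
    product_id: str
    price: float

@app.get("/health")
def health() -> dict:
    """Liveness probe used by the orchestrator."""
    return {"status": "ok"}

@app.post("/price-checks")
def create_price_check(check: PriceCheck) -> dict:
    """Accept a price observation and acknowledge it."""
    # A real service would publish this to a queue (e.g., Kafka) here.
    return {"received": check.product_id, "price": check.price}
```

Assuming the file is named main.py, this can be served locally with `uvicorn main:app --reload`.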


What you can expect

  • Attractive pay, bonus scheme and flexible vacation policy.
  • A truly flexible, trust-based, performance-driven work culture.
  • Lunch is on us, every day!
  • A young and passionate team building elegant products with intricate technology for the future of businesses around the world. Our average age is 25!
  • The chance to make a huge difference to the success of a world-class SaaS product and the opportunity to make an impact.


It's important to us

  • That you relocate to Indore
  • That you have a minimum of 2 years of experience working as a Software Developer



Job posted by
Vranda Baheti

Data Engineer

at Fragma Data Systems

Founded 2015  •  Products & Services  •  Profitable
Data engineering
Big Data
PySpark
SQL
Python
Bengaluru (Bangalore)
1 - 6 yrs
₹10L - ₹15L / yr
Good experience in PySpark, including DataFrame core functions and Spark SQL (see the sketch after this list)
Good experience with SQL databases; able to write queries of fair complexity
Excellent experience in Big Data programming for data transformation and aggregation
Good at ELT architecture: business-rules processing and data extraction from the data lake into data streams for business consumption
Good customer communication
Good analytical skills
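
As a rough illustration of the ELT style described above — extract from a data lake, apply a business rule, publish for consumption — here is a small PySpark sketch. The paths, columns, and the rule itself are invented for illustration:

```python
# Illustrative ELT step: extract from a data lake, apply a business rule,
# and publish a slice for consumption. Paths, columns, and the rule are
# hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("elt-business-rules").getOrCreate()

# Extract: load raw payments already landed in the lake.
payments = spark.read.parquet("s3://lake/raw/payments/")

# Transform after load (the "T" in ELT): flag high-value transactions.
flagged = payments.withColumn("is_high_value", F.col("amount") > F.lit(10000))

# Publish a consumable slice for downstream business users.
flagged.where("is_high_value").write.mode("overwrite").parquet(
    "s3://lake/curated/high_value_payments/"
)
```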
Job posted by
Harpreet Kour

Lead Product Analyst

at upGrad

Founded 2015  •  Product  •  100-500 employees  •  Raised funding
SQL
Python
Tableau
Team Management
Statistical Analysis
Bengaluru (Bangalore), Mumbai
4 - 6 yrs
₹19L - ₹24L / yr

Role Summary

We are looking for an analytically inclined, insights-driven Product Analyst to make our organisation more data-driven. In this role you will be responsible for creating dashboards to drive insights for the product and business teams. From day-to-day decisions to long-term impact assessment, and from measuring the efficacy of different products to that of particular teams, you'll be empowering each of them. The growing nature of the team will require you to be in touch with all of the teams at upGrad. Are you the "go-to" person everyone looks to when they need data? Then this role is for you.

 

Roles & Responsibilities

  • Lead and own the analysis of highly complex data sources, identifying trends and patterns in data, and provide insights/recommendations based on the analysis results
  • Build, maintain, own and communicate detailed reports to assist Marketing, Growth/Learning Experience and Other Business/Executive Teams
  • Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions.
  • Analyze data and generate insights in the form of user analysis, user segmentation, performance reports, etc.
  • Facilitate review sessions with management, business users and other team members
  • Design and create visualizations to present actionable insights related to data sets and business questions at hand
  • Develop intelligent models around channel performance, user profiling, and personalization

Skills Required

  • 4-6 years of hands-on experience with product-related analytics and reporting
  • Experience building dashboards in Tableau or other data visualization tools such as D3
  • Strong data, statistics, and analytical skills with a good grasp of SQL
  • Programming experience in Python is a must
  • Comfortable managing large data sets
  • Good Excel/data management skills
Job posted by
Priyanka Muralidharan

Analytics Scientist - Risk Analytics

at a market-leading fintech company dedicated to providing credit

Analytics
Predictive analytics
Linear regression
Logistic regression
Python
R Programming
Noida, NCR (Delhi | Gurgaon | Noida)
1 - 4 yrs
₹8L - ₹18L / yr
Job Description

Role: Analytics Scientist - Risk Analytics
Experience Range: 1 to 4 years
Job Location: Noida

Key responsibilities include:
• Building models to predict risk and other key metrics
• Coming up with data-driven solutions to control risk
• Finding opportunities to acquire more customers by modifying/optimizing existing rules
• Doing periodic upgrades of the underwriting strategy based on business requirements
• Evaluating third-party solutions for predicting/controlling the risk of the portfolio
• Running periodic controlled tests to optimize underwriting
• Monitoring key portfolio metrics and taking data-driven actions based on performance

Business Knowledge: Develop an understanding of the domain/function. Manage business processes in the work area. The individual is expected to develop domain expertise in his/her work area.

Teamwork: Develop cross-site relationships to enhance leverage of ideas. Set and manage partner expectations. Drive implementation of projects with the Engineering team while partnering seamlessly with cross-site team members.

Communication: Responsibly perform end-to-end project communication across the various levels in the organization.

Candidate Specification

Skills:
• Knowledge of an analytical tool: R or Python
• Established competency in predictive analytics (logistic and linear regression); a minimal sketch follows below
• Experience in handling complex data sources
• Dexterity with MySQL and MS Excel is good to have
• Strong analytical aptitude and logical reasoning ability
• Strong presentation and communication skills

Preferred:
• 1-3 years of experience in the financial services/analytics industry
• Understanding of the financial services business
• Experience working with advanced machine learning techniques

If interested, please send your updated profile in Word format with the details below for further discussion at the earliest:
1. Current Company
2. Current Designation
3. Total Experience
4. Current CTC (Fixed & Variable)
5. Expected CTC
6. Notice Period
7. Current Location
8. Reason for Change
9. Availability for a face-to-face interview on weekdays
10. Education Degree

Thanks & Regards,
Hema, Talent Socio
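
A minimal, hedged sketch of the logistic-regression risk modelling the skills list mentions, using scikit-learn on synthetic data (the features and labels are randomly generated for illustration only):

```python
# Minimal risk-model sketch: logistic regression on synthetic data.
# Features and labels are randomly generated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))  # stand-ins for bureau/behavioural features
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)  # 1 = default

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = LogisticRegression().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]  # probability of default
print("AUC:", round(roc_auc_score(y_test, scores), 3))
```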
Job posted by
Hema Latha N

Data Scientist - Precily AI

at Precily Private Limited

Founded 2016  •  Product  •  20-100 employees  •  Raised funding
Data Science
Artificial Intelligence (AI)
R Programming
Python
Bengaluru (Bangalore), NCR (Delhi | Gurgaon | Noida)
3 - 7 yrs
₹4L - ₹25L / yr
Job Description – Data Scientist

About the Company
Precily is a startup headquartered in Noida, IN, currently working with leading consulting & law firms, research firms, and technology companies. Aura (Precily AI) is a data-analysis platform for enterprises that increases the efficiency of the workforce by providing AI-based solutions.

Responsibilities & Skills Required
The role requires deep knowledge in designing, planning, testing, and deploying analytics solutions, including the following:
• Natural Language Processing (NLP), neural networks, text clustering, topic modelling, information extraction, information retrieval, deep learning, machine learning, cognitive science, and analytics
• Proven experience implementing and deploying advanced AI solutions using R/Python
• Applying machine learning algorithms, statistical data analysis, text clustering, and summarization, and extracting insights from multiple data points
• Excellent understanding of analytics concepts and methodologies, including machine learning (unsupervised and supervised)
• Hands-on experience handling large amounts of structured and unstructured data
• Measuring, interpreting, and deriving learning from analysis results that will lead to improvements in document processing

Skills Required:
• Python, R, NLP, NLG, machine learning, deep learning & neural networks
• Word vectorizers
• Word embeddings (word2vec & GloVe)
• RNNs (CNN vs. RNN)
• LSTM & GRU (LSTM vs. GRU)
• Pretrained embeddings (implementation in RNNs)
• Unsupervised learning
• Supervised learning
• Deep neural networks
• Framework: Keras/TensorFlow
• Keras Embedding layer output (a minimal sketch follows below)

Please reach out to us: [email protected]
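
A small sketch of the Keras Embedding-plus-LSTM pattern listed above. The vocabulary size, dimensions, and training data are placeholders, not a real model:

```python
# Small Keras sketch: Embedding layer feeding an LSTM text classifier.
# Vocabulary size, sequence length, and training data are placeholders.
import numpy as np
from tensorflow.keras.layers import LSTM, Dense, Embedding
from tensorflow.keras.models import Sequential

VOCAB_SIZE, SEQ_LEN, EMBED_DIM = 10_000, 50, 64

model = Sequential([
    Embedding(VOCAB_SIZE, EMBED_DIM),  # maps token ids to 64-dim vectors
    LSTM(32),                          # final hidden state summarises the text
    Dense(1, activation="sigmoid"),    # binary label, e.g. relevant / not
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy integer-encoded sequences just to show the training call.
X = np.random.randint(0, VOCAB_SIZE, size=(128, SEQ_LEN))
y = np.random.randint(0, 2, size=(128,))
model.fit(X, y, epochs=1, batch_size=32)
```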
Job posted by
Bharath Rao