Sr. Data Engineer
ERUDITUS Executive Education
Posted by Payal Thakker
3 - 10 yrs
₹15L - ₹45L / yr
Remote only
Skills
Docker
Kubernetes
Apache Kafka
Apache Beam
Python
Machine Learning (ML)
Data Analytics
Emeritus is committed to teaching the skills of the future by making high-quality education accessible and affordable to individuals, companies, and governments around the world. It does this by collaborating with more than 50 top-tier universities across the United States, Europe, Latin America, Southeast Asia, India and China. Emeritus’ short courses, degree programs, professional certificates, and senior executive programs help individuals learn new skills and transform their lives, companies and organizations. Its unique model of state-of-the-art technology, curriculum innovation, and hands-on instruction from senior faculty, mentors and coaches has educated more than 250,000 individuals across 80+ countries. Founded in 2015, Emeritus, part of Eruditus Group, has more than 2,000 employees globally and offices in Mumbai, New Delhi, Shanghai, Singapore, Palo Alto, Mexico City, New York, Boston, London, and Dubai. Following its $650 million Series E funding round in August 2021, the Company is valued at $3.2 billion, and is backed by Accel, SoftBank Vision Fund 2, the Chan Zuckerberg Initiative, Leeds Illuminate, Prosus Ventures, Sequoia Capital India, and Bertelsmann.
 
 
As a data engineer at Emeritus, you'll work on a wide variety of data problems. At this fast-paced company, you will frequently have to balance achieving an immediate goal with building sustainable and scalable architecture. The ideal candidate gets excited about streaming data, protocol buffers, and microservices, and wants to develop and maintain a centralized data platform that provides accurate, comprehensive, and timely data to a growing organization.

Role & responsibilities:

    • Develop ETL pipelines for data replication
    • Analyze, query, and manipulate data according to defined business rules and procedures
    • Manage very large-scale data from a multitude of sources into appropriate sets for research and development by data scientists and analysts across the company
    • Convert prototypes into production data engineering solutions through rigorous software engineering practices and modern deployment pipelines
    • Resolve internal and external data exceptions in a timely and accurate manner
    • Improve multi-environment data flow quality, security, and performance
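As a rough illustration of the streaming work described above: a production pipeline here would use Apache Beam with Kafka or Pub/Sub sources, but the core idea of windowed aggregation can be sketched in plain Python (all event names and data below are hypothetical):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Group (timestamp, key) events into fixed-size windows and count per key.

    A toy stand-in for the windowed aggregations a Beam/Kafka streaming
    job performs; timestamps are UNIX seconds.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - (ts % window_secs)  # align to window boundary
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(0, "click"), (10, "view"), (59, "click"), (61, "click"), (130, "view")]
result = tumbling_window_counts(events, window_secs=60)
# windows: 0 -> {click: 2, view: 1}, 60 -> {click: 1}, 120 -> {view: 1}
```

The same grouping logic, expressed declaratively, is what `beam.WindowInto(FixedWindows(60))` followed by a per-key count would do in an actual Beam pipeline.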

Skills & qualifications:

    • Must have experience with:
    • virtualization, containers, and orchestration (Docker, Kubernetes)
    • creating log ingestion pipelines (Apache Beam) for both batch and streaming processing (Pub/Sub, Kafka)
    • workflow orchestration tools (Argo, Airflow)
    • supporting machine learning models in production
    • Have a desire to continually keep up with advancements in data engineering practices
    • Strong Python programming and exploratory data analysis skills
    • Ability to work independently and with team members from different backgrounds
    • At least a bachelor's degree in an analytical or technical field. This could be applied mathematics, statistics, computer science, operations research, economics, etc. Higher education is welcome and encouraged.
    • 3+ years of work in software/data engineering.
    • Superior interpersonal skills, independent judgment, and complex problem-solving ability
    • Global orientation, experience working across countries, regions and time zones
Emeritus provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.
Subodh Popalwar's profile image

Subodh Popalwar

Software Engineer, Memorres
For 2 years, I had trouble finding a company with good work culture and a role that will help me grow in my career. Soon after I started using Cutshort, I had access to information about the work culture, compensation and what each company was clearly offering.
Companies hiring on Cutshort
companies logos

About ERUDITUS Executive Education

Founded: 2010
Type:
Size: 1000-5000
Stage: Raised funding
About
Emeritus provides online business and professional programs in collaboration with top universities from across the globe
Connect with the team
Loyson Mascarenhas
Company social profiles: blog, Instagram

Similar jobs

Bengaluru (Bangalore), Gurugram
2 - 6 yrs
₹16L - ₹18L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+5 more

Senior Executive - Analytics


Overview of the job:


Our client is the world's largest media investment company and a part of WPP. They are a global digital transformation agency with 1200 employees across 21 nations. Our team of experts supports clients in programmatic, social, paid search, analytics, technology, organic search, affiliate marketing, e-commerce, and across traditional channels.


We are currently looking for a Sr Executive – Analytics to join us. In this role, you will have a massive opportunity to build, and be part of, the largest performance marketing setup in APAC. We are committed to fostering a culture of diversity and inclusion: our people are our strength, so we respect and nurture their individual talent and potential.


Reporting of the role - This role reports to the Director - Analytics.


3 best things about the job:


1. Responsible for data & analytics projects and for developing data strategies by diving into data, extrapolating insights, and providing guidance to clients


2. Build and be a part of a dynamic team


3. Being part of a global organisation with rapid growth opportunities


Responsibilities of the role:


• Build Marketing-Mix and Multi-Touch attribution models using a range of tools, both free and paid.

• Work with large data sets via hands-on data processing to produce structured data sets for analysis.

• Design and build visualizations, dashboards, and reports for both internal and external clients using Tableau, Power BI, Datorama, or R Shiny/Python.
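To make the attribution-modelling responsibility concrete, here is a minimal, illustrative sketch of linear multi-touch attribution in plain Python; a real engagement would use dedicated tooling and far richer journey data, and the channel names below are invented:

```python
def linear_attribution(journeys):
    """Split each conversion's credit equally across the touchpoints
    in that user's journey (linear multi-touch attribution)."""
    credit = {}
    for touchpoints in journeys:
        share = 1.0 / len(touchpoints)
        for channel in touchpoints:
            credit[channel] = credit.get(channel, 0.0) + share
    return credit

journeys = [
    ["search", "social", "email"],  # each inner list is one converting journey
    ["social", "email"],
    ["email"],
]
credit = linear_attribution(journeys)
# search: 1/3, social: 1/3 + 1/2, email: 1/3 + 1/2 + 1
```

Swapping the equal `share` for position-based or data-driven weights is what distinguishes the various multi-touch models the role would build.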


What you will need:


• Degree in Mathematics, Statistics, Economics, Engineering, Data Science, Computer Science, or another quantitative field.

• 2-3 years' experience in Marketing/Data Analytics or a related field, with hands-on experience building Marketing-Mix and Attribution models.

• Proficiency in one or more coding languages – preferred languages: Python, R

• Proficiency in one or more visualization tools – Tableau, Datorama, Power BI

• Proficiency in using SQL.

• Proficiency with one or more statistical tools is a plus – for example: SPSS, SAS, MATLAB, Mathcad.

• Working experience using big data technologies (Hive/Hadoop) is a plus
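As a small example of the SQL proficiency called for, the sketch below uses Python's built-in sqlite3 module to run a typical cost-per-acquisition aggregation; the table and figures are made up for illustration:

```python
import sqlite3

# In-memory database with a toy spend table, standing in for a
# campaign-performance table an analyst might query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spend (channel TEXT, cost REAL, conversions INTEGER)")
conn.executemany(
    "INSERT INTO spend VALUES (?, ?, ?)",
    [("search", 100.0, 20), ("search", 50.0, 5), ("social", 80.0, 8)],
)

# Cost per conversion by channel -- the kind of GROUP BY aggregation
# the role exercises daily.
rows = conn.execute(
    """SELECT channel, SUM(cost) / SUM(conversions) AS cpa
       FROM spend GROUP BY channel ORDER BY channel"""
).fetchall()
# [('search', 6.0), ('social', 10.0)]
```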

Gurugram, Bengaluru (Bangalore), Chennai
2 - 9 yrs
₹9L - ₹27L / yr
DevOps
Microsoft Windows Azure
gitlab
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)
+15 more
Greetings!!

We are looking for a technically driven "MLOps Engineer" for one of our premium clients.

COMPANY DESCRIPTION:
This Company is a global management consulting firm. We are the trusted advisor to the world's leading businesses, governments, and institutions. We work with leading organizations across the private, public and social sectors. Our scale, scope, and knowledge allow us to address


Key Skills
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration
(Azure/AWS/GCP) with strong knowledge of cloud services integration, and cloud security
• Expertise setting up CI/CD processes, building and maintaining secure DevOps pipelines with at
least 2 major DevOps stacks (e.g., Azure DevOps, Gitlab, Argo)
• Experience with modern development methods and tooling: Containers (e.g., docker) and
container orchestration (K8s), CI/CD tools (e.g., Circle CI, Jenkins, GitHub actions, Azure
DevOps), version control (Git, GitHub, GitLab), orchestration/DAGs tools (e.g., Argo, Airflow,
Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and
libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts
(e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., for experiment tracking,
model governance, packaging, deployment, or a feature store)
• Practical knowledge delivering and maintaining production software such as APIs and cloud
infrastructure
• Knowledge of SQL (intermediate level or more preferred) and familiarity working with at least
one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
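The orchestration/DAG tools listed above (Argo, Airflow, Kubeflow) all model a pipeline as a directed acyclic graph of tasks. As a toy stand-in, not any of those tools' actual APIs, Python's standard-library graphlib can resolve a valid execution order for a hypothetical ML pipeline:

```python
from graphlib import TopologicalSorter

# A toy pipeline DAG: each task maps to the set of tasks it depends on,
# the same shape that Airflow/Argo task dependencies take.
pipeline = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "train": {"transform"},
    "deploy": {"train", "transform"},
}

# Resolve one valid run order respecting every dependency edge.
order = list(TopologicalSorter(pipeline).static_order())
# ['extract', 'validate', 'transform', 'train', 'deploy']
```

In the real tools the same graph would additionally carry retries, schedules, and per-task containers; the dependency-resolution step shown here is the common core.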
[x]cube LABS
Posted by Krishna kandregula
Hyderabad
2 - 6 yrs
₹8L - ₹20L / yr
ETL
Informatica
Data Warehouse (DWH)
PowerBI
DAX
+12 more
  • Creating and managing ETL/ELT pipelines based on requirements
  • Build Power BI dashboards and manage the datasets needed.
  • Work with stakeholders to identify the data structures needed in future and perform any transformations, including aggregations.
  • Build data cubes for real-time visualisation needs and CXO dashboards.
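A data cube ultimately rests on group-by aggregations. Purely as an illustration (the real work would happen in Power BI/DAX or Spark), a minimal roll-up over invented sales rows might look like:

```python
from collections import defaultdict

def rollup(rows, dims, measure):
    """Aggregate a measure over the given dimension columns --
    a minimal stand-in for the aggregations behind a BI data cube."""
    totals = defaultdict(float)
    for row in rows:
        key = tuple(row[d] for d in dims)
        totals[key] += row[measure]
    return dict(totals)

sales = [
    {"region": "north", "product": "A", "revenue": 10.0},
    {"region": "north", "product": "B", "revenue": 5.0},
    {"region": "south", "product": "A", "revenue": 7.0},
    {"region": "north", "product": "A", "revenue": 3.0},
]
by_region = rollup(sales, ["region"], "revenue")            # coarse slice
by_region_product = rollup(sales, ["region", "product"], "revenue")  # finer slice
# by_region: {('north',): 18.0, ('south',): 7.0}
```

Each choice of `dims` is one face of the cube; precomputing several of them is what makes CXO dashboards respond in real time.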


Required Tech Skills


  • Microsoft Power BI & DAX
  • Python, Pandas, PyArrow, Jupyter Notebooks, Apache Spark
  • Azure Synapse, Azure Databricks, Azure HDInsight, Azure Data Factory



MNC Company - Product Based
Bengaluru (Bangalore), Chennai, Hyderabad, Pune, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 9 yrs
₹10L - ₹15L / yr
Data Warehouse (DWH)
Informatica
ETL
skill iconPython
Google Cloud Platform (GCP)
+2 more

Job Responsibilities

  • Design, build & test ETL processes using Python & SQL for the corporate data warehouse
  • Inform, influence, support, and execute our product decisions
  • Maintain advertising data integrity by working closely with R&D to organize and store data in a format that provides accurate data and allows the business to quickly identify issues.
  • Evaluate and prototype new technologies in the area of data processing
  • Think quickly, communicate clearly and work collaboratively with product, data, engineering, QA and operations teams
  • High energy level, strong team player and good work ethic
  • Data analysis, understanding of business requirements and translation into logical pipelines & processes
  • Identification, analysis & resolution of production & development bugs
  • Support the release process including completing & reviewing documentation
  • Configure data mappings & transformations to orchestrate data integration & validation
  • Provide subject matter expertise
  • Document solutions, tools & processes
  • Create & support test plans with hands-on testing
  • Peer reviews of work developed by other data engineers within the team
  • Establish good working relationships & communication channels with relevant departments
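Tying the ETL-design and hands-on-testing responsibilities together: a transform step that validates rows and returns rejects separately is straightforward to unit test. The sketch below is illustrative only; the field names and rules are hypothetical:

```python
def transform(records):
    """Validate and normalise raw ad records: drop rows missing a
    campaign id, lowercase the channel, and coerce spend to float.
    Returns (clean_rows, rejects) so bad data is easy to spot."""
    clean, rejects = [], []
    for rec in records:
        if not rec.get("campaign_id"):
            rejects.append(rec)  # quarantine instead of silently dropping
            continue
        clean.append({
            "campaign_id": rec["campaign_id"],
            "channel": rec.get("channel", "unknown").lower(),
            "spend": float(rec.get("spend", 0)),
        })
    return clean, rejects

raw = [
    {"campaign_id": "c1", "channel": "Search", "spend": "12.5"},
    {"campaign_id": "", "channel": "Social", "spend": "3"},  # rejected: no id
    {"campaign_id": "c2"},                                   # defaults applied
]
clean, rejects = transform(raw)
```

Because the rejects come back as data rather than log lines, a test plan can assert on them directly, which is exactly the kind of hands-on testing the responsibilities call for.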

 

Skills and Qualifications we look for

  • University degree 2.1 or higher (or equivalent) in a relevant subject. A Master's degree in any data subject will be a strong advantage.
  • 4-6 years of experience with data engineering.
  • Strong coding ability and software development experience in Python.
  • Strong hands-on experience with SQL and data processing.
  • Google Cloud Platform (Cloud Composer, Dataflow, Cloud Functions, BigQuery, Cloud Storage, Dataproc)
  • Good working experience with any one of the ETL tools (Airflow would be preferable).
  • Strong analytical and problem-solving skills.
  • Good-to-have skills: Apache PySpark, CircleCI, Terraform
  • Motivated, self-directed, able to work with ambiguity, and interested in emerging technologies and agile, collaborative processes.
  • Understanding and experience of agile/scrum delivery methodology

 

Angel One
Shriya Tak
Posted by Shriya Tak
Remote only
5 - 7 yrs
₹12L - ₹18L / yr
Management Information System (MIS)
Data mining
skill iconMachine Learning (ML)
Analytical Skills
A/B Testing
+3 more

Job description:

  • Selecting features, building, and optimizing classifiers using machine learning techniques
  • Mining data as and when required
  • Enhancing data collection procedures to include information that is relevant for building analytic systems
  • Processing, cleansing, and verifying the integrity of data used for analysis
  • Doing ad-hoc analysis and presenting results in a clear manner
  • Creating automated anomaly detection systems and constantly tracking their performance
  • Efficient stakeholder management

Skills and Qualifications

  • Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, Decision Forests, etc.
  • Good applied statistics skills, such as distributions, statistical testing, regression, etc.
  • Experience with common data science toolkits. 
  • Great communication skills
  • Experience with data visualisation tools
  • Proficiency in using query languages such as SQL
  • Good scripting and programming skills 
  • Data-oriented personality
  • B.Tech, M.Tech, B.S., M.S., MBA 
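For a flavour of the algorithms named above (k-NN in particular), here is a self-contained k-nearest-neighbours classifier in plain Python; in practice one would reach for scikit-learn, and the points below are invented:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training
    points (Euclidean distance). `train` is a list of (point, label) pairs."""
    neighbours = sorted(train, key=lambda pl: math.dist(pl[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Two toy clusters: "low" activity users near (1, 1), "high" near (8, 8).
train = [((1, 1), "low"), ((1, 2), "low"), ((2, 1), "low"),
         ((8, 8), "high"), ((9, 8), "high"), ((8, 9), "high")]
label = knn_predict(train, (2, 2), k=3)  # nearest three points are all "low"
```

The whole "classifier" is a distance function plus a vote, which is why k-NN is usually the first algorithm interviewers expect candidates to reason about.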

Requirement / Desired Skills

Data Scientist -- data mining skills, SQL, advanced ML techniques, NLP (Natural Language Processing)

Innovative Brand Design Studio
Agency job
via Unnati by Astha Bharadwaj
Mumbai
2 - 5 yrs
₹8L - ₹15L / yr
skill iconData Science
Data Scientist
skill iconPython
Tableau
skill iconR Programming
+7 more
Come work with a growing consumer market research team that is currently serving one of the biggest FMCG companies in the world.
 
Our client works with global brands and creates projects that are user-centric. They build cost-effective and compelling product stories that help their clients gain a competitive edge and grow their brand image. Their team consists of academicians, designers, and startup specialists working for clients across 12 countries, targeting new markets and solutions with an excellent understanding of end-users.
 
They work with global brands from the FMCG, beauty, and hospitality sectors, namely Unilever, Lipton, Lakme, Loreal, AXE etc., who have chosen them for a long-term relationship, depending on their insights, consumer research, storytelling, and content experience. The founder is a design and product activation expert with over 10 years of impact and over 300 completed projects in India, the UK, South Asia, and the USA.
 
As a Data Scientist, you will help to deliver quantitative consumer primary market research through Survey.
 
What you will do:
  • Handling the survey scripting process through the use of survey software platforms such as Toluna, QuestionPro, and Decipher.
  • Mining large & complex data sets using SQL, Hadoop, NoSQL, or Spark.
  • Delivering complex consumer data analysis using software such as R, Python, and Excel.
  • Working on basic statistical analysis such as t-tests and correlation.
  • Performing more complex data analysis through machine learning techniques such as:
  1. Classification
  2. Regression
  3. Clustering
  4. Text Analysis
  5. Neural Networks
  • Creating interactive dashboards using software like Tableau or any other software you are able to use.
  • Working on statistical and mathematical modelling, and application of ML and AI algorithms
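As an example of the basic statistical analysis mentioned (correlation alongside t-tests), a Pearson correlation coefficient can be computed from first principles; the spend/awareness numbers below are fabricated for illustration:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ad_spend  = [1.0, 2.0, 3.0, 4.0, 5.0]
awareness = [2.1, 3.9, 6.2, 7.8, 10.1]
r = pearson_r(ad_spend, awareness)  # close to 1.0: strong linear relation
```

In day-to-day work this would come from R's `cor` or SciPy's `pearsonr`, but knowing the formula behind the number is what lets an analyst spot when it is being misused.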

 

What you need to have:
  • Bachelor's or Master's degree in a highly quantitative field (CS, machine learning, mathematics, statistics, economics) or equivalent experience.
  • Eagerness to prove your data analysis skills with one of the biggest FMCG market players.

 

Infogain
Agency job
via Technogen India PvtLtd by RAHUL BATTA
NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore), Mumbai, Pune
7 - 8 yrs
₹15L - ₹16L / yr
Data steward
MDM
Tamr
Reltio
Data engineering
+7 more
Data Steward:

The Data Steward will collaborate and work closely within the group's software engineering and business divisions. The Data Steward has overall accountability for the group's/division's data and reporting posture by responsibly managing data assets, data lineage, and data access, and by supporting sound data analysis. This role requires focus on data strategy, execution, and support for projects, programs, application enhancements, and production data fixes. The Data Steward makes well-thought-out decisions on complex or ambiguous data issues, establishes the data stewardship and information management strategy and direction for the group, and effectively communicates with individuals at various levels of the technical and business communities. This individual will become part of the corporate Data Quality and Data Management/entity resolution team supporting various systems across the board.

 

Primary Responsibilities:

 

  • Responsible for data quality and data accuracy across all group/division delivery initiatives.
  • Responsible for data analysis, data profiling, data modeling, and data mapping capabilities.
  • Responsible for reviewing and governing data queries and DML.
  • Accountable for the assessment, delivery, quality, accuracy, and tracking of any production data fixes.
  • Accountable for the performance, quality, and alignment to requirements for all data query design and development.
  • Responsible for defining standards and best practices for data analysis, modeling, and queries.
  • Responsible for understanding end-to-end data flows and identifying data dependencies in support of delivery, release, and change management.
  • Responsible for the development and maintenance of an enterprise data dictionary that is aligned to data assets and the business glossary for the group.
  • Responsible for the definition and maintenance of the group's data landscape, including overlays with the technology landscape, end-to-end data flows/transformations, and data lineage.
  • Responsible for rationalizing the group's reporting posture through the definition and maintenance of a reporting strategy and roadmap.
  • Partners with the data governance team to ensure data solutions adhere to the organization’s data principles and guidelines.
  • Owns group's data assets including reports, data warehouse, etc.
  • Understand customer business use cases and be able to translate them to technical specifications and vision on how to implement a solution.
  • Accountable for defining the performance tuning needs for all group data assets and managing the implementation of those requirements within the context of group initiatives as well as steady-state production.
  • Partners with others in test data management and masking strategies and the creation of a reusable test data repository.
  • Responsible for solving data-related issues and communicating resolutions with other solution domains.
  • Actively and consistently support all efforts to simplify and enhance the Clinical Trial Prediction use cases.
  • Apply knowledge in analytic and statistical algorithms to help customers explore methods to improve their business.
  • Contribute toward analytical research projects through all stages including concept formulation, determination of appropriate statistical methodology, data manipulation, research evaluation, and final research report.
  • Visualize and report data findings creatively in a variety of visual formats that appropriately provide insight to the stakeholders.
  • Achieve defined project goals within customer deadlines; proactively communicate status and escalate issues as needed.
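One of the data-profiling duties above can be sketched as a minimal pass that counts nulls and distinct values per column; this is an illustrative toy, not any specific MDM tool's API, and the rows are invented:

```python
def profile(rows, columns):
    """Minimal data-profiling pass: per column, count nulls and distinct
    values -- a first step in the data-quality checks a steward runs
    before signing off on a dataset."""
    stats = {}
    for col in columns:
        values = [r.get(col) for r in rows]
        non_null = [v for v in values if v is not None]
        stats[col] = {
            "nulls": len(values) - len(non_null),
            "distinct": len(set(non_null)),
        }
    return stats

rows = [
    {"id": 1, "site": "A"},
    {"id": 2, "site": None},   # a null the profile should surface
    {"id": 3, "site": "A"},
]
stats = profile(rows, ["id", "site"])
# {'id': {'nulls': 0, 'distinct': 3}, 'site': {'nulls': 1, 'distinct': 1}}
```

Tools like Tamr or Profisee do this at scale with richer metrics, but the null/distinct summary is the common starting point for spotting broken feeds.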

 

Additional Responsibilities:

 

  • Strong understanding of the Software Development Life Cycle (SDLC) with Agile Methodologies
  • Knowledge and understanding of industry-standard/best practices requirements gathering methodologies.
  • Knowledge and understanding of Information Technology systems and software development.
  • Experience with data modeling and test data management tools.
  • Experience in data integration projects.
  • Good problem-solving & decision-making skills.
  • Good communication skills within the team, site, and with the customer

 

Knowledge, Skills and Abilities

 

  • Technical expertise in data architecture principles and design aspects of various DBMS and reporting concepts.
  • Solid understanding of key DBMS platforms like SQL Server, Azure SQL
  • Results-oriented, diligent, and works with a sense of urgency. Assertive, responsible for his/her own work (self-directed), have a strong affinity for defining work in deliverables, and be willing to commit to deadlines.
  • Experience in MDM tools like MS DQ, SAS DM Studio, Tamr, Profisee, Reltio etc.
  • Experience in Report and Dashboard development
  • Statistical and Machine Learning models
  • Python (sklearn, numpy, pandas, gensim)
  • Nice to Have:
  • 1yr of ETL experience
  • Natural Language Processing
  • Neural networks and Deep learning
  • Experience with the keras, tensorflow, spacy, nltk, and LightGBM Python libraries

 

Interaction :  Frequently interacts with subordinate supervisors.

Education : Bachelor’s degree, preferably in Computer Science, B.E or other quantitative field related to the area of assignment. Professional certification related to the area of assignment may be required

Experience :  7 years of Pharmaceutical /Biotech/life sciences experience, 5 years of Clinical Trials experience and knowledge, Excellent Documentation, Communication, and Presentation Skills including PowerPoint

 

Yottaasys AI LLC
Dinesh Krishnan
Posted by Dinesh Krishnan
Bengaluru (Bangalore), Singapore
2 - 5 yrs
₹9L - ₹20L / yr
skill iconData Science
skill iconDeep Learning
skill iconR Programming
skill iconPython
skill iconMachine Learning (ML)
+2 more
We are a US-headquartered product company looking to hire a few passionate deep learning and computer vision team players with 2-5 years of experience! If you are any of these:
1. An expert in deep learning and machine learning techniques,
2. Extremely good at image/video processing,
3. Possessing a good understanding of linear algebra, optimization techniques, statistics, and pattern recognition,
then you are the right fit for this position.
Alien Brains
Praveen Baheti
Posted by Praveen Baheti
Kolkata
0 - 15 yrs
₹4L - ₹8L / yr
skill iconPython
skill iconDeep Learning
skill iconMachine Learning (ML)
skill iconData Analytics
skill iconData Science
+3 more
You'll be giving industry-standard training to engineering students and mentoring them as they develop their own custom mini projects.
Artivatic
Layak Singh
Posted by Layak Singh
Bengaluru (Bangalore)
2 - 7 yrs
₹5L - ₹12L / yr
OpenCV
skill iconMachine Learning (ML)
skill iconDeep Learning
skill iconPython
Artificial Intelligence (AI)
+1 more
About Artivatic: Artivatic is a technology startup that uses AI/ML/deep learning to build intelligent products & solutions for finance, healthcare & insurance businesses. It is based out of Bangalore with a 20+ member team focused on technology. Artivatic is building cutting-edge solutions to enable 750 million+ people to get insurance, financial access, and health benefits using alternative data sources, increasing their productivity, efficiency, automation power, and profitability, and hence improving the way they do business more intelligently & seamlessly. Artivatic offers lending underwriting, credit/insurance underwriting, fraud prediction, personalization, recommendation, risk profiling, consumer profiling intelligence, KYC automation & compliance, automated decisions, monitoring, claims processing, sentiment/psychology behaviour analysis, auto insurance claims, travel insurance, disease prediction for insurance, and more. We have raised US $300K earlier, built products successfully, and completed a few PoCs with some top enterprises in the insurance, banking & health sectors. Currently, we are 4 months away from generating continuous revenue.

We at Artivatic are seeking a passionate, talented, and research-focused computer engineer with a strong machine learning and computer vision background to help build industry-leading technology with a focus on document text extraction and parsing using OCR across different languages.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Computer Vision, or a related field with specialization in image processing or machine learning.
- Research experience with deep learning models for image processing or OCR-related fields is preferred.
- A publication record on deep learning models for computer vision in conferences/journals is a plus.

Required Skills:
- Excellent Python development skills in a Linux environment.
- Programming skills with multi-threaded GPU CUDA computing and API solutions.
- Experience applying machine learning and computer vision principles to real-world data, and working with scanned and documented images.
- Good knowledge of computer science, math, and statistics fundamentals (algorithms and data structures, meshing, sampling theory, linear algebra, etc.)
- Knowledge of data science technologies such as Python, Pandas, SciPy, NumPy, matplotlib, etc.
- Broad computer vision knowledge: construction, feature detection, segmentation, classification; machine/deep learning: algorithm evaluation, preparation, analysis, modeling, and execution.
- Familiarity with OpenCV, Dlib, YOLO, Capsule Networks or similar, and with open-source AR platforms and products.
- Strong problem-solving and logical skills.
- A go-getter attitude with a willingness to learn new technologies.
- Well versed in software design paradigms and good development practices.

Responsibilities:
- Developing novel algorithms and modeling techniques to advance the state of the art in document and text extraction.
- Image recognition, object identification, and visual recognition.
- Working closely with R&D and machine learning engineers implementing the algorithms that power user- and developer-facing products.
- Being responsible for measuring and optimizing the quality of your algorithms.

Experience: 3 years+
Location: Sony World Signal, Koramangala 4th Block, Bangalore
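As a tiny illustration of the image-processing groundwork behind OCR, binarising a grayscale page with a global threshold is often the first preprocessing step; in production this would be done with OpenCV (and a smarter method such as Otsu or adaptive thresholding), and the pixel values below are invented:

```python
def binarize(gray, threshold=None):
    """Binarise a grayscale image (list of rows, values 0-255) -- the kind
    of preprocessing step that precedes OCR. If no threshold is given,
    use the image's mean intensity as a simple global cut."""
    pixels = [p for row in gray for p in row]
    if threshold is None:
        threshold = sum(pixels) / len(pixels)  # naive global threshold
    return [[255 if p >= threshold else 0 for p in row] for row in gray]

# Dark text strokes on a light background.
page = [
    [220, 230, 225],
    [ 30,  35, 210],
    [215,  40, 228],
]
binary = binarize(page)
```

The dark strokes survive as 0-valued pixels while the background is pushed to 255, giving the OCR engine a clean two-level input to segment into characters.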