VP Data Engineering

at Dream Game Studios

Posted by Vivek Pandey
Mumbai, Navi Mumbai
9 - 100 yrs
₹30L - ₹90L / yr
Full time
Skills
Big Data
Apache Hive
NoSQL Databases
MongoDB
Web Scraping
Redshift
Your Role:
• Lead the strategy, planning and engineering for Data at Dream11
• Build a robust real-time and batch analytics platform for analytics and machine learning
• Design and develop the data model for our data warehouse and other data engineering solutions
• Collaborate with various departments to develop and maintain a data platform solution, and recommend emerging technologies for data storage, processing and analytics

MUST have:
• 9+ years of experience in data engineering, data modelling and schema design, and 5+ years of programming expertise in Java or Scala
• Understanding of real-time as well as batch-processing big data technologies (Spark, Storm, Kafka, Flink, MapReduce, YARN, Pig, Hive, HDFS, Oozie, etc.)
• Developed applications that work with NoSQL stores (e.g. Elasticsearch, HBase, Cassandra, MongoDB, CouchDB)
• Experience in gathering and processing raw data at scale, including writing scripts, web scraping, calling APIs, writing SQL queries, etc.
• Bachelor's/Master's in Computer Science/Engineering or a related technical degree

Bonus:
• Experience with cloud-based data stores like Redshift and BigQuery is an advantage
• Love sports, especially cricket and football
• Have worked previously in a high-growth tech startup
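The data-warehouse modelling work described above is commonly organized as a star schema (a fact table plus dimension tables). A minimal, hypothetical sketch in plain Python, with all table and field names invented for illustration:

```python
# Minimal star-schema sketch: one fact table with foreign keys into
# two dimension tables, joined in plain Python for illustration.

# Dimension tables (all names hypothetical)
dim_user = {
    1: {"name": "anju", "city": "Mumbai"},
    2: {"name": "ravi", "city": "Pune"},
}
dim_match = {
    10: {"sport": "cricket"},
    11: {"sport": "football"},
}

# Fact table: one row per contest entry
fact_entries = [
    {"user_id": 1, "match_id": 10, "entry_fee": 50},
    {"user_id": 2, "match_id": 10, "entry_fee": 100},
    {"user_id": 1, "match_id": 11, "entry_fee": 25},
]

def fees_by_sport(facts, matches):
    """Aggregate entry fees per sport by resolving the match dimension."""
    totals = {}
    for row in facts:
        sport = matches[row["match_id"]]["sport"]
        totals[sport] = totals.get(sport, 0) + row["entry_fee"]
    return totals

print(fees_by_sport(fact_entries, dim_match))  # {'cricket': 150, 'football': 25}
```

A real warehouse would express this as SQL tables, with the fact table's foreign keys joined to the dimensions; the in-memory version above only shows the shape.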

About Dream Game Studios

About us:
Dream11 is India’s leading fantasy sports platform, with a growing user base of over 15 million Indians playing fantasy cricket, football, kabaddi & basketball. Our vision is to make fantasy sports a part of every sports fan's life.
Dream11 is a Series C-funded startup with marquee investors like Multiple Equity, Kalaari Capital and the US-based hedge fund Think Investments. The company was founded by Harsh Jain and Bhavit Sheth in 2012.
 
Our Dream Team of 100 Dreamsters has been the engine of Dream11's growth, and now we're looking to add to our team of smart, self-motivated and passionate people who are on a sprint to build one of the largest fantasy sports platforms in the world.
 Oh, and did we mention that we have some very generous stock options?
 
If you are open to learning and unlearning as we become one of the leading fantasy sports platforms, then Dream11 is the place for you.
 
What is the Dream11 family like?
At Dream11, we don’t have employees, we have Dreamsters.
Dreamsters you can trust. Dreamsters who care deeply about you. Dreamsters who are intelligent, hard working, fun to be with and respect each other. We are a diverse team united by a common purpose: to make every game exciting for all sports fans.
 
What we promise?
An amazing stadium-like environment that you will love. A place where experiments and learning never stop. A place without micromanagement and bureaucracy. An opportunity to create an impact from day 1.
Come join India’s leading sports tech company. The grass is literally greener in our field.
 
Our Captains:
- Harsh Jain (Co-Founder & CEO; https://www.linkedin.com/in/harshj22 )
- Bhavit Sheth (Co-Founder & COO; https://www.linkedin.com/in/bhavitsheth )
Founded
2012
Type
Product
Size
100-500 employees
Stage
Raised funding

Similar jobs

Machine Learning Engineer

at Contact Center software that leverages AI to improve customer

Agency job
via Qrata
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Amazon Web Services (AWS)
Python
Java
Big Data
C#
TensorFlow
Bengaluru (Bangalore)
4 - 8 yrs
₹17L - ₹40L / yr
Role: Machine Learning Engineer

As a machine learning engineer on the team, you will:
• Help science and product teams innovate in developing and improving end-to-end solutions for machine learning-based security/privacy controls
• Partner with scientists to brainstorm and create new ways to collect/curate data
• Design and build infrastructure critical to solving problems in privacy-preserving machine learning
• Help the team self-organize and follow machine learning best practices
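The posting mentions privacy-preserving machine learning without naming specific techniques. As an illustration only (an assumption, not this team's actual method), here is a sketch of one standard building block, the Laplace mechanism for a differentially private sum; all parameters are invented:

```python
import math
import random

# Each value is clipped to a known bound so the sum's sensitivity is
# bounded, then Laplace noise calibrated to sensitivity/epsilon is added.

def laplace_sample(scale, rng):
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_sum(values, upper, epsilon, rng):
    """Epsilon-DP sum of non-negative values clipped to [0, upper]."""
    clipped = sum(min(max(v, 0.0), upper) for v in values)
    return clipped + laplace_sample(upper / epsilon, rng)

rng = random.Random(0)
noisy = private_sum([3.0, 7.5, 120.0], upper=10.0, epsilon=1.0, rng=rng)
print(round(noisy, 2))  # true clipped sum is 20.5, plus calibrated noise
```

With a larger epsilon (weaker privacy) the noise scale shrinks and the noisy sum converges to the clipped total.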

Basic Qualifications

• 4+ years of professional software development experience, including in the big data and machine learning fields
• 4+ years of experience contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems
• 4+ years of experience as a mentor, tech lead or leader of an engineering team
• Programming experience with at least two modern languages such as Python, Java, C++ or C#, including object-oriented design
• Knowledge of common ML frameworks such as TensorFlow and PyTorch
• Experience with cloud-provider machine learning tools such as AWS SageMaker
• BS in Computer Science or equivalent
Job posted by
Rayal Rajan

Data Scientist

at Top startup of India - News App

Agency job
via Jobdost
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
TensorFlow
Deep Learning
Python
PySpark
MongoDB
Hadoop
Spark
Noida
6 - 10 yrs
₹35L - ₹65L / yr
This is an individual contributor role; only candidates from Tier 1/2 and product-based companies may apply.

Requirements-

● B.Tech/Master's in Mathematics, Statistics, Computer Science or another quantitative field
● 2-5 years of work experience in the ML domain
● Hands-on coding experience in Python
● Experience in machine learning techniques such as regression, classification, predictive modelling, clustering, deep learning and NLP
● Working knowledge of TensorFlow/PyTorch
Optional Add-ons-
● Experience with distributed computing frameworks: Map/Reduce, Hadoop, Spark etc.
● Experience with databases: MongoDB
Job posted by
Sathish Kumar

Senior Data Engineer - Big Data

at CodeCraft Technologies Private Limited

Founded 2011  •  Services  •  100-1000 employees  •  Profitable
Data engineering
SQL
Spark
Apache
HiveQL
Big Data
Bengaluru (Bangalore), Mangalore
4 - 8 yrs
Best in industry
Roles and Responsibilities:
• Ingest data from files, streams and databases, and process it with Apache Kafka, Spark, Google Firestore and Google BigQuery
• Drive data foundation initiatives such as modelling, data quality management, data governance, data maturity assessments and data strategy in support of key business stakeholders
• Implement ETL processes using Google BigQuery
• Monitor performance and advise on any necessary infrastructure changes
• Implement scalable solutions to meet ever-increasing data volumes, using big data/cloud technologies such as PySpark, Kafka and Google BigQuery
• Select and integrate any big data tools and frameworks required to provide the requested capabilities
• Design and develop distributed, high-volume, high-velocity, multi-threaded event processing systems
• Develop efficient software code for multiple use cases, leveraging Python and big data technologies built on the platform
• Provide high operational excellence, guaranteeing high availability and platform stability

Desired Profile:
• Deep understanding of the ecosystem, including ingestion (e.g. Kafka, Kinesis, Apache Airflow), processing frameworks (e.g. Spark, Flink) and storage engines (e.g. Google Firestore, Google BigQuery)
• In-depth understanding of BigQuery architecture, table partitioning, clustering, table types, best practices, etc.
• Should know how to reduce BigQuery costs by reducing the amount of data processed by your queries
• Practical knowledge of Kafka for building real-time streaming data pipelines and applications that adapt to the data streams
• Should be able to speed up queries by using denormalized data structures, with or without nested repeated fields
• Experience implementing ETL jobs using BigQuery
• Understanding of BigQuery ML
• Knowledge of recent database technologies such as MongoDB, Cassandra and Databricks
• Experience with various messaging systems, such as Kafka or RabbitMQ
• Experience with GCP and GCP managed services
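On the BigQuery cost point above: on-demand queries are billed by bytes scanned, and because storage is columnar, selecting fewer columns (and filtering on a partition column) shrinks the scanned volume. A rough, illustrative estimate in Python; the table sizes and per-TiB price below are assumed for the sketch, not taken from current pricing:

```python
# Back-of-envelope estimate of BigQuery on-demand query cost, which is
# billed by bytes scanned. Columnar storage means only the referenced
# columns are read, and a partition filter limits scanning to the
# matching partitions. All sizes below are invented for illustration.

PRICE_PER_TIB = 5.0  # assumed on-demand price in USD per TiB scanned

def scanned_bytes(column_bytes, selected_columns, partitions_total, partitions_hit):
    """Bytes scanned = size of the referenced columns, scaled by the
    fraction of partitions the WHERE clause actually touches."""
    col_total = sum(column_bytes[c] for c in selected_columns)
    return col_total * partitions_hit / partitions_total

# Hypothetical 10-column table, ~1 TiB total, evenly sized columns,
# partitioned by day over 365 days.
TIB = 1024 ** 4
cols = {f"c{i}": TIB // 10 for i in range(10)}

full_scan = scanned_bytes(cols, cols.keys(), 365, 365)  # SELECT * over all days
pruned = scanned_bytes(cols, ["c0", "c1"], 365, 7)      # 2 columns, 1 week

print(f"full scan: ${full_scan / TIB * PRICE_PER_TIB:.2f}")
print(f"pruned:    ${pruned / TIB * PRICE_PER_TIB:.2f}")
```

The same reasoning is why `SELECT *` is discouraged in BigQuery: `LIMIT` does not reduce the bytes billed, but column pruning and partition filters do.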
Job posted by
Priyanka Praveen

Big Data Engineer

at BDIPlus

Founded 2014  •  Product  •  100-500 employees  •  Profitable
Apache Hive
Spark
Scala
PySpark
Data engineering
Big Data
Hadoop
Java
Python
Remote only
2 - 6 yrs
₹6L - ₹20L / yr
We are looking for big data engineers to join our transformational consulting team serving one of our top US clients in the financial sector. You'd get an opportunity to develop big data pipelines and convert business requirements into production-grade services and products. Rather than prescribing how to do a particular task, we believe in giving people the opportunity to think outside the box and come up with their own innovative solutions.
You will primarily be developing, managing and executing multiple prospect campaigns as part of the Prospect Marketing Journey to ensure the best conversion and retention rates. Below are the roles, responsibilities and skillsets we are looking for; if these resonate with you, please get in touch by applying to this role.
Roles and Responsibilities:
• You'd be responsible for the development and maintenance of applications built with Enterprise Java and distributed technologies.
• You'd collaborate with developers, product managers, business analysts and business users in conceptualizing, estimating and developing new software applications and enhancements.
• You'd assist in the definition, development, and documentation of the software's objectives, business requirements, deliverables, and specifications in collaboration with multiple cross-functional teams.
• Assist in the design and implementation process for new products, and research and create POCs for possible solutions.
Skillset:
• Bachelor's or Master's degree in a technology-related field preferred.
• Overall experience of 2-3 years with big data technologies.
• Hands-on experience with Spark (Java/Scala).
• Hands-on experience with Hive and shell scripting.
• Knowledge of HBase and Elasticsearch.
• Development experience in Java/Python preferred.
• Familiarity with profiling, code coverage, logging, common IDEs and other development tools.
• Demonstrated verbal and written communication skills, and ability to interface with business, analytics and IT organizations.
• Ability to work effectively in a short-cycle, team-oriented environment, managing multiple priorities and tasks.
• Ability to identify non-obvious solutions to complex problems.
Job posted by
Puja Kumari

Data Engineer

at A large software MNC with over 20k employees in India

Agency job
via RS Consultants
Spark
Data engineering
Data Engineer
Apache Kafka
Apache Spark
Java
Python
Hadoop
Amazon Web Services (AWS)
Windows Azure
Big Data
Pune
5 - 12 yrs
₹15L - ₹22L / yr

As a Senior Engineer - Big Data Analytics, you will help with the architectural design and development of healthcare platforms, products, services, and tools to deliver the vision of the company. You will contribute significantly to engineering, technology, and platform architecture through innovation and collaboration with engineering teams and related business functions. This is a critical, highly visible role within the company that has the potential to drive significant business impact.


The scope of this role will include strong technical contribution in the development and delivery of Big Data Analytics Cloud Platform, Products and Services in collaboration with execution and strategic partners. 

 

Responsibilities:

  • Design, develop, operate, and drive a scalable, resilient, and cloud-native big data analytics platform to address business requirements
  • Help drive technology transformation to achieve business transformation, through the creation of the Healthcare Analytics Data Cloud that will help Change establish a leadership position in healthcare data & analytics in the industry
  • Help in the successful implementation of Analytics as a Service
  • Ensure platforms and services meet SLA requirements
  • Be a significant contributor and partner in the development and execution of the enterprise technology strategy

 

Qualifications:

  • At least 2 years of experience in software development for big data analytics and cloud, and at least 5 years of experience in software development overall
  • Experience working with High Performance Distributed Computing Systems in public and private cloud environments
  • Understands big data open-source ecosystems and their players; contribution to open source is a strong plus
  • Experience with Spark, Spark Streaming, Hadoop, AWS/Azure, NoSQL databases, in-memory caches, distributed computing, Kafka, OLAP stores, etc.
  • Successful track record of creating a working big data stack aligned with business needs and delivering enterprise-class products on time
  • Experience with delivering and managing operating environments at scale
  • Experience with big data/microservice-based systems, SaaS, PaaS, and their architectures
  • Experience developing systems in Java, Python and Unix
  • BSCS, BSEE or equivalent; MSCS preferred
Job posted by
Rahul Inamdar

Data Engineer

at A company that provides both wholesale and retail funding

Agency job
via Multi Recruit
Spark
Big Data
Data Engineer
Hadoop
Apache Kafka
Data Warehousing
Data Modeling
Hive
Kafka
Bengaluru (Bangalore)
1 - 5 yrs
₹15L - ₹20L / yr
  • 1-5 years of experience in building and maintaining robust data pipelines, enriching data, and building low-latency/high-performance data analytics applications.
  • Experience handling complex, high-volume, multi-dimensional data and architecting data products on streaming, serverless, and microservices-based architectures and platforms.
  • Experience in data warehousing, data modeling, and data architecture.
  • Expert-level proficiency with relational and NoSQL databases.
  • Expert-level proficiency in Python and PySpark.
  • Familiarity with big data technologies and utilities (Spark, Hive, Kafka, Airflow).
  • Familiarity with cloud services (preferably AWS).
  • Familiarity with MLOps processes such as data labeling, model deployment, the data-model feedback loop, and data drift.
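The pipeline work described above (ingest, enrich, check completeness) can be sketched as a chain of Python generators. Stage and field names here are invented, and a production version would run on Spark or Kafka rather than in memory:

```python
# A toy extract -> enrich -> load pipeline built from Python generators,
# illustrating the pipeline shape described above.

def extract(records):
    """Source stage: yield raw events one at a time (streaming-friendly)."""
    for rec in records:
        yield dict(rec)

def enrich(stream, lookup):
    """Enrichment stage: attach a dimension attribute, dropping unmatched
    rows so downstream consumers never see incomplete records."""
    for rec in stream:
        meta = lookup.get(rec["user_id"])
        if meta is None:
            continue  # completeness check: skip records we cannot enrich
        rec["segment"] = meta["segment"]
        yield rec

def load(stream, sink):
    """Sink stage: append to an in-memory list standing in for a warehouse."""
    for rec in stream:
        sink.append(rec)
    return sink

raw = [{"user_id": 1, "amount": 10}, {"user_id": 99, "amount": 5}]
users = {1: {"segment": "retail"}}
warehouse = load(enrich(extract(raw), users), [])
print(warehouse)  # only the enrichable record survives
```

Because each stage is a generator, records flow through one at a time; the same topology maps naturally onto a streaming framework.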

Key Roles/Responsibilities:

  • Act as a technical leader in resolving problems, working with both technical and non-technical audiences.
  • Identifying and solving issues with data pipelines regarding consistency, integrity, and completeness.
  • Lead data initiatives, architecture design discussions, and implementation of next-generation BI solutions.
  • Partner with data scientists, tech architects to build advanced, scalable, efficient self-service BI infrastructure.
  • Provide thought leadership and mentor data engineers in information presentation and delivery.

 

 

Job posted by
Kavitha S

Talend Developer

at Helical IT Solution

Founded 2012  •  Products & Services  •  20-100 employees  •  Profitable
ETL
Big Data
TAC
PL/SQL
Relational Database (RDBMS)
MySQL
Hyderabad
1 - 5 yrs
₹3L - ₹8L / yr

ETL Developer – Talend

Job Duties:

  • ETL Developer is responsible for the design and development of ETL jobs that follow standards and best practices and are maintainable, modular and reusable.
  • Proficiency with Talend or Pentaho Data Integration / Kettle.
  • ETL Developer will analyze and review complex object and data models and the metadata repository in order to structure the processes and data for better management and more efficient access.
  • Working on multiple projects, and delegating work to junior analysts to deliver projects on time.
  • Training and mentoring junior analysts and building their proficiency in the ETL process.
  • Preparing mapping documents to extract, transform, and load data, ensuring compatibility with all tables and requirement specifications.
  • Experience in ETL system design and development with Talend / Pentaho PDI is essential.
  • Create quality rules in Talend.
  • Tune Talend / Pentaho jobs for performance optimization.
  • Write relational (SQL) and multidimensional (MDX) database queries.
  • Functional knowledge of Talend Administration Center / Pentaho Data Integrator, job servers, load-balancing setup, and all their administrative functions.
  • Develop, maintain, and enhance unit test suites to verify the accuracy of ETL processes, dimensional data, OLAP cubes and various forms of BI content, including reports, dashboards, and analytical models.
  • Exposure to the MapReduce components of Talend / Pentaho PDI.
  • Comprehensive understanding and working knowledge of data warehouse loading, tuning, and maintenance.
  • Working knowledge of relational database theory and dimensional database models.
  • Creating and deploying Talend / Pentaho custom components is an added advantage.
  • Java knowledge is nice to have.
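The mapping document mentioned above boils down to a source-to-target column mapping plus per-column transforms. A tool-agnostic sketch in Python (Talend and Pentaho express this graphically; all column names and rules here are hypothetical):

```python
# A source-to-target mapping of the kind an ETL mapping document
# describes, expressed as data plus a tiny transform step.

MAPPING = {
    # target column: (source column, transform)
    "customer_id": ("cust_no", int),
    "full_name":   ("name", str.strip),
    "country":     ("ctry", str.upper),
}

def apply_mapping(source_row, mapping=MAPPING):
    """Build one target row by applying each column's transform."""
    return {tgt: fn(source_row[src]) for tgt, (src, fn) in mapping.items()}

row = {"cust_no": "42", "name": "  Asha Rao ", "ctry": "in"}
print(apply_mapping(row))
# {'customer_id': 42, 'full_name': 'Asha Rao', 'country': 'IN'}
```

Keeping the mapping as data rather than code makes it easy to review against the mapping document and to unit-test each transform in isolation.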

Skills and Qualification:

  • BE, B.Tech / MS Degree in Computer Science, Engineering or a related subject.
  • 3+ years of experience.
  • Proficiency with Talend or Pentaho Data Integration / Kettle.
  • Ability to work independently.
  • Ability to handle a team.
  • Good written and oral communication skills.
Job posted by
Niyotee Gupta

Data Engineer - Google Cloud Platform

at Datalicious Pty Ltd

Founded 2007  •  Products & Services  •  20-100 employees  •  Raised funding
Python
Amazon Web Services (AWS)
Google Cloud Storage
Big Data
Data Analytics
Datawarehousing
Software Development
Data Science
Bengaluru (Bangalore)
2 - 7 yrs
₹7L - ₹20L / yr
DESCRIPTION:
We're looking for an experienced Data Engineer with strong cloud technology experience to help our big data team take our products to the next level. This is a hands-on role: you will be required to code and develop the product in addition to your leadership role. You need a strong software development background and should love working with cutting-edge big data platforms. You are expected to bring extensive hands-on experience with Amazon Web Services (Kinesis streams, EMR, Redshift), Spark and other big data processing frameworks and technologies, as well as advanced knowledge of RDBMS and data warehousing solutions.

REQUIREMENTS:
- Strong background working on large-scale data warehousing and data processing solutions.
- Strong Python and Spark programming experience.
- Strong experience in building big data pipelines.
- Very strong SQL skills are an absolute must.
- Good knowledge of OO, functional and procedural programming paradigms.
- Strong understanding of various design patterns.
- Strong understanding of data structures and algorithms.
- Strong experience with Linux operating systems.
- At least 2+ years of experience working as a software developer in a data-driven environment.
- Experience working in an agile environment.
- Lots of passion, motivation and drive to succeed!

Highly desirable:
- Understanding of agile principles, specifically Scrum.
- Exposure to Google Cloud Platform services such as BigQuery, Compute Engine, etc.
- Docker, Puppet, Ansible, etc.
- Understanding of the digital marketing and digital advertising space would be advantageous.

BENEFITS:
Datalicious is a global data technology company that helps marketers improve customer journeys through the implementation of smart data-driven marketing strategies. Our team of marketing data specialists offers a wide range of skills suitable for any challenge and covers everything from web analytics to data engineering, data science and software development.
Experience: Join us at any level and we promise you'll feel up-levelled in no time, thanks to the fast-paced, transparent and aggressive growth of Datalicious.
Exposure: Work with ONLY the best clients in the Australian and SEA markets; every problem you solve directly impacts millions of real people at large scale across industries.
Work culture: Voted one of the Top 10 Tech Companies in Australia. Never a boring day at work, and we walk the talk. The CEO organises nerf-gun bouts in the middle of a hectic day.
Money: We'd love to have a long-term relationship, because long-term benefits are exponential. We encourage people to get technical certifications via online courses or digital schools.
So if you are looking for the chance to work for an innovative, fast-growing business that will give you exposure to a diverse range of the world's best clients, products and industry-leading technologies, then Datalicious is the company for you!
Job posted by
Ramjee Ganti

Data Scientist

at Saama Technologies

Founded 1997  •  Products & Services  •  100-1000 employees  •  Profitable
Data Science
Python
Machine Learning (ML)
Natural Language Processing (NLP)
Big Data
Agile/Scrum
Project Management
Pune
4 - 8 yrs
₹1L - ₹16L / yr
Description:
- Must have direct, hands-on experience (4 years) building complex data science solutions
- Must have fundamental knowledge of inferential statistics
- Should have worked on predictive modelling using Python / R
- Experience should include: file I/O, data harmonization and data exploration; machine learning techniques (supervised, unsupervised); multi-dimensional array processing; deep learning; NLP; image processing
- Prior experience in the healthcare domain is a plus
- Experience using big data is a plus
- Should have excellent analytical and problem-solving ability, and be able to grasp new concepts quickly
- Should be well familiar with the Agile project management methodology
- Should have excellent written and verbal communication skills
- Should be a team player with an open mind
Job posted by
Sandeep Chaudhary

Big Data Evangelist

at UpX Academy

Founded 2016  •  Products & Services  •  20-100 employees  •  Profitable
Spark
Hadoop
MongoDB
Python
Scala
Apache Kafka
Apache Flume
Cassandra
Noida, Hyderabad, NCR (Delhi | Gurgaon | Noida)
2 - 6 yrs
₹4L - ₹12L / yr
Looking for a technically sound and excellent trainer on big data technologies. This is an opportunity to become well known in the industry and gain visibility. Host regular sessions on big data-related technologies and get paid to learn.
Job posted by
Suchit Majumdar