Business Analyst

at GreedyGame

Posted by Shreyoshi Ghosh
Bengaluru (Bangalore)
1 - 2 yrs
₹4L - ₹12L / yr
Full time
Skills
MS-Excel
SQL
Data Analytics
Python
R Language
Business Analysis

About Us:

GreedyGame is looking for a Business Analyst to join its clan. We are looking for an enthusiastic Business Analyst who likes to play with data. You'll be building insights from data, creating analytical dashboards, and monitoring KPI values. You will also coordinate with teams working on different layers of the infrastructure.

 

Job details:

 

Seniority Level: Associate

Industry: Marketing & Advertising

Employment Type: Full Time

Job Location: Bangalore

Experience: 1-2 years

 

WHAT ARE WE LOOKING FOR?

 

  • Excellent planning, organizational, and time management skills.
  • Exceptional analytical and conceptual thinking skills.
  • Previous experience working closely with Operations and Product teams.
  • Competency in Excel and SQL is a must.
  • Experience with a programming language like Python is required.
  • Knowledge of marketing tools is preferred.

 

 

WHAT WILL BE YOUR RESPONSIBILITIES?

 

  • Evaluating business processes, anticipating requirements, uncovering areas for improvement, and developing and implementing solutions.
  • Generating meaningful insights to help the marketing and product teams enhance the user experience for mobile and web apps.
  • Leading ongoing reviews of business processes and developing optimization strategies.
  • Performing requirements analysis from a user and business point of view.
  • Combining data from multiple sources such as SQL tables, Google Analytics, and in-house analytics signals, and deriving relevant insights (a minimal illustrative sketch follows this list).
  • Defining the success metrics and KPIs for different products and features, and making sure they are achieved.
  • Acting as quality assurance liaison prior to the release of new data analyses or applications.
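As a purely illustrative companion to the data-combination point above, here is a minimal Python sketch that joins a Google Analytics export with a hypothetical in-house BigQuery table to track a signup-rate KPI. The project, dataset, table, column, and file names are invented for the example and are not GreedyGame's actual schema.

```python
# Minimal sketch: combine a Google Analytics CSV export with an in-house
# BigQuery table to derive one KPI. All names below are hypothetical.
import pandas as pd
from google.cloud import bigquery

# Daily sessions exported from Google Analytics (columns: date, sessions)
ga_sessions = pd.read_csv("ga_sessions.csv", parse_dates=["date"])

# Daily signups from a hypothetical in-house table
client = bigquery.Client()
signups = client.query(
    """
    SELECT DATE(created_at) AS date, COUNT(*) AS signups
    FROM `my-project.analytics.users`
    GROUP BY date
    """
).to_dataframe()
signups["date"] = pd.to_datetime(signups["date"])

# Join the two sources and derive the KPI
kpi = ga_sessions.merge(signups, on="date", how="left").fillna({"signups": 0})
kpi["signup_rate"] = kpi["signups"] / kpi["sessions"]
print(kpi.tail())
```

In practice the same join could live entirely in BigQuery SQL and feed a dashboard; the pandas version just shows the shape of the work in a few lines.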

 

Skills and Abilities:

  • Python
  • SQL
  • Business Analytics
  • BigQuery

 

WHAT'S IN IT FOR YOU?

  • An opportunity to be a part of a fast scaling start-up in the AdTech space that offers unmatched services and products.
  • To work with a team of young enthusiasts who are always upbeat and self-driven to achieve bigger milestones in shorter time spans.
  • An open workspace, in line with the company's open-door policy, located in one of the most happening parts of Bangalore.
  • A well-fed stomach makes the mind work better, so we provide free lunch with a wide variety every day of the week, a stocked-up pantry to satisfy your craving for munchies, a foosball table to bust stress, and above all a great working environment.
  • We believe that we grow as you grow. Once you are a part of our team, your growth also becomes essential to us, and to make sure that happens, we give timely formal and informal feedback.

About GreedyGame

GreedyGame is a platform that enables blending ads into the mobile gaming experience using assets like backgrounds, characters, and power-ups. It helps advertisers engage audiences while they play games, empowers game developers to monetize their development efforts through non-intrusive advertising, and allows gamers to enjoy gaming content without having to deal with distracting advertising.

 

 

Founded 2013  •  Product  •  20-100 employees  •  Raised funding

Similar jobs

Database Administrator

at a reputed service-based company

Agency job
via Jobdost
SQL
MySQL
MySQL DBA
MariaDB
MS SQLServer
Bengaluru (Bangalore)
4 - 6 yrs
₹12L - ₹15L / yr
Role Description
As a Database Administrator, you will be responsible for designing, testing, planning, implementing, protecting, operating, managing and maintaining our company’s databases. The goal is to provide a seamless flow of information throughout the company, considering both backend data structure and frontend accessibility for end-users. You get to work with some of the best minds in the industry at a place where opportunity lurks everywhere and in everything.
Responsibilities
Your responsibilities are as follows.
• Build database systems of high availability and quality depending on each end user’s specialised role
• Design and implement databases in accordance with end users’ information needs and views
• Define users and enable data distribution to the right user, in the appropriate format and in a timely manner
• Use high-speed transaction recovery techniques and back up data
• Minimise database downtime and manage parameters to provide fast query responses
• Provide proactive and reactive data management support and training to users
• Determine, enforce and document database policies, procedures and standards
• Perform tests and evaluations regularly to ensure data security, privacy and integrity
• Monitor database performance, implement changes and apply new patches and versions when required
Required Qualifications
We are looking for individuals who are curious, excited about learning, and comfortable navigating the uncertainties and complexities that come with a growing company. Some qualifications that we think would help you thrive in this role are:
• Minimum 4 years of experience as a Database Administrator
• Hands-on experience with database standards and end user applications
• Excellent knowledge of data backup, recovery, security, integrity and SQL
• Familiarity with database design, documentation and coding
• Previous experience with DBA case tools (frontend/backend) and third-party tools
• Familiarity with programming language APIs
• Problem-solving skills and the ability to think algorithmically
• Bachelor’s/Master’s in CS/IT Engineering, BCA/MCA, or B.Sc/M.Sc in CS/IT

Preferred Qualifications
• Sense of ownership and pride in your performance and its impact on the company’s success
• Critical thinking and problem-solving skills
• Team player
• Good time-management skills
• Great interpersonal and communication skills
Job posted by
Saida Jabbar

Data Annotation Analyst

at Sizzle

Founded 2018  •  Product  •  20-100 employees  •  Raised funding
Data Analytics
Data Annotation
Natural Language Processing (NLP)
Computer Vision
Bengaluru (Bangalore)
0 - 3 yrs
₹1.5L - ₹1.8L / yr

Sizzle is an exciting new startup that’s changing the world of gaming.  At Sizzle, we’re building AI to automate gaming highlights, directly from Twitch and YouTube streams. 


For this role, we're looking for someone who ideally loves to watch video gaming content on Twitch and YouTube. Specifically, you will help generate training data for all the AI we are building. This will include gathering screenshots, clips and other data from gaming videos on Twitch and YouTube. You will then be responsible for labeling and annotating them. You will work very closely with our AI engineers.


You will:

  • Gather training data as specified by the management and engineering team
  • Label and annotate all the training data
  • Ensure all data is prepped and ready to feed into the AI models
  • Revise the training data as specified by the engineering team
  • Test the output of the AI models and update training data needs

You should have the following qualities:

  • Willingness to work hard and hit deadlines
  • Work well with people
  • Be able to work remotely (if not in Bangalore)
  • Interested in learning about AI and computer vision
  • Willingness to learn rapidly on the job
  • Ideally a gamer or someone interested in watching gaming content online

Skills:

Data labeling, annotation, AI, computer vision, gaming


Work Experience:  0 years to 3 years


About Sizzle

Sizzle is building AI to automate gaming highlights, directly from Twitch and YouTube videos. Presently, there are over 700 million fans around the world that watch gaming videos on Twitch and YouTube. Sizzle is creating a new highlights experience for these fans, so they can catch up on their favorite streamers and esports leagues. Sizzle is available at www.sizzle.gg.
Job posted by
Vijay Koduri

Data Engineer

at Slintel

Agency job
via Qrata
Big Data
ETL
Apache Spark
Spark
Data engineer
Data engineering
Linux/Unix
MySQL
Python
Amazon Web Services (AWS)
Bengaluru (Bangalore)
4 - 9 yrs
₹20L - ₹28L / yr
Responsibilities
  • Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for Data Lake/Data Warehouse.
  • Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs.
  • Assemble large, complex data sets from third-party vendors to meet business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technology.
  • Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems.

Requirements
  • 5+ years of experience in a Data Engineer role.
  • Proficiency in Linux.
  • Must have SQL knowledge and experience working with relational databases and query authoring, as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena.
  • Must have experience with Python/Scala.
  • Must have experience with Big Data technologies like Apache Spark.
  • Must have experience with Apache Airflow (a minimal DAG sketch follows this list).
  • Experience with data pipeline and ETL tools like AWS Glue.
  • Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
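For flavour only, here is a minimal Airflow DAG of the extract-transform-load shape this role describes. The DAG id, schedule, and task bodies are placeholders, not Slintel's actual pipeline.

```python
# Illustrative Airflow 2.x DAG: extract from a vendor source, transform,
# then load to the warehouse. Task bodies are left as stubs.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    ...  # e.g. pull third-party vendor data via an API client


def transform():
    ...  # e.g. clean and reshape with pandas or Spark


def load():
    ...  # e.g. write Parquet to S3 or load into Redshift


with DAG(
    dag_id="vendor_ingest_daily",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```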
Job posted by
Prajakta Kulkarni

Senior Data Engineer

at Innovaccer

Founded 2014  •  Products & Services  •  100-1000 employees  •  Profitable
ETL
SQL
Data Warehouse (DWH)
Informatica
Datawarehousing
ELT
SSIS
Noida, Bengaluru (Bangalore), Pune, Hyderabad
4 - 7 yrs
₹4L - ₹16L / yr

We are looking for a Senior Data Engineer to join the Customer Innovation team, who will be responsible for acquiring, transforming, and integrating customer data onto our Data Activation Platform from customers’ clinical, claims, and other data sources. You will work closely with customers to build data and analytics solutions to support their business needs, and be the engine that powers the partnership that we build with them by delivering high-fidelity data assets.

In this role, you will work closely with our Product Managers, Data Scientists, and Software Engineers to build the solution architecture that will support customer objectives. You'll work with some of the brightest minds in the industry, work with one of the richest healthcare data sets in the world, use cutting-edge technology, and see your efforts affect products and people on a regular basis. The ideal candidate is someone that

  • Has healthcare experience and is passionate about helping heal people,
  • Loves working with data,
  • Has an obsessive focus on data quality,
  • Is comfortable with ambiguity and making decisions based on available data and reasonable assumptions,
  • Has strong data interrogation and analysis skills,
  • Defaults to written communication and delivers clean documentation, and,
  • Enjoys working with customers and problem solving for them.

A day in the life at Innovaccer:

  • Define the end-to-end solution architecture for projects by mapping customers’ business and technical requirements against the suite of Innovaccer products and Solutions.
  • Measure and communicate impact to our customers.
  • Enable customers to activate data themselves using SQL, BI tools, or APIs, so they can answer their own questions at speed.

What You Need:

  • 4+ years of experience in a Data Engineering role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
  • 4+ years of experience working with relational databases like Snowflake, Redshift, or Postgres.
  • Intermediate to advanced level SQL programming skills.
  • Data analytics and visualization skills (using tools like Power BI).
  • The ability to engage with both the business and technical teams of a client - to document and explain technical problems or concepts in a clear and concise way.
  • Ability to work in a fast-paced and agile environment.
  • Easily adapt and learn new things whether it’s a new library, framework, process, or visual design concept.

What we offer:

  • Industry certifications: We want you to be a subject matter expert in what you do. So, whether it’s our product or our domain, we’ll help you dive in and get certified.
  • Quarterly rewards and recognition programs: We foster learning and encourage people to take risks. We recognize and reward your hard work.
  • Health benefits: We cover health insurance for you and your loved ones.
  • Sabbatical policy: We encourage people to take time off and rejuvenate, learn new skills, and pursue their interests so they can generate new ideas with Innovaccer.
  • Pet-friendly office and open floor plan: No boring cubicles.
Job posted by
Jyoti Kaushik

Data Scientist

at Kwalee

Founded 2011  •  Product  •  100-500 employees  •  Profitable
Data Science
R Programming
Python
Bengaluru (Bangalore)
3 - 15 yrs
Best in industry

Kwalee is one of the world’s leading multiplatform game publishers and developers, with well over 750 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Traffic Cop 3D and Makeover Studio 3D. Alongside this, we also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope and Die by the Blade. 

With a team of talented people collaborating daily between our studios in Leamington Spa, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, the Philippines and many more places, we have a truly global team making games for a global audience. And it’s paying off: Kwalee games have been downloaded in every country on earth! If you think you’re a good fit for one of our remote vacancies, we want to hear from you wherever you are based.

Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters for many years, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts. Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle. Could your idea be the next global hit?

What’s the job?

As a Data Scientist you will help utilise masses of data generated by Kwalee players all over the world to solve complex problems using cutting edge techniques.   

What you tell your friends you do 

"My models optimise the performance of Kwalee games and advertising every day!”

What you will really be doing 

  • Building intelligent systems which generate value from the data which our players and marketing activities produce.
  • Leveraging statistical modelling and machine learning techniques to perform automated decision making on a large scale (see the sketch after this list).
  • Developing complex, multi-faceted and highly valuable data products which fuel the growth of Kwalee and our games.
  • Owning and managing data science projects from concept to deployment.
  • Collaborating with key stakeholders across the company to develop new products and avenues of research.
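As a rough sketch of the kind of automated decision-making described above, here is a tiny classifier that scores "players" on invented features so a downstream system could act on the prediction. The data, feature names, and model choice are assumptions made only for illustration.

```python
# Toy example: train a classifier on synthetic "player" features and
# report held-out accuracy. Real features and targets would come from
# game and marketing data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))  # e.g. session length, level, spend, ad views
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```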

How you will be doing this

  • You’ll be part of an agile, multidisciplinary and creative team and work closely with them to ensure the best results.
  • You'll think creatively, be motivated by challenges, and constantly strive for the best.
  • You’ll work with cutting-edge technology; if you need software or hardware to get the job done efficiently, you will get it. We even have a robot!

Team

Our talented team is our signature. We have a highly creative atmosphere with more than 200 staff where you’ll have the opportunity to contribute daily to important decisions. You’ll work within an extremely experienced, passionate and diverse team, including David Darling and the creator of the Micro Machines video games.

Skills and Requirements

  • A degree in a numerically focussed discipline such as Maths, Physics, Economics, Chemistry, Engineering, or Biological Sciences.
  • A record of outstanding contribution to data science projects.
  • Experience using Python for data analysis and visualisation.
  • A good understanding of a deep learning framework such as Tensorflow.
  • Experience manipulating data in SQL and/or NoSQL databases

We offer

  • We want everyone involved in our games to share our success; that’s why we have a generous team profit-sharing scheme from day 1 of employment
  • In addition to a competitive salary, we also offer private medical cover and life assurance
  • Creative Wednesdays! (Design and make your own games every Wednesday)
  • 20 days of paid holidays plus bank holidays 
  • Hybrid model available depending on the department and the role
  • Relocation support available 
  • Great work-life balance with flexible working hours
  • Quarterly team building days - work hard, play hard!
  • Monthly employee awards
  • Free snacks, fruit and drinks

Our philosophy

We firmly believe in creativity and innovation and that a fundamental requirement for a successful and happy company is having the right mix of individuals. With the right people in the right environment anything and everything is possible.

Kwalee makes games to bring people, their stories, and their interests together. As an employer, we’re dedicated to making sure that everyone can thrive within our team by welcoming and supporting people of all ages, races, colours, beliefs, sexual orientations, genders and circumstances. With the inclusion of diverse voices in our teams, we bring plenty to the table that’s fresh, fun and exciting; it makes for a better environment and helps us to create better games for everyone! This is how we move forward as a company – because these voices are the difference that make all the difference.

Job posted by
Michael Hoppitt

Data Warehousing Engineer - Big Data/ETL

at Marktine

Founded 2014  •  Products & Services  •  20-100 employees  •  Bootstrapped
Big Data
ETL
PySpark
SSIS
Microsoft Windows Azure
Data Warehouse (DWH)
Python
Amazon Web Services (AWS)
Informatica
Remote, Bengaluru (Bangalore)
3 - 10 yrs
₹5L - ₹15L / yr

Must Have Skills:

- Solid Knowledge on DWH, ETL and Big Data Concepts

- Excellent SQL Skills (With knowledge of SQL Analytics Functions)

- Working experience with any ETL tool, e.g. SSIS / Informatica

- Working Experience on any Azure or AWS Big Data Tools.

- Experience on Implementing Data Jobs (Batch / Real time Streaming)

- Excellent written and verbal communication skills in English; self-motivated with a strong sense of ownership and ready to learn new tools and technologies

Preferred Skills:

- Experience on Py-Spark / Spark SQL

- AWS Data Tools (AWS Glue, AWS Athena)

- Azure Data Tools (Azure Databricks, Azure Data Factory)

Other Skills:

- Knowledge about Azure Blob, Azure File Storage, AWS S3, Elastic Search / Redis Search

- Knowledge on domain/function (across pricing, promotions and assortment).

- Implementation experience with schema and data validator frameworks (Python / Java / SQL)

- Knowledge on DQS and MDM.

Key Responsibilities:

- Independently work on ETL / DWH / Big data Projects

- Gather and process raw data at scale.

- Design and develop data applications using selected tools and frameworks as required and requested.

- Read, extract, transform, stage and load data to selected tools and frameworks as required and requested (a minimal sketch follows this list).

- Perform tasks such as writing scripts, web scraping, calling APIs, writing SQL queries, etc.

- Work closely with the engineering team to integrate your work into our production systems.

- Process unstructured data into a form suitable for analysis.

- Analyse processed data.

- Support business decisions with ad hoc analysis as needed.

- Monitor data performance and modify infrastructure as needed.
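As a minimal illustration of the read-transform-load flow above (using the PySpark skill listed under preferred skills), here is a short sketch. The paths, column names, and storage locations are assumptions, not Marktine's or a client's actual setup.

```python
# Illustrative PySpark job: read raw JSON, apply basic cleaning, and
# write partitioned Parquet to a curated zone. All paths are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Read raw data landed in cloud storage
orders = spark.read.json("s3a://raw-zone/orders/")

# Transform: de-duplicate, derive a date column, drop bad rows
clean = (
    orders
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("created_at"))
    .filter(F.col("amount") > 0)
)

# Load: write partitioned Parquet to the curated zone
clean.write.mode("overwrite").partitionBy("order_date").parquet("s3a://curated-zone/orders/")
```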

Responsibility: A smart resource with excellent communication skills.

 

 
Job posted by
Vishal Sharma

Data architect

at AES Technologies India pvt

Founded 1998  •  Products & Services  •  100-1000 employees  •  Raised funding
Python
Windows Azure
Java
Big Data
Scala
Dubai
2 - 4 yrs
Best in industry

As a Data Engineer, your role will encompass: 

  • Designing and building production data pipelines from ingestion to consumption within a hybrid big data architecture using Scala, Python, Talend etc.
  • Gathering and addressing technical and design requirements.
  • Refactoring existing applications to optimize their performance by setting the appropriate architecture and integrating best practices and standards.
  • Participating in the entire data life-cycle, mainly focusing on coding, debugging, and testing.
  • Troubleshooting and debugging ETL pipelines.
  • Documenting each process.

Technical Requirements: - 

  • BSc degree in Computer Science/Computer Engineering. (Masters is a plus.) 
  • 2+ years of experience as a Data Engineer. 
  • In-depth understanding of core ETL concepts, Data Modelling, Data Lineage, Data Governance, Data Catalog, etc. 
  • 2+ years of work experience in Scala, Python, Java. 
  • Good knowledge of Big Data tools such as Spark/HDFS/Hive/Flume, etc.
  • Hands-on experience with ETL tools like Talend/Informatica is a plus.
  • Good knowledge of Kafka and Spark Streaming is a big plus.
  • 2+ years of experience in using Azure cloud and its resources/services (like Azure Data Factory, Azure Databricks, SQL Synapse, Azure DevOps, Logic Apps, Power BI, Azure Event Hubs, etc.).
  • Strong experience in Relational Databases (MySQL, SQL Server).
  • Exposure to data visualization tools like Power BI / Qlik Sense / MicroStrategy.
  • 2+ years of experience in developing APIs (REST & SOAP protocols). 
  • Strong knowledge in Continuous Integration & Continuous Deployment (CI/CD) utilizing Docker containers, Jenkins, etc. 
  • Strong competencies in algorithms and software architecture. 
  • Excellent analytical and teamwork skills. 

 Good to have: - 

  • Previous on-prem working experience is a plus. 
  • In-depth understanding of the entire web development process (design, development, and deployment) 
  • Previous experience in automated testing including unit testing & UI testing. 

 

Job posted by
Ragavendra G

Sr. Database Engineer

at Technology service company

Agency job
via Jobdost
Relational Database (RDBMS)
NOSQL Databases
NOSQL
Performance tuning
SQL
PostgreSQL
MongoDB
DynamoDB
Object Oriented Programming (OOPs)
Domain-driven design
Cloud Computing
Oracle
Data Analytics
Data modeling
Database Design
Remote only
5 - 10 yrs
₹10L - ₹20L / yr

Preferred Education & Experience:

  • Bachelor’s or master’s degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics or a related technical field, or equivalent practical experience. At least 3 years of relevant experience in lieu of the above if you are from a different stream of education.

  • Well-versed in and 5+ years of hands-on demonstrable experience with:
    ▪ Data Analysis & Data Modeling
    ▪ Database Design & Implementation
    ▪ Database Performance Tuning & Optimization
    ▪ PL/pgSQL & SQL

  • 5+ years of hands-on development experience in Relational Database (PostgreSQL/SQL Server/Oracle).

  • 5+ years of hands-on development experience in SQL, PL/PgSQL, including stored procedures, functions, triggers, and views.

  • Demonstrable hands-on working experience with database design principles, SQL query optimization techniques, index management, integrity checks, statistics, and isolation levels.

  • Demonstrable hands-on working experience with database read and write performance tuning and optimization.

  • Knowledge of and working experience with Domain Driven Design (DDD) concepts, Object Oriented Programming (OOPS) concepts, cloud architecture concepts, and NoSQL database concepts are added values.

  • Knowledge of and working experience in the Oil & Gas, Financial, and Automotive domains is a plus.

  • Hands-on development experience in one or more NoSQL data stores such as Cassandra, HBase, MongoDB, DynamoDB, Elastic Search, Neo4J, etc. is a plus.

Job posted by
Riya Roy

Data Engineer

at Integral Ad Science

Java
Hadoop
Apache Spark
Scala
Python
SQL
Data architecture
data pipeline
Pune
5 - 8 yrs
₹9L - ₹25L / yr
  • 6+ years of recent hands-on Java development
  • Developing data pipelines in AWS or Google Cloud
  • Java, Python, JavaScript programming languages
  • Great understanding of designing for performance, scalability, and reliability of data intensive application
  • Hadoop MapReduce, Spark, Pig. Understanding of database fundamentals and advanced SQL knowledge.
  • In-depth understanding of object oriented programming concepts and design patterns
  • Ability to communicate clearly to technical and non-technical audiences, verbally and in writing
  • Understanding of full software development life cycle, agile development and continuous integration
  • Experience in Agile methodologies including Scrum and Kanban
Job posted by
Prashma S R

Data Scientist

at upGrad

Founded 2015  •  Product  •  100-500 employees  •  Raised funding
Data Science
R Programming
Python
SQL
Natural Language Processing (NLP)
Machine Learning (ML)
Tableau
Bengaluru (Bangalore), Mumbai
4 - 6 yrs
₹10L - ₹21L / yr

About Us

upGrad is an online education platform building the careers of tomorrow by offering the most industry-relevant programs in an immersive learning experience. Our mission is to create a new digital-first learning experience to deliver tangible career impact to individuals at scale. upGrad currently offers programs in Data Science, Machine Learning, Product Management, Digital Marketing, and Entrepreneurship, etc. upGrad is looking for people passionate about management and education to help design learning programs for working professionals to stay sharp and stay relevant and help build the careers of tomorrow.

  • upGrad was awarded the Best Tech for Education by IAMAI for 2018-19

  • upGrad was also ranked as one of the LinkedIn Top Startups 2018: The 25 most sought-after startups in India

  • upGrad was earlier selected as one of the top ten most innovative companies in India by FastCompany.

  • We were also covered by the Financial Times along with other disruptors in Ed-Tech

  • upGrad is the official education partner for Government of India - Startup India program

  • Our program with IIIT B has been ranked #1 program in the country in the domain of Artificial Intelligence and Machine Learning

     

Role Summary

Are you excited by the challenge and the opportunity of applying data-science and data-analytics techniques to the fast-developing education technology domain? Do you look forward to the sense of ownership and achievement that comes with innovating and creating data products from scratch and pushing them live into production systems? Do you want to work with a team of highly motivated members who are on a mission to empower individuals through education?

If this is you, come join us and become a part of the upGrad technology team. At upGrad the technology team enables all the facets of the business - whether it’s bringing efficiency to our marketing and sales initiatives, enhancing our student learning experience, empowering our content, delivery and student success teams, or aiding our students towards their desired career outcomes. We play the part of bringing together data & tech to solve these business problems and opportunities at hand.

We are looking for a highly skilled, experienced and passionate data scientist who can come on board and help create the next generation of data-powered education tech products. The ideal candidate is someone who has worked in a Data Science role before, is comfortable working with unknowns, can evaluate the data and the feasibility of applying scientific techniques to business problems and products, and has a track record of developing and deploying data-science models into live applications. Someone with a strong math, stats and data-science background, comfortable handling data (structured and unstructured), with strong engineering know-how to implement and support such data products in a production environment.

Ours is a highly iterative and fast-paced environment, so being flexible, communicating well and attention to detail are very important too. The ideal candidate should be passionate about customer impact and comfortable working with multiple stakeholders across the company.


    Roles & Responsibilities

      • 3+ years of experience in analytics, data science, machine learning or comparable role
      • Bachelor's degree in Computer Science, Data Science/Data Analytics, Math/Statistics or related discipline 
      • Experience in building and deploying Machine Learning models in Production systems
      • Strong analytical skills: ability to make sense out of a variety of data and its relation/applicability to the business problem or opportunity at hand
      • Strong programming skills: comfortable with Python - pandas, numpy, scipy, matplotlib; Databases - SQL and noSQL
      • Strong communication skills: ability to both formulate/understand the business problem at hand as well as ability to discuss with non data-science background stakeholders 
      • Comfortable dealing with ambiguity and competing objectives

       

      Skills Required

      • Experience in Text Analytics, Natural Language Processing

      • Advanced degree in Data Science/Data Analytics or Math/Statistics

      • Comfortable with data-visualization tools and techniques

      • Knowledge of AWS and Data Warehousing

      • Passion for building data-products for Production systems - a strong desire to impact

        the product through data-science technique

Job posted by
Priyanka Muralidharan