Location: Bengaluru (Bangalore)
Experience: 3 - 15 years
Salary: ₹12L – ₹35L

Skills

Data Science
R Programming
Python

Job description

About Kwalee

We develop and publish mobile games that millions love. With global hits and 450 million+ downloads, we're the UK's biggest hypercasual games company!

Founded: 2011
Type: Product
Size: 51-250 employees


Similar jobs

Data Engineer

Founded 2015
via Qrata
Location: Bengaluru (Bangalore)
Experience: 4 - 6 years
Salary: Best in industry (₹30L – ₹45L)

The company is the leader in capturing technographics-powered buying intent, helping companies uncover the 3% of active buyers in their target market. It evaluates over 100 billion data points and analyzes factors such as buyer journeys, technology adoption patterns, and other digital footprints to deliver market and sales intelligence. Its customers have access to the buying patterns and contact information of more than 17 million companies and 70 million decision makers across the world.

Role: Data Engineer

Responsibilities
- Work in collaboration with the application and integration teams to design, create, and maintain optimal data pipeline architecture and data structures for the Data Lake/Data Warehouse.
- Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs.
- Assemble large, complex data sets from third-party vendors to meet business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technology.
- Streamline existing reporting and analysis solutions and introduce enhanced ones that leverage complex data sources derived from multiple internal systems.

Requirements
- 5+ years of experience in a Data Engineer role.
- Proficiency in Linux.
- Must have SQL knowledge and experience working with relational databases and query authoring, as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena.
- Must have experience with Python/Scala.
- Must have experience with Big Data technologies like Apache Spark.
- Must have experience with Apache Airflow (a minimal pipeline sketch follows this listing).
- Experience with data pipeline and ETL tools like AWS Glue.
- Experience working with AWS cloud services: EC2, S3, RDS, and Redshift.
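Purely as an illustration of the Apache Airflow requirement above (this sketch is not part of the posting), here is a minimal daily extract-transform-load DAG; the DAG id, task names, and helper functions are hypothetical placeholders.

```python
# Minimal Airflow DAG sketch: a daily extract -> transform -> load pipeline.
# The DAG id, task names, and helper functions are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull raw records from a vendor API or an S3 drop.
    return [{"company_id": 1, "signal": "visited_pricing_page"}]


def transform(**context):
    # Placeholder: clean and enrich the extracted records.
    rows = context["ti"].xcom_pull(task_ids="extract")
    return [{**r, "processed_at": datetime.utcnow().isoformat()} for r in rows]


def load(**context):
    # Placeholder: write the transformed rows to the warehouse (e.g. Redshift).
    rows = context["ti"].xcom_pull(task_ids="transform")
    print(f"would load {len(rows)} rows")


with DAG(
    dag_id="daily_buyer_intent_etl",  # hypothetical name
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```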

Job posted by: Prajakta Kulkarni

Big Data Architect

Founded 2016
Location: Bengaluru (Bangalore), Hyderabad, Pune
Experience: 9 - 16 years
Salary: Best in industry (₹7L – ₹32L)

Greetings. We have an urgent requirement for the post of Big Data Architect at a reputed MNC.

Location: Pune/Nagpur, Goa, Hyderabad/Bangalore

Job requirements:
- 9+ years of total experience, preferably in the big data space.
- Experience creating Spark applications in Scala to process data.
- Experience scheduling and troubleshooting/debugging Spark jobs in steps.
- Experience in Spark job performance tuning and optimization.
- Experience processing data using Kafka/Python.
- Experience and understanding in configuring Kafka topics to optimize performance.
- Proficiency in writing SQL queries to process data in a data warehouse.
- Hands-on experience with Linux commands to troubleshoot/debug issues and with creating shell scripts to automate tasks.
- Experience with AWS services like EMR.

Job posted by: Haina Khan

Business Analyst

Founded 2019
Location: Mumbai, Bengaluru (Bangalore)
Experience: 1 - 5 years
Salary: Best in industry (₹15L – ₹25L)

Responsibilities:
- Exploratory analysis: fetch data from systems and analyze trends.
- Develop customer segmentation models to improve the efficiency of marketing and product campaigns (a toy segmentation sketch follows this listing).
- Establish mechanisms for cross-functional teams to consume customer insights and improve engagement along the customer life cycle.
- Gather requirements for dashboards from business, marketing, and operations stakeholders.
- Prepare internal reports for executive leadership and support their decision making.
- Analyse data, derive insights, and embed them into business actions.
- Work with cross-functional teams.

Skills required:
- Data analytics visionary.
- Strong in SQL and Excel; experience in Tableau is good to have.
- Experience in the field of data analysis and data visualization.
- Strong in analysing data and creating dashboards.
- Strong in communication, presentation, and business intelligence.
- Multi-dimensional "growth hacker" skill set with a strong sense of ownership for work.
- An aggressive, "take no prisoners" approach.
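As a hedged illustration of the customer-segmentation work described above (not taken from the posting), the sketch below shows a toy quantile-based RFM-style segmentation in pandas; the column names, sample orders, and segment labels are assumptions.

```python
# Toy RFM-style customer segmentation sketch with pandas.
# Column names, sample orders, and segment labels are assumptions for illustration.
import pandas as pd

orders = pd.DataFrame(
    {
        "customer_id": [1, 1, 2, 3, 3, 3],
        "order_date": pd.to_datetime(
            ["2021-01-05", "2021-03-01", "2021-02-10", "2021-01-20", "2021-02-25", "2021-03-15"]
        ),
        "amount": [500, 700, 1200, 300, 450, 600],
    }
)

snapshot = orders["order_date"].max() + pd.Timedelta(days=1)

rfm = orders.groupby("customer_id").agg(
    recency=("order_date", lambda s: (snapshot - s.max()).days),
    frequency=("order_date", "count"),
    monetary=("amount", "sum"),
)

# Bucket recency and frequency into two quantiles each (toy data; real data would use more buckets).
rfm["r_score"] = pd.qcut(rfm["recency"], 2, labels=[2, 1]).astype(int)    # more recent -> higher score
rfm["f_score"] = pd.qcut(rfm["frequency"], 2, labels=[1, 2]).astype(int)  # more orders -> higher score

rfm["segment"] = (rfm["r_score"] + rfm["f_score"]).map(
    {4: "champions", 3: "promising", 2: "at risk"}
)
print(rfm)
```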

Job posted by: Sidharth Maholia

Data Engineer

Founded 2015
Location: Bengaluru (Bangalore)
Experience: 2 - 5 years
Salary: Best in industry (₹7L – ₹12L)

Primary responsibilities:
- Develop and maintain applications with PySpark.
- Contribute to the overall design and architecture of the applications developed and deployed.
- Performance tuning: executor sizing and other environment parameters, code optimization, partition tuning, etc.
- Interact with business users to understand requirements and troubleshoot issues.
- Implement projects based on functional specifications.

Must-have skills:
- Good experience in PySpark, including DataFrame core functions and Spark SQL (a short sketch follows this listing).
- Good customer communication.
- Good analytical skills.
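Since the must-have skills call out PySpark DataFrame core functions and Spark SQL, here is a brief, purely illustrative sketch (not part of the posting) showing the same aggregation written both ways; the session name, columns, and sample rows are invented.

```python
# Minimal PySpark sketch: DataFrame API and Spark SQL over the same data.
# The session name, columns, and sample rows are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pyspark_sketch").getOrCreate()

orders = spark.createDataFrame(
    [("A", "books", 120.0), ("B", "books", 80.0), ("A", "games", 200.0)],
    ["customer", "category", "amount"],
)

# DataFrame API: total spend per customer.
totals_df = (
    orders.groupBy("customer")
    .agg(F.sum("amount").alias("total_amount"))
    .orderBy(F.desc("total_amount"))
)
totals_df.show()

# Spark SQL: the same aggregation expressed as a query.
orders.createOrReplaceTempView("orders")
totals_sql = spark.sql(
    "SELECT customer, SUM(amount) AS total_amount "
    "FROM orders GROUP BY customer ORDER BY total_amount DESC"
)
totals_sql.show()

spark.stop()
```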

Job posted by: Geeti Gaurav Mohanty

Data Scientist

Founded 2018
Location: Noida, NCR (Delhi | Gurgaon | Noida)
Experience: 4 - 10 years
Salary: Best in industry (₹18L – ₹24L)

Job description:
- Help build a Data Science team that will research, design, implement, and deploy full-stack, scalable data analytics and machine learning solutions to address various business issues.
- Model complex algorithms, discover insights, and identify business opportunities through the use of algorithmic, statistical, visualization, and mining techniques.
- Translate business requirements into quick prototypes and enable the development of big data capabilities that drive business outcomes.
- Take responsibility for data governance and define data collection and collation guidelines.
- Advise, guide, and train junior data engineers in their jobs.

Must have:
- 4+ years of experience in a leadership role as a Data Scientist.
- Preferably from the retail, manufacturing, or healthcare industry (not mandatory).
- Willing to work from scratch and build up a team of Data Scientists.
- Open to taking up challenges with end-to-end ownership.
- Confident, with excellent communication skills, and a good decision maker.

Job posted by: Jyotsna Econolytics

Data Scientist (Banking Domain Mandatory)

Founded 2015
Location: Bengaluru (Bangalore)
Experience: 4 - 7 years
Salary: Best in industry (₹10L – ₹20L)

Work days: Sunday through Thursday
Work shift: Day time

- Strong problem-solving skills with an emphasis on product development.
- Experience using statistical computer languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
- Experience building ML pipelines with Apache Spark and Python.
- Proficiency in implementing the end-to-end data science life cycle.
- Experience with model fine-tuning and advanced grid search techniques.
- Experience working with and creating data architectures.
- Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
- Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and their proper usage, etc.) and experience with applications.
- Excellent written and verbal communication skills for coordinating across teams.
- A drive to learn and master new technologies and techniques.
- Assess the effectiveness and accuracy of new data sources and data-gathering techniques.
- Develop custom data models and algorithms to apply to data sets.
- Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting, and other business outcomes.
- Develop the company A/B testing framework and test model quality.
- Coordinate with different functional teams to implement models and monitor outcomes.
- Develop processes and tools to monitor and analyze model performance and data accuracy.

Key skills:
- Strong knowledge of data science pipelines with Python.
- Object-oriented programming.
- A/B testing frameworks and model fine-tuning (a grid-search sketch follows this listing).
- Proficiency with the scikit-learn, NumPy, and pandas packages in Python.

Nice to have:
- Ability to work with containerized solutions: Docker/Compose/Swarm/Kubernetes.
- Unit testing and test-driven development practice.
- DevOps, continuous integration/continuous deployment experience.
- Agile development environment experience, familiarity with Scrum.
- Deep learning knowledge.
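As a hedged illustration of the scikit-learn, pandas/NumPy, and grid-search fine-tuning skills listed above (not part of the posting), the sketch below tunes a small pipeline with GridSearchCV on synthetic data; the model choice and parameter grid are assumptions.

```python
# Minimal model fine-tuning sketch: scikit-learn pipeline + grid search.
# The dataset is synthetic and the parameter grid is an assumption for illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

pipeline = Pipeline(
    [
        ("scaler", StandardScaler()),
        ("clf", LogisticRegression(max_iter=1000)),
    ]
)

param_grid = {"clf__C": [0.01, 0.1, 1.0, 10.0]}  # hypothetical search space

search = GridSearchCV(pipeline, param_grid, cv=5, scoring="roc_auc")
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("held-out ROC AUC:", search.score(X_test, y_test))
```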

Job posted by: Priyanka U

Data Engineer (Remote/ Bengaluru)

Founded 2016
Location: Remote, Bengaluru (Bangalore)
Experience: 2 - 10 years
Salary: Best in industry (₹2L – ₹15L)

Job title: Data Engineer (Remote)

You will work on:
We help many of our clients make sense of their large investments in data, whether building analytics solutions or machine learning applications. You will work on cutting-edge cloud-native technologies to crunch terabytes of data into meaningful insights.

What you will do (responsibilities):
- Collaborate with the Business, Marketing, and CRM teams to build highly efficient data pipelines.
- Deal with customer data and build highly efficient pipelines.
- Build insights dashboards.
- Troubleshoot data loss, data inconsistency, and other data-related issues.
- Maintain backend services (written in Golang) for metadata generation.
- Provide prompt support and solutions for Product, CRM, and Marketing partners.

What you bring (skills):
- 2+ years of experience in data engineering.
- Coding experience with one of the following languages: Golang, Java, Python, C++.
- Fluency in SQL.
- Working experience with at least one of the following data-processing engines: Flink, Spark, Hadoop, Hive.

Great if you know (skills):
- T-shaped skills are always preferred, so the passion to work across the full stack spectrum is more than welcome.
- Exposure to infrastructure skills like Docker, Istio, and Kubernetes is a plus.
- Experience building and maintaining large-scale and/or real-time complex data-processing pipelines using Flink, Hadoop, Hive, Storm, etc.

Advantage Cognologix:
- A higher degree of autonomy, startup culture, and small teams.
- Opportunities to become an expert in emerging technologies.
- Remote working options for the right maturity level.
- Competitive salary and family benefits.
- Performance-based career advancement.

About Cognologix:
Cognologix helps companies disrupt by reimagining their business models and innovating like a startup. We are at the forefront of digital disruption and take a business-first approach to help meet our clients' strategic goals. We are a data-focused organization helping our clients deliver their next generation of products in the most efficient, modern, and cloud-native way.

Skills: Java, Python, Hadoop, Hive, Spark programming, Kafka

Thanks & regards,
Cognologix HR Dept.

Job posted by: Rupa Kadam

Jr. Data Engineer

Founded 2013
Location: Bengaluru (Bangalore)
Experience: 1 - 3 years
Salary: Best in industry (₹10L – ₹12L)

- Proficient in R and Python.
- 1+ years of work experience, with at least 6 months working with Python.
- Prior experience building ML models.
- Prior experience with SQL.
- Knowledge of statistical techniques.
- Experience working with spatial data is an added advantage.

Job posted by: Ravish E

Technical Architect

Founded 2015
Location: Bengaluru (Bangalore)
Experience: 8 - 15 years
Salary: Best in industry (₹15L – ₹30L)

Role and responsibilities:
- Build a low-latency serving layer that powers DataWeave's dashboards, reports, and analytics functionality.
- Build robust RESTful APIs that serve data and insights to DataWeave and other products (an illustrative API sketch follows this listing).
- Design user interaction workflows on our products and integrate them with data APIs.
- Help stabilize and scale our existing systems; help design the next-generation systems.
- Scale our back-end data and analytics pipeline to handle increasingly large amounts of data.
- Work closely with the Head of Products and UX designers to understand the product vision and design philosophy.
- Lead, or be part of, all major tech decisions; bring in best practices.
- Mentor younger team members and interns.
- Constantly think scale, think automation; measure everything, optimize proactively.
- Be a tech thought leader; add passion and vibrance to the team; push the envelope.

Skills and requirements:
- 8-15 years of experience building and scaling APIs and web applications.
- Experience building and managing large-scale data/analytics systems.
- A strong grasp of CS fundamentals and excellent problem-solving abilities.
- A good understanding of software design principles and architectural best practices.
- A passion for writing code and experience coding in multiple languages, including at least one scripting language, preferably Python.
- The ability to argue convincingly why feature X of language Y rocks/sucks, or why a certain design decision is right/wrong, and so on.
- A self-starter who thrives in fast-paced environments with minimal 'management'.
- Experience working with multiple storage and indexing technologies such as MySQL, Redis, MongoDB, Cassandra, and Elastic.
- Good knowledge (including internals) of messaging systems such as Kafka and RabbitMQ.
- Use the command line like a pro; be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Exposure to one or more centralized logging, monitoring, and instrumentation tools, such as Kibana, Graylog, StatsD, Datadog, etc.
- Working knowledge of building websites and apps, and a good understanding of integration complexities and dependencies.
- Working knowledge of Linux server administration as well as the AWS ecosystem is desirable.
- It's a huge bonus if you have personal projects (including open-source contributions) that you work on during your spare time; show off some of the projects you have hosted on GitHub.
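The posting asks for RESTful APIs that serve data and insights, preferably in Python, but does not name a framework; as one hedged, illustrative possibility (not the posting's own stack), here is a minimal read-only endpoint using Flask 2.0+. The route, payload, and in-memory data store are invented.

```python
# Minimal RESTful API sketch with Flask: one read-only reports endpoint.
# The framework choice, route, and in-memory "data store" are assumptions for illustration.
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Stand-in for a real low-latency store (e.g. Redis or MySQL behind a cache).
REPORTS = {
    "pricing-summary": {"products_tracked": 1250, "avg_price_change_pct": -1.8},
    "assortment-gaps": {"missing_skus": 42, "competitors_compared": 5},
}


@app.get("/api/v1/reports/<report_id>")
def get_report(report_id: str):
    report = REPORTS.get(report_id)
    if report is None:
        abort(404, description="unknown report")
    return jsonify({"report_id": report_id, "data": report})


if __name__ == "__main__":
    app.run(port=8080, debug=True)
```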

Job posted by: BasavRaj P S

Enthusiastic Cloud-ML Engineers with a keen sense of curiosity

Founded 2012
Location: Bengaluru (Bangalore)
Experience: 3 - 12 years
Salary: Best in industry (₹3L – ₹25L)

We are a start-up in India seeking excellence in everything we do, with an unwavering curiosity and enthusiasm. We build a simplified, new-age, AI-driven Big Data Analytics platform for global enterprises and solve their biggest business challenges. Our engineers develop fresh, intuitive solutions, keeping the user at the center of everything.

As a Cloud-ML Engineer, you will design and implement ML solutions for customer use cases and problem-solve complex technical customer challenges.

Expectations and tasks:
- A total of 7+ years of experience, with a minimum of 2 years in Hadoop technologies like HDFS, Hive, and MapReduce.
- Experience working with recommendation engines, data pipelines, or distributed machine learning, and experience with data analytics and data visualization techniques and software (a toy recommendation sketch follows this listing).
- Experience with core data science techniques such as regression, classification, or clustering, and experience with deep learning frameworks.
- Experience in NLP, R, and Python.
- Experience in performance tuning and optimization techniques to process big data from heterogeneous sources.
- Ability to communicate clearly and concisely across technology and business teams.
- Excellent problem-solving and technical troubleshooting skills.
- Ability to handle multiple projects and prioritize tasks in a rapidly changing environment.

Technical skills: Core Java, multithreading, collections, OOPS, Python, R, Apache Spark, MapReduce, Hive, HDFS, Hadoop, MongoDB, Scala.

We are a retained search firm employed by our client, a technology start-up in Bangalore. Interested candidates can share their resumes with me at Jia@TalentSculpt.com; I will respond within 24 hours. Online assessments and pre-employment screening are part of the selection process.
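As a toy illustration of the recommendation-engine experience mentioned above (not from the posting), the sketch below does item-based collaborative filtering with NumPy cosine similarity on an invented ratings matrix.

```python
# Toy item-based collaborative filtering sketch with NumPy cosine similarity.
# The ratings matrix (users x items) and item names are invented for illustration.
import numpy as np

items = ["action_game", "puzzle_game", "racing_game", "strategy_game"]
ratings = np.array(
    [
        [5.0, 0.0, 4.0, 0.0],   # user 0
        [4.0, 1.0, 5.0, 0.0],   # user 1
        [0.0, 5.0, 0.0, 4.0],   # user 2
        [1.0, 4.0, 0.0, 5.0],   # user 3
    ]
)

# Item-item cosine similarity (columns are items).
norms = np.linalg.norm(ratings, axis=0)
similarity = (ratings.T @ ratings) / np.outer(norms, norms)


def recommend(user_idx: int, top_k: int = 2) -> list[str]:
    """Score unrated items by similarity-weighted sums of the user's ratings."""
    user_ratings = ratings[user_idx]
    scores = similarity @ user_ratings
    scores[user_ratings > 0] = -np.inf  # exclude items the user already rated
    best = np.argsort(scores)[::-1][:top_k]
    return [items[i] for i in best]


print(recommend(0))  # suggests items similar to what user 0 liked
```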

Job posted by: Blitzkrieg HR Consulting