Data Engineer

at Big revolution in the e-gaming industry. (GK1)

Agency job
Bengaluru (Bangalore)
2 - 3 yrs
₹15L - ₹20L / yr
Full time
Skills
Python
Scala
Hadoop
Spark
Data Engineer
Kafka
Luigi
Airflow
Nosql
  • We are looking for a Data Engineer to build the next-generation mobile applications for our world-class fintech product.
  • The candidate will be responsible for expanding and optimising our data and data pipeline architecture, as well as optimising data flow and collection for cross-functional teams.
  • The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimising data systems and building them from the ground up.
  • Looking for a person with a strong ability to analyse data and provide valuable insights to the product and business teams to solve daily business problems.
  • You should be able to work in a high-volume environment and have outstanding planning and organisational skills.

 

Qualifications for Data Engineer

 

  • Working knowledge of SQL, including query authoring, along with experience with relational databases and familiarity with a variety of database systems.
  • Experience building and optimising ‘big data’ data pipelines, architectures, and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytical skills for working with unstructured datasets, and the ability to build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Looking for a candidate with 2-3 years of experience in a Data Engineer role who is a CS graduate or has equivalent experience.

 

What we're looking for

 

  • Experience with big data tools: Hadoop, Spark, Kafka or similar.
  • Experience with relational SQL and NoSQL databases, including MySQL/Postgres and MongoDB.
  • Experience with data pipeline and workflow management tools such as Luigi and Airflow (a minimal Airflow sketch follows this list).
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
  • Experience with stream-processing systems: Storm, Spark Streaming.
  • Experience with object-oriented/functional scripting languages: Python, Java, Scala.
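
To make the workflow-management requirement concrete, here is a minimal sketch of an Airflow DAG. It assumes Airflow 2.x, and the task names and logic are placeholders for illustration only, not part of the role description.

```python
# A minimal sketch of a daily ETL DAG (assumes Airflow 2.x; task logic is a placeholder).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("extract raw events from the source system")  # placeholder


def transform():
    print("clean and reshape the extracted data")  # placeholder


def load():
    print("load the transformed data into the warehouse")  # placeholder


with DAG(
    dag_id="example_etl",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```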

Similar jobs

Data Science - Manager

at Vahak

Founded 2016  •  Product  •  20-100 employees  •  Raised funding
Data Science
Data Analytics
R Programming
Python
SQL
PowerBI
Tableau
Bengaluru (Bangalore)
5 - 12 yrs
₹20L - ₹40L / yr

Who Are We?

 

Vahak (https://www.vahak.in) is India’s largest & most trusted online transport marketplace & directory for road transport businesses and individual commercial vehicle (Trucks, Trailers, Containers, Hyva, LCVs) owners for online truck and load booking, transport business branding and transport business network expansion. Lorry owners can find intercity and intracity loads from all over India and connect with other businesses to find trusted transporters and best deals in the Indian logistics services market. With the Vahak app, users can book loads and lorries from a live transport marketplace with over 7 Lakh + Transporters and Lorry owners in over 10,000+ locations for daily transport requirements.

Vahak has raised a capital of $5+ Million in a Pre-Series A round from RTP Global along with participation from Luxor Capital and Leo Capital. The other marquee angel investors include Kunal Shah, Founder and CEO, CRED; Jitendra Gupta, Founder and CEO, Jupiter; Vidit Aatrey and Sanjeev Barnwal, Co-founders, Meesho; Mohd Farid, Co-founder, Sharechat; Amrish Rau, CEO, Pine Labs; Harsimarbir Singh, Co-founder, Pristyn Care; Rohit and Kunal Bahl, Co-founders, Snapdeal; and Ravish Naresh, Co-founder and CEO, Khatabook.


 

Manager Data Science:

We at Vahak are looking for an enthusiastic and passionate Manager of Data Science to join our young and diverse team. You will play a key role in the data science group, working with different teams and identifying the use cases that could be solved by the application of data science techniques.

Our goal as a group is to drive powerful, big data analytics products with scalable results. We love people who are humble and collaborative, with a hunger for excellence.


Responsibilities:

  • Mine and analyze end-to-end business data and generate actionable insights. Work will involve analyzing customer transaction data, marketing campaign performance, identifying process bottlenecks, business performance analysis, etc.
  • Identify data-driven opportunities to drive optimization and improvement of product development, marketing techniques and business strategies.
  • Collaborate with Product and Growth teams to test and learn at an unprecedented pace and help the team achieve substantial upside in key metrics.
  • Actively participate in the OKR process and help the team democratize the key KPIs and metrics that drive various objectives.
  • Be comfortable with digital marketing campaign concepts and the use of marketing campaign platforms such as Google AdWords and Facebook Ads.
  • Be responsible for the design of algorithms that require different advanced analytics techniques and heuristics to work together.
  • Create dashboards and visualizations from scratch and present data in a logical manner to all stakeholders.
  • Collaborate with internal teams to create actionable items based on analysis; work with the datasets to conduct complex quantitative analysis and help drive innovation for our customers.

Requirements:

  • Bachelor's or Master's degree in Engineering, Science, Maths, Economics or other quantitative fields. An MBA is a plus but not required.
  • 5+ years of proven experience working in the Data Science field, preferably in e-commerce/web-based or consumer technology companies.
  • Thorough understanding of the implementation and analysis of product and marketing metrics at scale.
  • Strong problem-solving skills with an emphasis on product development.
  • Fluency in statistical computing languages like SQL, Python and R, as well as a deep understanding of statistical analysis, experiment design and common pitfalls of data analysis (a small A/B-test sketch follows this list).
  • Should have worked with a relational database like Oracle or MySQL; experience with Big Data systems like BigQuery or Redshift is a definite plus.
  • Experience with business intelligence tools (e.g. Tableau, Power BI) is an added advantage (not mandatory).
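
As a small illustration of the experiment-design skills above, here is a sketch of a two-sample proportion test for an A/B experiment. It assumes the statsmodels library, and all numbers are invented.

```python
# A sketch of a two-sample proportion test for an A/B experiment
# (assumes statsmodels; the conversion counts below are invented).
from statsmodels.stats.proportion import proportions_ztest

conversions = [480, 530]    # conversions in control (A) and variant (B)
samples = [10_000, 10_000]  # users exposed to each variant

z_stat, p_value = proportions_ztest(count=conversions, nobs=samples)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("The difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected; keep collecting data.")
```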

 

Job posted by
Vahak Talent

Junior Server Support Programmer

at Kwalee

Founded 2011  •  Product  •  100-500 employees  •  Profitable
SQL
NOSQL Databases
Cassandra
Python
Bengaluru (Bangalore)
0 - 5 yrs
Best in industry

Kwalee is one of the world’s leading multiplatform game publishers and developers, with well over 750 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Traffic Cop 3D and Makeover Studio 3D. Alongside this, we also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope and Die by the Blade. 

With a team of talented people collaborating daily between our studios in Leamington Spa, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, the Philippines and many more places, we have a truly global team making games for a global audience. And it’s paying off: Kwalee games have been downloaded in every country on earth! If you think you’re a good fit for one of our remote vacancies, we want to hear from you wherever you are based.

Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters for many years, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts. Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle. Could your idea be the next global hit?

What’s the job?

As a Junior Server Support Programmer you are the vital link between the server team and the other teams: you could save hours of manual data entry with a script, help raise ongoing issues requiring new server features, and assist in automating anything data-driven.

You love all things server but also like helping others. The server team has many channels of communication with other teams, from people confirming processes with us to others needing data migrations; your role would be to manage these requests alongside building backend features.

While you're fixing issues and helping others, you are also learning about the code base and architecture, gaining the knowledge to work on features later on.

What you tell your friends you do

“I’m the hero of the day, when actually I only needed to fix one line of code”

 
What you will really be doing

  • Contributing to the design and architecture of our servers, built in Python with Flask/FastAPI and deployed on AWS EC2 servers and Lambda pipelines (a minimal FastAPI sketch follows this list).

  • Implementing server features to robustly support millions of mobile game players.

  • You'll be working with worldwide, multi-cluster Couchbase and Elasticsearch databases, as well as Redis, Redshift and Postgres.

  • Interfacing with other teams on any server-related issues affecting them.
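
For illustration, here is a minimal sketch of a FastAPI endpoint of the kind described above; the route, model fields and in-memory store are placeholders, not Kwalee's actual backend.

```python
# A minimal FastAPI endpoint sketch (route, fields and the in-memory
# store are placeholders standing in for a real database).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class PlayerScore(BaseModel):
    player_id: str
    score: int


SCORES: dict[str, int] = {}  # toy stand-in for Couchbase/Redis


@app.post("/scores")
def submit_score(payload: PlayerScore) -> dict:
    # Keep the highest score seen for each player.
    best = max(SCORES.get(payload.player_id, 0), payload.score)
    SCORES[payload.player_id] = best
    return {"player_id": payload.player_id, "best_score": best}
```

Run locally with `uvicorn main:app --reload` (assuming the file is named main.py).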


How you will be doing this

  • You’ll be part of an agile, multidisciplinary and creative team and work closely with them to ensure the best results.

  • You'll think creatively, be motivated by challenges and constantly strive for the best.

  • You'll work with cutting-edge technology; if you need software or hardware to get the job done efficiently, you will get it. We even have a robot!


Team

Our talented team is our signature. We have a highly creative atmosphere with more than 200 staff where you’ll have the opportunity to contribute daily to important decisions. You’ll work within an extremely experienced, passionate and diverse team, including David Darling and the creator of the Micro Machines video games.


Skills and Requirements

  • Good knowledge of writing Python code for web backends.

  • The ability to write quick scripts to accelerate manual tasks (a short example follows this list).

  • Knowledge of SQL or NoSQL databases (MySQL, Postgres, Couchbase, MongoDB, Cassandra, Memcached, etc.) could be useful but is not mandatory.

  • Knowledge of Unix, Linux or equivalent development environments.

  • Some experience in game development would be a plus, although it’s not necessary.

  • Excellent communication skills.
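
As an example of the kind of quick script that replaces manual data entry, here is a small sketch that normalises a CSV export and writes it out as JSON; the file and field names are invented for illustration.

```python
# A quick-script sketch: normalise a CSV export and write it as JSON
# (file names and fields are invented for illustration).
import csv
import json
from pathlib import Path

INPUT = Path("player_fixes.csv")
OUTPUT = Path("player_fixes.json")

rows = []
with INPUT.open(newline="") as f:
    for row in csv.DictReader(f):
        # Normalise a couple of fields instead of editing them by hand.
        row["player_id"] = row["player_id"].strip().lower()
        row["coins"] = int(row["coins"])
        rows.append(row)

OUTPUT.write_text(json.dumps(rows, indent=2))
print(f"Converted {len(rows)} rows from {INPUT} to {OUTPUT}")
```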


We offer

  • We want everyone involved in our games to share our success; that's why we have a generous team profit-sharing scheme from day 1 of employment.

  • In addition to a competitive salary, we also offer private medical cover and life assurance.

  • Creative Wednesdays! (Design and make your own games every Wednesday)

  • 20 days of paid holidays plus bank holidays 

  • Hybrid model available depending on the department and the role

  • Relocation support available 

  • Great work-life balance with flexible working hours

  • Quarterly team building days - work hard, play hard!

  • Monthly employee awards

  • Free snacks, fruit and drinks


Our philosophy

We firmly believe in creativity and innovation and that a fundamental requirement for a successful and happy company is having the right mix of individuals. With the right people in the right environment anything and everything is possible.

Kwalee makes games to bring people, their stories, and their interests together. As an employer, we’re dedicated to making sure that everyone can thrive within our team by welcoming and supporting people of all ages, races, colours, beliefs, sexual orientations, genders and circumstances. With the inclusion of diverse voices in our teams, we bring plenty to the table that’s fresh, fun and exciting; it makes for a better environment and helps us to create better games for everyone! This is how we move forward as a company – because these voices are the difference that make all the difference.

Job posted by
Michael Hoppitt

Generation Engineer

at IndArka Energy Pvt Ltd

Founded 2020  •  Product  •  20-100 employees  •  Raised funding
Python
Data Science
Bengaluru (Bangalore)
1 - 3 yrs
₹10L - ₹14L / yr

We are looking for a Python developer who has a passion for driving more solar and clean energy in the world by working with us. The software helps anyone understand how much solar could be put up on a rooftop and calculates how many units of clean energy the solar PV system would generate, along with how much the homeowner would save. This is a crucial step in helping educate people who want to go solar but aren't completely convinced of solar's value proposition. If you are interested in bringing the latest technologies to the fast-growing solar industry and want to help society transition to a more sustainable future, we would love to hear from you!
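
As a rough illustration of the kind of calculation such a product performs, here is a back-of-the-envelope sketch of annual generation and savings; all constants (sun hours, performance ratio, tariff) are assumed values, not the company's actual model.

```python
# A back-of-the-envelope sketch of annual solar generation and savings
# (all constants are assumed values, not the company's actual model).
def estimate_annual_savings(system_kw: float,
                            sun_hours_per_day: float = 4.5,
                            performance_ratio: float = 0.8,
                            tariff_per_kwh: float = 7.0) -> tuple[float, float]:
    """Return (annual generation in kWh, annual savings in currency units)."""
    annual_kwh = system_kw * sun_hours_per_day * 365 * performance_ratio
    return annual_kwh, annual_kwh * tariff_per_kwh


units, savings = estimate_annual_savings(system_kw=3.0)
print(f"~{units:.0f} kWh/year, saving roughly {savings:.0f} per year")
```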

 

You will -

- Be an early employee at a growing startup and help shape the team culture

- Safeguard code quality on your team, reviewing others' code with an eye to performance and maintainability

- Be trusted to take point on complex product initiatives

- Work in an ownership-driven, micromanagement-free environment

 

You should have:

- Strong programming fundamentals (if you don't officially have a CS degree but know programming, that's fine with us!)

- A strong problem-solving attitude.

- Experience with solar or electrical modelling is a plus, although not required.

Job posted by
Ruchika Prakash

Data Architect

at Hypersonix Inc

Founded 2018  •  Product  •  100-500 employees  •  Profitable
Big Data
Data Warehouse (DWH)
Apache Kafka
Spark
Hadoop
Data engineering
Artificial Intelligence (AI)
Machine Learning (ML)
Data Structures
Data modeling
Data wrangling
Data integration
Data-driven testing
Database performance tuning
Apache Storm
Python
Scala
SQL
Amazon Web Services (AWS)
SQL Azure
Kafka
Databricks
Flink
Druid
Airflow
Luigi
NiFi
Talend
Bengaluru (Bangalore)
10 - 15 yrs
₹15L - ₹20L / yr
Hypersonix.ai is disrupting the Business Intelligence and Analytics space with AI, ML and NLP capabilities to drive specific business insights with a conversational user experience. Hypersonix.ai has been built from the ground up with new-age technology to simplify the consumption of data for our customers in Restaurants, Hospitality and other industry verticals.

Hypersonix.ai is seeking a Data Evangelist who can work closely with customers to understand the data sources, acquire data and drive product success by delivering insights based on customer needs.

Primary Responsibilities :

- Lead and deliver complete application lifecycle design, development, deployment, and support for actionable BI and Advanced Analytics solutions

- Design and develop data models and ETL process for structured and unstructured data that is distributed across multiple Cloud platforms

- Develop and deliver solutions with data streaming capabilities for a large volume of data

- Design, code and maintain parts of the product and drive customer adoption

- Build data acquisition strategy to onboard customer data with speed and accuracy

- Working both independently and with team members to develop, refine, implement, and scale ETL processes

- On-going support and maintenance of live-clients for their data and analytics needs

- Defining the data automation architecture to drive self-service data load capabilities

Required Qualifications :

- Bachelor's/Master's/Ph.D. in Computer Science, Information Systems, Data Science, Artificial Intelligence, Machine Learning or related disciplines

- 10+ years of experience guiding the development and implementation of Data architecture in structured, unstructured, and semi-structured data environments.

- Highly proficient in Big Data, data architecture, data modeling, data warehousing, data wrangling, data integration, data testing and application performance tuning

- Experience with data engineering tools and platforms such as Kafka, Spark, Databricks, Flink, Storm, Druid and Hadoop

- Strong hands-on programming and scripting skills for the Big Data ecosystem (Python, Scala, Spark, etc.)

- Experience building batch and streaming ETL data pipelines using workflow management tools like Airflow, Luigi, NiFi, Talend, etc. (a streaming-pipeline sketch follows this section)

- Familiarity with cloud-based platforms like AWS, Azure or GCP

- Experience with cloud data warehouses like Redshift and Snowflake

- Proficient in writing complex SQL queries.

- Excellent communication skills and prior experience of working closely with customers

- Data-savvy; loves understanding large data trends and is obsessed with data analysis

- Desire to learn about, explore, and invent new tools for solving real-world problems using data
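
To illustrate the streaming-pipeline experience above, here is a minimal sketch of a Spark Structured Streaming job that reads events from Kafka and writes windowed counts to the console; the broker address, topic and sink are placeholders, not Hypersonix's stack.

```python
# A minimal Spark Structured Streaming sketch: read events from Kafka and
# emit windowed counts (broker, topic and sink are placeholders).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming_etl_sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
    .selectExpr("CAST(value AS STRING) AS raw", "timestamp")
)

# Count events per 5-minute window as a stand-in for a real transformation.
counts = (
    events.withWatermark("timestamp", "10 minutes")
    .groupBy(F.window("timestamp", "5 minutes"))
    .count()
)

query = (
    counts.writeStream.outputMode("update")
    .format("console")  # swap for a warehouse or lake sink in practice
    .start()
)
query.awaitTermination()
```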

Desired Qualifications :

- Cloud computing experience, Amazon Web Services (AWS)

- Prior experience in Data Warehousing concepts, multi-dimensional data models

- Full command of Analytics concepts including Dimension, KPI, Reports & Dashboards

- Prior experience in managing client implementation of Analytics projects

- Knowledge and prior experience of using machine learning tools
Job posted by
Gowshini Maheswaran

Senior Software Engineer

at Hiring for a leading client

Agency job
via Jobaaj.com
Big Data
Apache Kafka
Business Intelligence (BI)
Data Warehouse (DWH)
Coding
Hadoop
Apache Impala
Spark
Python
CI/CD
Git
Tableau
Qlikview
Amazon Web Services (AWS)
Java
Apache Oozie
Data engineering
Databases
Software Development
Airflow
New Delhi
3 - 5 yrs
₹10L - ₹15L / yr
Job Description:
Senior Software Engineer - Data Team

We are seeking a highly motivated Senior Software Engineer with hands-on experience to build scalable, extensible data solutions, identify and address performance bottlenecks, collaborate with other team members, and implement best practices for data engineering. Our engineering process is fully agile and has a really fast release cycle, which keeps our environment very energetic and fun.

What you'll do:

Design and development of scalable applications.
Work with Product Management teams to get maximum value out of existing data.
Contribute to continual improvement by suggesting improvements to the software system.
Ensure high scalability and performance
You will advocate for good, clean, well-documented and performant code; follow standards and best practices.
We'd love for you to have:

Education: Bachelor's/Master's degree in Computer Science.
Experience: 3-5 years of relevant experience in BI/DW with hands-on coding experience.

Mandatory Skills

Strong in problem-solving
Strong experience with Big Data technologies: Hive, Hadoop, Impala, HBase, Kafka, Spark
Strong experience with orchestration frameworks like Apache Oozie and Airflow
Strong data engineering experience
Strong experience with database and data warehousing technologies, and the ability to understand complex designs and system architecture
Experience with the full software development lifecycle, design, develop, review, debug, document, and deliver (especially in a multi-location organization)
Good knowledge of Java
Desired Skills

Experience with Python
Experience with reporting tools like Tableau, QlikView
Experience with Git and CI/CD pipelines
Awareness of cloud platforms, e.g. AWS
Excellent communication skills with team members, Business owners, across teams
Be able to work in a challenging, dynamic environment and meet tight deadlines
Job posted by
Saksham Agarwal

Data Analyst

at Extramarks Education India Pvt Ltd

Founded 2007  •  Product  •  1000-5000 employees  •  Profitable
MySQL
Python
Noida
1 - 3 yrs
₹7L - ₹10L / yr

Data Analyst, preferably with SQL, Advanced Excel and Python experience, and with experience providing deep insights from live data across various products.

Required Experience

  • 3+ years of relevant technical experience in a data analyst role
  • Intermediate/expert skills with SQL and basic statistics
  • Experience with advanced SQL
  • Good Python programming skills
  • Strong problem-solving and structuring skills
  • Experience automating connections to various data sources and representing the data through dashboards (a small reporting sketch follows this list)
  • Excellent with numbers and able to communicate data points through various reports/templates
  • Ability to communicate effectively both within and outside the Data Analytics team
  • Proactively take up work responsibilities and handle ad-hoc requests as and when needed
  • Ability and desire to take ownership of and initiative for analysis, from requirements clarification to deliverable
  • Strong technical communication skills, both written and verbal
  • Ability to understand and articulate the "big picture" and simplify complex ideas
  • Ability to identify and learn applicable new techniques independently as needed
  • Must have worked with various databases (relational and non-relational) and ETL processes
  • Must have experience handling large volumes of data and adhering to optimization and performance standards
  • Should have the ability to analyse and provide relationship views of the data from different angles
  • Must have excellent communication skills (written and oral)
  • Knowledge of Data Science is an added advantage
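
A small sketch of the kind of reporting automation mentioned above: pull live data from MySQL into pandas and write a daily summary for a dashboard. The connection string, table and column names are placeholders.

```python
# A sketch of reporting automation: pull live data from MySQL into pandas
# and write a daily summary (connection string, table and columns are placeholders).
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://user:password@host/analytics")

df = pd.read_sql("SELECT event_date, product, revenue FROM daily_sales", engine)

# Aggregate revenue per product per day for the reporting layer.
summary = df.groupby(["event_date", "product"], as_index=False)["revenue"].sum()
summary.to_csv("daily_revenue_summary.csv", index=False)
print(summary.head())
```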

Required Skills:

MySQL, Python, Advanced Excel, Tableau, reporting and dashboards, MS Office, analytical skills

Job posted by
Vidhi Solanki
Data Engineer

at Rakuten

Big Data
Spark
Hadoop
Apache Kafka
Apache Hive
Scala
Apache Sqoop
Cassandra
NOSQL Databases
Remote, Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹38L / yr

Company Overview:

Rakuten, Inc. (TSE first section: 4755) is the largest e-commerce company in Japan and the third-largest e-commerce marketplace company worldwide. Rakuten provides a variety of consumer- and business-focused services including e-commerce, e-reading, travel, banking, securities, credit cards, e-money, portal and media, online marketing and professional sports. The company is expanding globally and currently has operations throughout Asia, Western Europe and the Americas. Founded in 1997, Rakuten is headquartered in Tokyo, with over 17,000 employees and partner staff worldwide. Rakuten's 2018 revenues were 1,101.48 billion yen. In Japanese, Rakuten stands for 'optimism'. It means we believe in the future: an understanding that, with the right mindset, we can make the future better by what we do today. Today, our 70+ businesses span e-commerce, digital content, communications and FinTech, bringing the joy of discovery to more than 1.2 billion members across the world.


Website: https://www.rakuten.com/

Crunchbase: Rakuten has raised a total of $42.4M in funding over 2 rounds

Company size: 10,001+ employees

Founded: 1997

Headquarters: Tokyo, Japan

Work location: Bangalore (M.G. Road)


Please find below Job Description.


Role Description – Data Engineer for AN group (Location - India)

 

Key responsibilities include:

 

We are looking for an engineering candidate for our Autonomous Networking team. The ideal candidate must have the following abilities:

 

  • Hands-on experience in big data computation technologies (at least one and potentially several of the following: Spark and Spark Streaming, Hadoop, Storm, Kafka Streams, Flink, etc.); a small Kafka consumer sketch follows this list.
  • Familiarity with other related big data technologies, such as big data storage technologies (e.g. Phoenix/HBase, Redshift, Presto/Athena, Hive, Spark SQL, Bigtable, BigQuery, ClickHouse, etc.), messaging layers (Kafka, Kinesis, etc.), cloud and container-based deployments (Docker, Kubernetes, etc.), Scala, Akka, Socket.IO, Elasticsearch, RabbitMQ, Redis, Couchbase, Java and Go.
  • Partner with product management and delivery teams to align and prioritize current and future new product development initiatives in support of our business objectives.
  • Work with cross-functional engineering teams including QA, Platform Delivery and DevOps.
  • Evaluate current-state solutions to identify areas to improve standards, simplify, and enhance functionality and/or transition to effective solutions to improve supportability and time to market.
  • Not afraid of refactoring existing systems and guiding the team through the same.
  • Experience with event-driven architecture and complex event processing.
  • Extensive experience building and owning large-scale distributed backend systems.
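
A minimal sketch of a Kafka consumer in an event-driven pipeline, as referenced above; it assumes the kafka-python client, and the topic and broker address are placeholders.

```python
# A minimal Kafka consumer sketch (assumes the kafka-python client;
# topic and broker address are placeholders).
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "network-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Hand each event to downstream processing (stubbed out here).
    print(f"partition={message.partition} offset={message.offset} event={event}")
```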
Job posted by
RAKESH RANJAN

Data Analyst- Biglittle.ai

at Codejudge

Founded 2019  •  Product  •  20-100 employees  •  Raised funding
SQL
Python
Data architecture
Data mining
Data Analytics
Bengaluru (Bangalore)
3 - 7 yrs
₹20L - ₹25L / yr
Job description
  • The ideal candidate is adept at using large data sets to find opportunities for product and process optimization and using models to test the effectiveness of different courses of action.
  • Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.
  • Assess the effectiveness and accuracy of new data sources and data gathering techniques.
  • Develop custom data models and algorithms to apply to data sets.
  • Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting and other business outcomes.
  • Develop company A/B testing framework and test model quality.
  • Develop processes and tools to monitor and analyze model performance and data accuracy.

Roles & Responsibilities

  • Experience using statistical languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
  • Experience working with and creating data architectures.
  • Looking for someone with 3-7 years of experience manipulating data sets and building statistical models
  • A Bachelor's or Master's degree in Computer Science or another quantitative field
  • Knowledge and experience in statistical and data mining techniques: GLM/regression, random forests, boosting, trees, text mining, social network analysis, etc.
  • Experience querying databases and using statistical computer languages: R, Python, SQL, etc.
  • Experience creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc. (see the sketch after this list)
  • Experience with distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, Gurobi, MySQL, etc.
  • Experience visualizing/presenting data for stakeholders using: Periscope, Business Objects, D3, ggplot, etc.
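
As a small illustration of the modelling work listed above, here is a sketch that fits and evaluates a random-forest classifier with scikit-learn on synthetic data; none of it is specific to Codejudge's stack.

```python
# A sketch of fitting and evaluating a classifier with scikit-learn
# on synthetic data (not specific to any company dataset).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
print(f"Test AUC: {roc_auc_score(y_test, probs):.3f}")
```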
Job posted by
Vaishnavi M

Data Engineer

at Fast paced Startup

Big Data
Data engineering
Hadoop
Spark
Apache Hive
Data engineer
Google Cloud Platform (GCP)
Scala
Python
Airflow
BigQuery
Pune
3 - 6 yrs
₹15L - ₹22L / yr

Years of Exp: 3-6+ Years
Skills: Scala, Python, Hive, Airflow, Spark

Languages: Java, Python, Shell Scripting

GCP: Bigtable, Dataproc, BigQuery, GCS, Pub/Sub

OR
AWS: Athena, Glue, EMR, S3, Redshift

MongoDB, MySQL, Kafka

Platforms: Cloudera / Hortonworks
AdTech domain experience is a plus.
Job Type - Full Time 

Job posted by
Kavita Singh

Consulting Staff Engineer - Machine Learning

at Thinkdeeply

Founded 2014  •  Products & Services  •  20-100 employees  •  Raised funding
Machine Learning (ML)
R Programming
TensorFlow
Deep Learning
Python
Natural Language Processing (NLP)
PyTorch
Hyderabad
5 - 15 yrs
₹5L - ₹35L / yr

Job Description

Want to make every line of code count? Tired of being a small cog in a big machine? Like a fast-paced environment where stuff gets DONE? Wanna grow with a fast-growing company (both career and compensation)? Like to wear different hats? Join ThinkDeeply in our mission to create and apply Enterprise-Grade AI for all types of applications.

 

Seeking an ML Engineer with a high aptitude for development. Will also consider coders with a high aptitude in ML. Years of experience are important, but we are also looking for interest and aptitude. As part of the early engineering team, you will have a chance to make a measurable impact on the future of Thinkdeeply, as well as have a significant amount of responsibility.

 

Experience

10+ Years

 

Location

Bozeman/Hyderabad

 

Skills

Required Skills:

Bachelor's/Master's or Ph.D. in Computer Science, or related industry experience

3+ years of Industry Experience in Deep Learning Frameworks in PyTorch or TensorFlow

7+ Years of industry experience in scripting languages such as Python, R.

7+ years in software development doing at least some level of Researching / POCs, Prototyping, Productizing, Process improvement, Large-data processing / performance computing

Familiarity with non-neural-network methods such as Bayesian methods, SVMs, AdaBoost, random forests, etc.

Some experience in setting up large scale training data pipelines.

Some experience in using Cloud services such as AWS, GCP, Azure

Desired Skills:

Experience in building deep learning models for Computer Vision and Natural Language Processing domains

Experience in productionizing/serving machine learning in an industry setting

Understand the principles of developing cloud native applications

 

Responsibilities

 

Collect, Organize and Process data pipelines for developing ML models

Research and develop novel prototypes for customers

Train, implement and evaluate shippable machine learning models (a minimal training-loop sketch follows these responsibilities)

Deploy and iterate improvements of ML Models through feedback
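
For illustration, here is a minimal PyTorch training loop on synthetic data, standing in for the model-training work described above; the architecture and data are placeholders.

```python
# A minimal PyTorch training-loop sketch on synthetic data
# (architecture and data are placeholders, not a real model).
import torch
from torch import nn

# Synthetic dataset: 1,000 samples, 10 features, binary labels.
X = torch.randn(1_000, 10)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(X)
    loss = loss_fn(logits, y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.4f}")
```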

Job posted by
Aditya Kanchiraju