Technical Architect

at E-Commerce Product Based Company

Agency job
Bengaluru (Bangalore)
8 - 15 yrs
₹15L - ₹30L / yr
Full time
Skills
Technical Architecture
Big Data
IT Solutioning
Python
Rest API

Role and Responsibilities

  • Build a low latency serving layer that powers DataWeave's Dashboards, Reports, and Analytics functionality
  • Build robust RESTful APIs that serve data and insights to DataWeave and other products
  • Design user interaction workflows on our products and integrate them with data APIs
  • Help stabilize and scale our existing systems. Help design the next generation systems.
  • Scale our back end data and analytics pipeline to handle increasingly large amounts of data.
  • Work closely with the Head of Products and UX designers to understand the product vision and design philosophy
  • Lead/be a part of all major tech decisions. Bring in best practices. Mentor younger team members and interns.
  • Constantly think scale, think automation. Measure everything. Optimize proactively.
  • Be a tech thought leader. Add passion and vibrance to the team. Push the envelope.
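The API-building responsibilities above can be sketched as a minimal JSON-serving endpoint using only Python's standard-library WSGI interface. The route and metric names here are invented for illustration; they are not DataWeave's actual API.

```python
import json

# Illustrative in-memory store standing in for the analytics backend.
DASHBOARD_METRICS = {"products_tracked": 1200, "price_changes_today": 87}

def app(environ, start_response):
    """A minimal WSGI app serving dashboard metrics as JSON."""
    if environ.get("PATH_INFO") == "/api/v1/dashboard":
        body = json.dumps(DASHBOARD_METRICS).encode("utf-8")
        status = "200 OK"
    else:
        body = json.dumps({"error": "not found"}).encode("utf-8")
        status = "404 Not Found"
    start_response(status, [("Content-Type", "application/json"),
                            ("Content-Length", str(len(body)))])
    return [body]
```

Run locally with `wsgiref.simple_server.make_server("", 8000, app)`; a production low-latency serving layer would sit behind a real application server with caching in front.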

 

Skills and Requirements

  • 8 - 15 years of experience building and scaling APIs and web applications.
  • Experience building and managing large scale data/analytics systems.
  • Have a strong grasp of CS fundamentals and excellent problem solving abilities. Have a good understanding of software design principles and architectural best practices.
  • Be passionate about writing code and have experience coding in multiple languages, including at least one scripting language, preferably Python.
  • Be able to argue convincingly why feature X of language Y rocks/sucks, or why a certain design decision is right/wrong, and so on.
  • Be a self-starter—someone who thrives in fast paced environments with minimal ‘management’.
  • Have experience working with multiple storage and indexing technologies such as MySQL, Redis, MongoDB, Cassandra, and Elasticsearch.
  • Good knowledge (including internals) of messaging systems such as Kafka and RabbitMQ.
  • Use the command line like a pro. Be proficient in Git and other essential software development tools.
  • Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
  • Exposure to one or more centralized logging, monitoring, and instrumentation tools, such as Kibana, Graylog, StatsD, Datadog, etc.
  • Working knowledge of building websites and apps. Good understanding of integration complexities and dependencies.
  • Working knowledge of Linux server administration as well as the AWS ecosystem is desirable.
  • It's a huge bonus if you have some personal projects (including open source contributions) that you work on during your spare time. Show us some of the projects you have hosted on GitHub.
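The MapReduce model mentioned in the list above can be illustrated without a cluster. Here is a toy word count in plain Python showing the three phases (map, shuffle, reduce); a real job would of course run on Hadoop or Spark over distributed data.

```python
from collections import defaultdict

def mapreduce_word_count(documents):
    """Toy word count structured as MapReduce's three phases, run locally."""
    # Map: emit (word, 1) pairs from each document.
    mapped = [(word, 1) for doc in documents for word in doc.split()]
    # Shuffle: group emitted values by key.
    groups = defaultdict(list)
    for word, count in mapped:
        groups[word].append(count)
    # Reduce: aggregate the values for each key.
    return {word: sum(counts) for word, counts in groups.items()}
```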

Similar jobs

Data Scientist

at Aidetic

Founded 2018  •  Services  •  20-100 employees  •  Profitable
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Python
PyTorch
pandas
NumPy
TensorFlow
Natural Language Toolkit (NLTK)
recommendation algorithm
Bengaluru (Bangalore)
1 - 5 yrs
₹8L - ₹15L / yr

Title:

Data Scientist


Location:

Bengaluru, Karnataka.


About us: 

We are an emerging Artificial Intelligence-based startup catering to the needs of industries that employ cutting-edge technologies for their operations. Currently, we provide services to disruptive sectors such as drone tech, video surveillance, human-computer interaction, etc. We believe that AI has the ability to shape the future of humanity, and we aim to spearhead this transition.


About the role:

We are looking for a highly motivated data scientist with a strong algorithmic mindset and problem-solving propensity.

Since we are operating in a highly competitive market, every opportunity to increase efficiency and cut costs is critical and the candidate should have an eye for such opportunities. We are constantly innovating – working on novel hardware and software – so a high level of flexibility and celerity in learning is expected.

Responsibilities:

  • Design machine learning / deep learning models for products and client projects.
  • Create and manage data pipelines.
  • Explore new state-of-the-art (SOTA) models and data handling techniques.
  • Coordinate with software development teams to implement models and monitor outcomes.
  • Develop processes and tools to monitor and analyze model performance and data accuracy.
  • Explore promising new technologies and implement them to build great products.


Technology stack:

  • Must have:
    • Python
    • Pandas, NumPy
    • TensorFlow, PyTorch
    • scikit-learn
  • Good to have:
    • SciPy
    • OpenCV
    • spaCy, NLTK
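The must-have stack above performs model training at scale; the core of that training loop can be sketched without any of those libraries. A dependency-free gradient-descent fit of a straight line, purely for illustration:

```python
def fit_line(xs, ys, lr=0.01, epochs=2000):
    """Fit y = w*x + b by gradient descent on mean squared error."""
    w, b, n = 0.0, 0.0, len(xs)
    for _ in range(epochs):
        # Gradients of MSE with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

Frameworks like PyTorch and TensorFlow automate the gradient computation (autograd) and run it on accelerators, but the update rule is the same idea.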

What’s in it for you:

  • Opportunity to work on many new cutting-edge technologies: we promise rapid growth in your skill set through a steep learning curve.
  • Opportunity to work closely with our experienced founding members, who are experts in developing scalable, practical AI products and software architectures.
Job posted by
Vibha Atal

Big Data Engineer

at BDIPlus

Founded 2014  •  Product  •  100-500 employees  •  Profitable
Apache Hive
Spark
Scala
PySpark
Data engineering
Big Data
Hadoop
Java
Python
Remote only
2 - 6 yrs
₹6L - ₹20L / yr
We are looking for big data engineers to join our transformational consulting team serving one of our top US clients in the financial sector. You would get the opportunity to develop big data pipelines and convert business requirements into production-grade services and products. Rather than prescribing how each task must be done, we give people the opportunity to think outside the box and come up with their own innovative solutions.
You will primarily be developing, managing, and executing multiple prospect campaigns as part of the Prospect Marketing Journey to ensure the best conversion and retention rates. Below are the roles, responsibilities, and skill sets we are looking for; if these resonate with you, please get in touch with us by applying to this role.
Roles and Responsibilities:
• You'd be responsible for the development and maintenance of applications built with Enterprise Java and distributed technologies.
• You'd collaborate with developers, product managers, business analysts, and business users in conceptualizing, estimating, and developing new software applications and enhancements.
• You'd assist in the definition, development, and documentation of software objectives, business requirements, deliverables, and specifications in collaboration with multiple cross-functional teams.
• Assist in the design and implementation process for new products; research and create POCs for possible solutions.
Skillset:
• Bachelor's or Master's degree in a technology-related field preferred.
• Overall experience of 2-3 years with Big Data technologies.
• Hands-on experience with Spark (Java/Scala).
• Hands-on experience with Hive and shell scripting.
• Knowledge of HBase and Elasticsearch.
• Development experience in Java/Python is preferred.
• Familiarity with profiling, code coverage, logging, common IDEs, and other development tools.
• Demonstrated verbal and written communication skills, and ability to interface with Business, Analytics, and IT organizations.
• Ability to work effectively in a short-cycle, team-oriented environment, managing multiple priorities and tasks.
• Ability to identify non-obvious solutions to complex problems.
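The Hive work mentioned above is ultimately SQL over large datasets. The same kind of aggregation is shown below against an in-memory SQLite table, purely as a stand-in: HiveQL would express the query almost identically, but execute it over HDFS. Table and column names are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (account TEXT, amount REAL)")
conn.executemany("INSERT INTO transactions VALUES (?, ?)",
                 [("A", 100.0), ("A", 50.0), ("B", 75.0)])

# A GROUP BY aggregation of this shape is the bread and butter of Hive jobs.
rows = conn.execute(
    "SELECT account, SUM(amount) FROM transactions "
    "GROUP BY account ORDER BY account"
).fetchall()
```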
Job posted by
Puja Kumari

Data Engineer

at Bungee Tech India

Founded 2018  •  Products & Services  •  20-100 employees  •  Raised funding
Big Data
Hadoop
Apache Hive
Spark
ETL
Data engineering
Databases
Performance tuning
Remote, NCR (Delhi | Gurgaon | Noida), Chennai
5 - 10 yrs
₹10L - ₹30L / yr

Company Description

At Bungee Tech, we help retailers and brands meet customers everywhere, on every occasion they are in. We believe that accurate, high-quality data matched with compelling market insights empowers retailers and brands to keep their customers at the center of all the innovation and value they are delivering.

 

We provide retailers and brands a clear and complete omnichannel picture of their competitive landscape. We collect billions of data points every day, multiple times a day, from publicly available sources. Using high-quality extraction, we uncover detailed information on products or services, which we automatically match and then proactively track for price, promotion, and availability. Plus, anything we do not match helps to identify a new assortment opportunity.

 

Empowered with this unrivalled intelligence, we unlock compelling analytics and insights that, once blended with verified partner data from trusted sources such as Nielsen, paint a complete, consolidated picture of the competitive landscape.

We are looking for a Big Data Engineer who will work on collecting, storing, processing, and analyzing huge data sets. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them.

You will also be responsible for integrating them with the architecture used in the company.

 

We're working on the future. If you are seeking an environment where you can drive innovation, want to apply state-of-the-art software technologies to solve real-world problems, and want the satisfaction of providing visible benefit to end users in an iterative, fast-paced environment, this is your opportunity.

 

Responsibilities

As an experienced member of the team, in this role, you will:

 

  • Contribute to evolving the technical direction of analytical systems and play a critical role in their design and development

 

  • You will research, design, code, troubleshoot, and support. What you create is also what you own.

 

  • Develop the next generation of automation tools for monitoring and measuring data quality, with associated user interfaces.

 

  • Be able to broaden your technical skills and work in an environment that thrives on creativity, efficient execution, and product innovation.

 

BASIC QUALIFICATIONS

  • Bachelor’s degree or higher in an analytical area such as Computer Science, Physics, Mathematics, Statistics, Engineering or similar.
  • 5+ years of relevant professional experience in Data Engineering and Business Intelligence
  • 5+ years with advanced SQL (analytical functions), ETL, and data warehousing.
  • Strong knowledge of data warehousing concepts, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools and environments, data structures, data modeling, and performance tuning.
  • Ability to effectively communicate with both business and technical teams.
  • Excellent coding skills in Java, Python, C++, or equivalent object-oriented programming language
  • Understanding of relational and non-relational databases and basic SQL
  • Proficiency with at least one of these scripting languages: Perl / Python / Ruby / shell script
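"Advanced SQL (analytical functions)" in the qualifications above refers to window functions. A small self-contained example using Python's built-in sqlite3 module (window functions require SQLite 3.25+, bundled with modern Python); the products table is invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (category TEXT, name TEXT, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?, ?)", [
    ("shoes", "runner", 90.0), ("shoes", "loafer", 60.0),
    ("bags", "tote", 120.0),
])

# RANK() OVER is an analytical (window) function: it ranks rows
# within each category partition without collapsing them.
rows = conn.execute("""
    SELECT category, name,
           RANK() OVER (PARTITION BY category ORDER BY price DESC) AS rnk
    FROM products ORDER BY category, rnk
""").fetchall()
```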

 

PREFERRED QUALIFICATIONS

 

  • Experience with building data pipelines from application databases.
  • Experience with AWS services - S3, Redshift, Spectrum, EMR, Glue, Athena, ELK etc.
  • Experience working with Data Lakes.
  • Experience providing technical leadership and mentoring other engineers on best practices in the data engineering space
  • Sharp problem solving skills and ability to resolve ambiguous requirements
  • Experience working with Big Data
  • Knowledge of and experience working with Hive and the Hadoop ecosystem
  • Knowledge of Spark
  • Experience working with Data Science teams
Job posted by
Abigail David

Data Architect

at Amagi Media Labs

Founded 2008  •  Product  •  500-1000 employees  •  Profitable
Data architecture
Architecture
Data Architect
Architect
Java
Scala
Python
Spark
ETL
Amazon Web Services (AWS)
Chennai
15 - 18 yrs
Best in industry
Job Title: Data Architect
Job Location: Chennai
Job Summary

The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a Data Architecture strategy across various Data Lake platforms. You will help develop reference architectures and roadmaps to build highly available, scalable, and distributed data platforms using cloud-based solutions to process high-volume, high-velocity, and wide-variety structured and unstructured data. This role is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will influence how users interact with Condé Nast's industry-leading journalism.
Primary Responsibilities
The Data Architect is responsible for:
• Demonstrated technology and personal leadership experience in architecting, designing, and building highly scalable solutions and products.
• Enterprise-scale expertise in data management best practices such as data integration, data security, data warehousing, metadata management, and data quality.
• Extensive knowledge and experience in architecting modern data integration frameworks and highly scalable distributed systems using open-source and emerging data architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is highly desirable.
• Expert ability to evaluate, prototype, and recommend data solutions, vendor technologies, and platforms.
• Proven experience with relational and NoSQL databases, ELT/ETL technologies, and in-memory databases.
• Experience with DevOps, Continuous Integration, and Continuous Delivery technologies is desirable.
• This role requires 15+ years of data solution architecture, design, and development delivery experience.
• Solid experience in Agile methodologies (Kanban and Scrum).
Required Skills
• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational and NoSQL databases on the cloud.
• Proven leadership skills; demonstrated ability to mentor, influence, and partner with cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL (such as Informatica, DataStage, etc.)/ELT, and data integration technologies.
• Experience with at least one object-oriented programming language (Java, Scala, Python) and Spark.
• Creative view of markets and technologies combined with a passion to create the future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions and Data Lakes is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, and CVS.
• Understanding of Hadoop architecture and Hive SQL.
• Knowledge of at least one workflow orchestration tool.
• Understanding of the Agile framework and delivery.

Preferred Skills:
● Experience with AWS and EMR would be a plus
● Exposure to workflow orchestration tools like Airflow is a plus
● Exposure to any one NoSQL database would be a plus
● Experience with Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a plus
● Understanding of digital web events, ad streams, and context models
About Condé Nast
CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social platforms - in other words, a staggering amount of user data. Condé Nast made the right move to invest heavily in understanding this data and formed a whole new Data team entirely dedicated to data processing, engineering, analytics, and visualization. This team helps drive engagement, fuel process innovation, further content enrichment, and increase market revenue. The Data team aims to create a company culture where data is the common language and to facilitate an environment where insights shared in real time can improve performance.

The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team at Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and boost online revenue. We are broadly divided into four groups - Data Intelligence, Data Engineering, Data Science, and Operations (including Product and Marketing Ops and Client Services) - along with Data Strategy and Monetization. The teams build capabilities and products to create data-driven solutions for better audience engagement.

What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the extraordinary. We are a media company for the future, with a remarkable past. We are Condé Nast, and It Starts Here.
Job posted by
Rajesh C

Data Engineer

at MNC Company - Product Based

Agency job
via Bharat Headhunters
Data Warehouse (DWH)
Informatica
ETL
Python
Google Cloud Platform (GCP)
SQL
AIrflow
Bengaluru (Bangalore), Chennai, Hyderabad, Pune, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 9 yrs
₹10L - ₹15L / yr

Job Responsibilities

  • Design, build & test ETL processes using Python & SQL for the corporate data warehouse
  • Inform, influence, support, and execute our product decisions
  • Maintain advertising data integrity by working closely with R&D to organize and store data in a format that keeps it accurate and allows the business to quickly identify issues.
  • Evaluate and prototype new technologies in the area of data processing
  • Think quickly, communicate clearly and work collaboratively with product, data, engineering, QA and operations teams
  • High energy level, strong team player and good work ethic
  • Data analysis, understanding of business requirements and translation into logical pipelines & processes
  • Identification, analysis & resolution of production & development bugs
  • Support the release process including completing & reviewing documentation
  • Configure data mappings & transformations to orchestrate data integration & validation
  • Provide subject matter expertise
  • Document solutions, tools & processes
  • Create & support test plans with hands-on testing
  • Peer reviews of work developed by other data engineers within the team
  • Establish good working relationships & communication channels with relevant departments

 

Skills and Qualifications we look for

  • University degree 2.1 or higher (or equivalent) in a relevant subject. A Master's degree in any data subject is a strong advantage.
  • 4 - 6 years of experience with data engineering.
  • Strong coding ability and software development experience in Python.
  • Strong hands-on experience with SQL and data processing.
  • Google Cloud Platform (Cloud Composer, Dataflow, Cloud Functions, BigQuery, Cloud Storage, Dataproc).
  • Good working experience with at least one ETL tool (Airflow would be preferable).
  • Strong analytical and problem-solving skills.
  • Good-to-have skills: Apache PySpark, CircleCI, Terraform.
  • Motivated, self-directed, able to work with ambiguity, and interested in emerging technologies and agile, collaborative processes.
  • Understanding of and experience with agile/Scrum delivery methodology.
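The ETL responsibilities above follow an extract, transform, load shape. A minimal dependency-free sketch; the field names are invented for illustration, and a production pipeline would be orchestrated by Airflow and load into a warehouse such as BigQuery rather than a Python list:

```python
def extract(rows):
    """Extract: in production this would read from a source system."""
    return rows

def transform(rows):
    """Transform: normalise field names and drop incomplete records."""
    return [
        {"user_id": r["id"], "spend": round(float(r["spend"]), 2)}
        for r in rows if r.get("spend") is not None
    ]

def load(rows, warehouse):
    """Load: in production this would write to the data warehouse."""
    warehouse.extend(rows)
    return len(rows)

warehouse = []
raw = [{"id": 1, "spend": "10.5"}, {"id": 2, "spend": None}]
loaded = load(transform(extract(raw)), warehouse)
```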

 

Job posted by
Ranjini C. N

Data Analyst

at MedCords

Founded 2016  •  Product  •  20-100 employees  •  Raised funding
Data Analytics
Data Analyst
R Language
Python
Kota
0 - 1 yrs
₹1L - ₹2.5L / yr

Required: Python, R

Experience handling large-scale data engineering pipelines.
Excellent verbal and written communication skills.
Proficient in PowerPoint or other presentation tools.
Ability to work quickly and accurately on multiple projects.

Job posted by
Kavita Jain

Machine Learning Engineer

at IDfy

Founded 2011  •  Products & Services  •  100-1000 employees  •  Raised funding
Machine Learning (ML)
Python
TensorFlow
PyTorch
Scikit-Learn
Elixir
Mumbai, Pune
1 - 3 yrs
₹6L - ₹14L / yr
About the team
● The machine learning team is a self-contained team of 9 people responsible for building models and services that support key workflows for IDfy.
● Our models are gating criteria for these workflows and as such are expected to perform accurately and quickly. We use a mix of conventional and hand-crafted deep learning models.
● The team comes from diverse backgrounds and experiences. We have ex-bankers, startup founders, IIT-ians, and more.
● We work directly with business and product teams to craft solutions for our customers. We know that we are, and function as, a platform company and not a services company.

In this role, you will:
● Be working on all aspects of a production machine learning system. You will acquire data, train and build models, deploy models, build API services to expose these models, maintain them in production, and more.
● Work on performance tuning of models
● From time to time work on support and debugging of these production systems
● Work on researching the latest technology in our areas of interest and apply it to build new products and enhance the existing platform.
● Building workflows for training and production systems
● Contribute to documentation

About you

● You are an early-career machine learning engineer (or data scientist). Our ideal candidate is someone with 1-3 years of experience in data science.

Must Haves

● You have a good understanding of Python and scikit-learn, TensorFlow, or PyTorch. Our systems are built with these tools/languages and we expect a strong base in them.
● You are proficient at exploratory analysis and know which model to use in most scenarios
● You should have worked on framing and solving problems with the application of machine learning or deep learning models.
● You have some experience in building and delivering complete or part AI solutions
● You appreciate that the role of the Machine Learning engineer is not only modeling, but also building product solutions and you strive towards this.
● Enthusiasm and drive to learn and assimilate state-of-the-art research. A lot of what we are building will require innovative approaches using newly researched models and applications.
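Part of "building and delivering complete AI solutions" above is persisting a trained model and reloading it in a serving process. A minimal sketch with a stub model standing in for a real classifier; frameworks like scikit-learn, TensorFlow, and PyTorch provide their own save/load mechanisms for this step:

```python
import pickle

class ThresholdModel:
    """Stub standing in for a trained classifier."""
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, x):
        # Classify by comparing the input score against the learned threshold.
        return int(x >= self.threshold)

model = ThresholdModel(threshold=0.5)
blob = pickle.dumps(model)      # persist after training
restored = pickle.loads(blob)   # reload in the serving process
```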

Good to Have

● Knowledge of and experience in computer vision. While a large part of our work revolves around computer vision, we believe this is something you can learn on the job.
● We build our own services, hence we would want you to have some knowledge of writing APIs.
● Our stack also includes languages like Ruby, Go, and Elixir. We would love it if you know any of these or take an interest in functional programming.
● Knowledge of and experience in ML Ops and tooling would be a welcome addition. We use Docker and Kubernetes for deploying our services.
Job posted by
Rati

Backend Engineer

at Venture Highway

Founded 2015  •  Products & Services  •  0-20 employees  •  Raised funding
Python
Data engineering
Data Engineer
MySQL
MongoDB
Celery
Apache
Data modeling
RESTful APIs
Natural Language Processing (NLP)
Bengaluru (Bangalore)
2 - 6 yrs
₹10L - ₹30L / yr
- Experience with Python and data scraping.
- Experience with relational SQL & NoSQL databases including MySQL & MongoDB.
- Familiar with the basic principles of distributed computing and data modeling.
- Experience with distributed data pipeline frameworks like Celery, Apache Airflow, etc.
- Experience with NLP and NER models is a bonus.
- Experience building reusable code and libraries for future use.
- Experience building REST APIs.
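The data-scraping requirement above can be sketched with only the standard library's html.parser; production scrapers typically use requests with BeautifulSoup, or Scrapy. The sample HTML is invented:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from anchor tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag.
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

parser = LinkExtractor()
parser.feed('<p><a href="/jobs/1">Data Engineer</a> <a href="/jobs/2">ML</a></p>')
```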

Preference for candidates working in tech product companies
Job posted by
Nipun Gupta
Data Science
Data Scientist
Machine Learning (ML)
R Programming
Python
NCR (Delhi | Gurgaon | Noida)
5 - 10 yrs
₹50L - ₹60L / yr

Job Description – Sr. Data Scientist

 

Credgenics is India's first-of-its-kind NPA resolution platform, backed by credible investors including Accel Partners and Titan Capital. We work with financial institutions, banks, NBFCs, and digital lending firms to improve their collections efficiency using technology, automation intelligence, and optimal legal routes to facilitate the resolution of stressed assets. With all major banks and NBFCs as our clients, our SaaS-based collections platform helps them efficiently improve their NPAs, geographic reach, and customer experience.

 

About the Role:

We are looking for a highly skilled, experienced, and passionate Sr. Data Scientist who can come on board and help create and build a robust, scalable, and extendable platform powering our mission: reducing the exponentially growing Non-Performing Assets in the Indian economy by harnessing the power of technology and data-driven analytics. Our focus is to provide deep insights that improve collection efficiency across delinquent portfolios of ARCs, banks, and NBFCs.

 

The ideal candidate has worked in a data science role before, is comfortable working with unknowns and evaluating data and the feasibility of applying scientific techniques to business problems and products, has built platforms from scratch, and has a track record of developing and deploying data-science models into live applications.

Responsibilities:

  • Work with the CTO to build the roadmap of data science function and set up the best practices
  • Collaborate with and influence leadership to ensure data science directly impacts strategy
  • Drive an organizational effort toward a better understanding of user needs and pain points, and propose solutions that data science can provide to further this goal
  • Build and deploy Machine Learning models in production systems
  • Build platforms from scratch to solve image recognition and context-understanding problems, and improve search
  • Work with large, complex data sets. Solve difficult, non-routine analysis problems, applying advanced analytical methods as needed
  • Conduct analysis that includes data gathering and requirements specification, processing, analysis, ongoing deliverables, and presentations
  • Develop comprehensive knowledge of data structures and metrics, advocating for changes, where needed for product development
  • Interact cross-functionally, making business recommendations (e.g., cost-benefit, forecasting, experiment analysis) with effective presentations of findings at multiple levels of stakeholders through visual displays of quantitative information
  • Research and develop analysis, forecasting, and optimization methods to improve the product quality
  • Build and prototype analysis pipelines iteratively to provide insights at scale
  • Build and maintain reports, dashboards, and metrics to monitor the performance of our products
  • Develop deep partnerships with engineering and product teams to deliver on major cross-functional measurements, testing, and modelling
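The forecasting methods mentioned in the responsibilities above can be as simple as a trailing moving average. A toy sketch, purely illustrative; real forecasting work would reach for statsmodels or similar libraries:

```python
def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` points."""
    if len(series) < window:
        raise ValueError("series shorter than window")
    return sum(series[-window:]) / window
```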
Requirements and Qualifications:

  • Bachelor's or Master's degree in a technology-related field from a premier college
  • 5+ years of prior experience leading a data science team in a start-up environment
  • Experience working on unstructured data is a plus
  • Deep knowledge of designing, planning, testing, and deploying analytical solutions
  • Experience implementing advanced AI solutions using at least one scripting language (e.g. Python, R)
  • Customer oriented, responsive to changes, and able to multi-task in a fast-paced environment


Location:

This role will be based out of New Delhi

 

We offer an innovative, fast paced, and entrepreneurial work environment where you’ll be at the centre of leading change by leveraging technology and creating boundless impact in the FinTech ecosystem.

 

 

 

Job posted by
Deleted User

Machine Learning Engineers

at Ignite Solutions

Founded 2008  •  Products & Services  •  20-100 employees  •  Profitable
Machine Learning (ML)
Python
Data Science
Pune
3 - 7 yrs
₹7L - ₹15L / yr
We are looking for a Machine Learning Engineer with 3+ years of experience, a background in statistics, and hands-on experience in the Python ecosystem, using sound software engineering practices.

Skills & Knowledge:
- Formal knowledge of the fundamentals of probability & statistics, along with the ability to apply basic statistical analysis methods like hypothesis testing, t-tests, ANOVA, etc.
- Hands-on knowledge of data formats, data extraction, loading, wrangling, transformation, pre-processing, and analysis.
- Thorough understanding of data-modeling and machine-learning concepts.
- Complete understanding and ability to apply, implement, and adapt standard implementations of machine learning algorithms.
- Good understanding and ability to apply and adapt neural networks and deep learning, including common high-level deep learning architectures like CNNs and RNNs.
- Fundamentals of computer science & programming, especially data structures (like multi-dimensional arrays, trees, and graphs) and algorithms (like searching, sorting, and dynamic programming).
- Fundamentals of software engineering and system design, such as requirements analysis, REST APIs, database queries, system and library calls, version control, etc.

Languages and Libraries:
- Hands-on experience with Python and Python libraries for data analysis and machine learning, especially scikit-learn, TensorFlow, Pandas, NumPy, Statsmodels, and SciPy.
- Experience with R and its ecosystem is a plus.
- Knowledge of other open-source machine learning and data modeling frameworks like Spark MLlib, H2O, etc. is a plus.
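The hypothesis-testing fundamentals listed above can be computed from first principles. A sketch of Welch's two-sample t-statistic using only the standard library; scipy.stats.ttest_ind computes this, plus the p-value, in practice:

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for two samples with possibly unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample (n-1) variances
    # Difference of means, scaled by the combined standard error.
    return (mean(sample_a) - mean(sample_b)) / ((va / na + vb / nb) ** 0.5)
```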
Job posted by
Juzar Malubhoy