Data Scientist

at 5 years old AI Startup

Pune
2 - 6 yrs
₹12L - ₹18L / yr
Full time
Skills
Data Science
Machine Learning (ML)
Python
Natural Language Processing (NLP)
Deep Learning
  •  3+ years of experience in Machine Learning
  • Bachelor's/Master's in Computer Engineering/Science.
  • Bachelor's/Master's in Engineering/Mathematics/Statistics with sound knowledge of programming and computer concepts.
  • 10th and 12th academics: 70% and above.

Skills:
 - Strong Python/programming skills
 - Good conceptual understanding of Machine Learning/Deep Learning/Natural Language Processing
 - Strong verbal and written communication skills.
 - Should be able to manage a team, meet project deadlines and interface with clients.
 - Should be able to work across different domains, quickly ramp up on business processes & flows, and translate business problems into data solutions.


Similar jobs

Data Engineer

at Information solution provider company

Agency job
via Jobdost
Spark
Hadoop
Big Data
Data engineering
PySpark
Machine Learning (ML)
Scala
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 4 yrs
₹5L - ₹9L / yr

Data Engineer 

Responsibilities:

 

  • Designing and implementing fine-tuned, production-ready data/ML pipelines on the Hadoop platform.
  • Driving optimization, testing and tooling to improve quality.
  • Reviewing and approving high-level & detailed designs to ensure that the solution delivers on the business needs and aligns with the data & analytics architecture principles and roadmap.
  • Understanding business requirements and solution design to develop and implement solutions that adhere to big data architectural guidelines and address business requirements.
  • Following proper SDLC (Code review, sprint process).
  • Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, etc.
  • Building robust and scalable data infrastructure (both batch processing and real-time) to support needs from internal and external users.
  • Understanding various data security standards and using appropriate security tools to apply and adhere to the required data controls for user access in the Hadoop platform.
  • Supporting and contributing to development guidelines and standards for data ingestion.
  • Working with data scientists and the business analytics team to assist with data ingestion and data-related technical issues.
  • Designing and documenting the development & deployment flow.

 

Requirements:

 

  • Experience in developing REST API services using one of the Scala frameworks.
  • Ability to troubleshoot and optimize complex queries on the Spark platform.
  • Expert in building and optimizing ‘big data’ data/ML pipelines, architectures and data sets.
  • Knowledge of modelling unstructured data into structured data designs.
  • Experience in Big Data access and storage techniques.
  • Experience in doing cost estimation based on the design and development.
  • Excellent debugging skills for the technical stack mentioned above, including analyzing server logs and application logs.
  • Highly organized, self-motivated, proactive, and able to propose the best design solutions.
  • Good time management and multitasking skills to meet deadlines, working both independently and as part of a team.
Job posted by
Saida Jabbar

Jr Data Scientist

at Designing Solutions for A Better World with AI

Agency job
via Qrata
Data Science
Natural Language Processing (NLP)
Machine Learning (ML)
Deep Learning
Hyderabad
2 - 6 yrs
₹8L - ₹23L / yr
Requirements: Skills
  • A Bachelor's degree in data science, statistics, computer science, or a similar field
  • 2+ years of industry experience working in a data science role, such as statistics, machine learning, deep learning, quantitative financial analysis, data engineering or natural language processing
  • Domain experience in Financial Services (banking, insurance, risk, funds) is preferred
  • Experience producing and rapidly delivering minimum viable products; results-focused with the ability to prioritize the most impactful deliverables
  • Strong applied statistics capabilities, including an excellent understanding of Machine Learning techniques and algorithms
  • Hands-on experience, preferably in implementing scalable Machine Learning solutions using Python / Scala / Java on Azure, AWS or Google Cloud Platform
  • Experience with storage frameworks like Hadoop, Spark, Kafka etc.
  • Experience in building & deploying unsupervised, semi-supervised, and supervised models, and knowledge of various ML algorithms such as regression models, tree-based algorithms, ensemble learning techniques, distance-based ML algorithms etc.
  • Ability to track down complex data quality and data integration issues, evaluate different algorithmic approaches, and analyse data to solve problems
  • Experience in implementing parallel processing and in-memory frameworks such as H2O.ai
Job posted by
Rayal Rajan

Data Engineer

at Propellor.ai

Founded 2016  •  Products & Services  •  20-100 employees  •  Raised funding
SQL
API
Python
Spark
Remote only
2 - 5 yrs
₹5L - ₹15L / yr

Job Description - Data Engineer

About us
Propellor is aimed at bringing Marketing Analytics and other Business Workflows to the Cloud ecosystem. We work with International Clients to make their Analytics ambitions come true, by deploying the latest tech stack and data science and engineering methods, making their business data insightful and actionable. 

 

What is the role?
This team is responsible for building a Data Platform for many different units. This platform will be built on the Cloud; therefore, in this role, the individual will be organizing and orchestrating different data sources and giving recommendations on the services that fulfil the goals, based on the type of data.

Qualifications:

• Experience with Python, SQL, Spark
• Basic knowledge of JavaScript
• Knowledge of data processing, data modeling, and algorithms
• Strong in data, software, and system design patterns and architecture
• Experience building and maintaining APIs
• Strong soft skills and communication
Nice to have:
• Experience with cloud: Google Cloud Platform, AWS, Azure
• Knowledge of Google Analytics 360 and/or GA4.
Key Responsibilities
• Design and develop the platform based on a microservices architecture.
• Work on the core backend and ensure it meets the performance benchmarks.
• Work on the front end with ReactJS.
• Design and develop APIs for the front end to consume.
• Constantly improve the architecture of the application by clearing the technical backlog.
• Meet both technical and consumer needs.
• Stay abreast of developments in web applications and programming languages.

What are we looking for?
An enthusiastic individual with the following skills. Please do not hesitate to apply if you do not match all of them. We are open to promising candidates who are passionate about their work and are team players.
• Education - BE/MCA or equivalent.
• Agnostic/Polyglot with multiple tech stacks.
• Worked on open-source technologies – NodeJS, ReactJS, MySQL, NoSQL, MongoDB, DynamoDB.
• Good experience with Front-end technologies like ReactJS.
• Backend exposure – good knowledge of building API.
• Worked on serverless technologies.
• Efficient in building microservices combining server & front-end.
• Knowledge of cloud architecture.
• Should have sound working experience with relational and columnar DB.
• Should be innovative and communicative in approach.
• Will be responsible for the functional/technical track of a project.

Whom will you work with?
You will closely work with the engineering team and support the Product Team.

Hiring Process includes : 

a. Written Test on Python and SQL

b. 2 - 3 rounds of Interviews

Immediate Joiners will be preferred

Job posted by
Anila Nair

SAP ABAP
SAP HANA
RPAS
Machine Learning (ML)
Python
Javascript
Eclipse (IDE)
GitHub
Jenkins
Tableau
PowerBI
SAP
Remote, Mumbai, Bengaluru (Bangalore), Hyderabad, NCR (Delhi | Gurgaon | Noida), Pune
4.5 - 12 yrs
₹4L - ₹18L / yr
JD: SAP ABAP S/4HANA Consultant

  • Design thinking to really understand the business problem
  • Understanding new ways to deliver (agile, DT)
  • Being able to do a functional design across S/4HANA and SCP. An understanding of the possibilities around automation/RPA (which should include UIPath, Blueprism, Contextor) and how these can be identified and embedded in business processes
  • Following on from this, the same is true for AI and ML: what is available in SAP standard, how these can be enhanced/developed further, and how these technologies can be embedded in the business process. It is not enough to understand only the standard process or the AI and ML components in isolation; we will need a new type of hybrid SAP practitioner.
Job posted by
Sanjay Biswakarma

Machine Learning Engineer

at A mobile internet company

Machine Learning (ML)
Deep Learning
Data Science
Neural networks
Bengaluru (Bangalore)
5 - 8 yrs
₹30L - ₹50L / yr

Basic Qualifications:

  • Five+ years of experience working in a Big Data software development role
  • Experience managing and deploying ML models in real world environments
  • Bachelor's degree in Computer Science, Mathematics, Statistics, or other analytical fields
  • Experience working with Python, Scala, Spark or other open-source software with data science libraries
  • Experience in advanced math and statistics
  • Excellent familiarity with the command-line Linux environment
  • Able to understand various data structures and common methods in data transformation
  • Experience deploying machine learning models
  • Working knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.).
Job posted by
Chandramohan Subramanian

Data Analyst

at Wheelseye Technology India Pvt Ltd.

Founded 2017  •  Product  •  100-500 employees  •  Raised funding
Python
SQL
Microsoft Excel
MS-Excel
NCR (Delhi | Gurgaon | Noida)
1 - 5 yrs
₹8L - ₹16L / yr

About WheelsEye :
Logistics in India is a complex business - layered with multiple stakeholders, unorganized, primarily offline, and with many trivial yet deep-rooted problems. Though this industry contributes 14% to the GDP, its problems have gone unattended and ignored, until now.

WheelsEye is a logistics company, building a digital infrastructure around fleet owners. Currently, we offer solutions to empower truck fleet owners. Our proprietary software & hardware solutions help automate operations, secure fleet, save costs, improve on-time performance, and streamline their business.

 

Why WheelsEye?

  • Work on a real Indian problem of scale; impact the lives of 5.5 cr fleet owners, drivers and their families in a meaningful way
  • Different from current market players; heavily focused on and built around truck owners
  • Problem-solving and learning-oriented organization
  • Audacious goals, high speed, and action orientation
  • Opportunity to scale the organization across the country
  • Opportunity to build and execute the culture
  • Contribute to and become a part of the action plan for building the tech, finance, and service infrastructure for the logistics industry. It's tough!

Requirements:

  • Bachelor's degree with an additional 2-5 years of experience in the analytics domain
  • Experience in articulating and translating business questions and using statistical techniques to arrive at an answer using available data
  • Proficient with a scripting and/or programming language, e.g. Python, R (optional), advanced SQL; advanced knowledge of data processing, database programming and data analytics tools and techniques
  • Extensive background in data mining, modelling and statistical analysis; able to understand various data structures and common methods in data transformation, e.g. linear and logistic regression, clustering, decision trees etc.
  • Working knowledge of tools like Mixpanel, Metabase, Google Sheets, Google BigQuery & Data Studio is preferred
  • Ability to self-start and work self-directed in a fast-paced environment

If you are willing to work on solving real-world problems for truck owners, join us!
Job posted by
Rupali Goel

Consultant

at IQVIA

Founded 1969  •  Products & Services  •  100-1000 employees  •  Profitable
Data Warehouse (DWH)
Business Intelligence (BI)
Amazon Web Services (AWS)
SQL
MDM
Python
Pune
3 - 6 yrs
₹5L - ₹15L / yr
Consultants will have the opportunity to :
- Build a team with skills in ETL, reporting, MDM and ad-hoc analytics support
- Build technical solutions using latest open source and cloud based technologies
- Work closely with offshore senior consultant, onshore team and client's business and IT teams to gather project requirements
- Assist overall project execution from India - starting from project planning, team formation, system design and development, testing, UAT and deployment
- Build demos and POCs in support of business development for new and existing clients
- Prepare project documents and PowerPoint presentations for client communication
- Conduct training sessions to train associates and help shape their growth
Job posted by
Nishigandha Wagh

Senior Data Scientist

at SigTuple

Founded 2015  •  Products & Services  •  100-1000 employees  •  Raised funding
Data Science
R Programming
Python
Machine Learning (ML)
Bengaluru (Bangalore)
2 - 6 yrs
₹4L - ₹20L / yr
We are looking for highly passionate and enthusiastic players for solving problems in medical data analysis using a combination of image processing, machine learning and deep learning. As a Senior Computer Scientist at SigTuple you will have the onus of creating and leveraging state-of-the-art algorithms in machine learning, image processing and AI which will impact billions of people across the world by creating healthcare solutions that are accurate and affordable. You will collaborate with our current team of super awesome geeks in cracking super complex problems in a simple way by creating experiments, algorithms and prototypes that not only yield high accuracy but are also designed and engineered to scale. We believe in innovation - needless to say you will be part of creating intellectual properties like patents and contributing to the research communities by publishing papers - it is something that we value the most.

What we are looking for:
  • Hands-on experience along with a strong understanding of foundational algorithms in either machine learning, computer vision or deep learning. Prior experience of applying these techniques to images and videos would be good to have.
  • Hands-on experience in building and implementing advanced statistical analysis, machine learning and data mining algorithms.
  • Programming experience in C, C++, Python

What should you have:
  • 2 - 5 years of relevant experience in solving problems using machine learning or computer vision
  • Bachelor's degree, Master's degree or PhD in computer science or related fields.
  • Be an innovative and creative thinker, somebody who is not afraid to try something new and inspire others to do so.
  • Thrive in a fast-paced and fun environment.
  • Work with a bunch of data scientist geeks and disruptors striving for a big cause.

What SigTuple can offer:
You will be working with an incredible team of smart & supportive people, driven by a common force to change things for the better. With an opportunity to deliver high-calibre mobile and desktop solutions integrated with hardware that will transform healthcare from the ground up, there will ultimately be different challenges for you to face. Suffice to say that if you thrive in these environments, the buzz alone will keep you energized. In short, you will snag a place at the table of one of the most vibrant start-ups in the industry!!
Job posted by
Sneha Chakravorty

Data Scientist

at Jiva adventures

Founded 2017  •  Product  •  20-100 employees  •  Profitable
Data Science
Python
Machine Learning (ML)
Natural Language Processing (NLP)
Bengaluru (Bangalore)
1 - 4 yrs
₹5L - ₹15L / yr
  • Should be experienced in building machine learning pipelines.
  • Should be proficient in Python and scientific packages like pandas, numpy, scikit-learn, matplotlib, etc.
  • Experience with techniques such as data mining, distributed computing, applied mathematics and algorithms, probability & statistics.
  • Strong problem solving and conceptual thinking abilities.
  • Hands-on experience in model building.
  • Building highly customized and optimized data pipelines integrating third-party APIs and in-house data sources.
  • Extracting features from text data using tools like spaCy.
  • Deep learning for NLP using any modern framework.
Job posted by
Bharat Chinhara

Data Scientist

at Thrymr Software

Founded 2013  •  Products & Services  •  100-1000 employees  •  Profitable
R Programming
Python
Data Science
Hyderabad
0 - 2 yrs
₹4L - ₹5L / yr
Company Profile: Thrymr Software is an outsourced product development startup. Our primary development center is in Hyderabad, India, with a team of about 100+ members across various technical roles. Thrymr is also in Singapore, Hamburg (Germany) and Amsterdam (Netherlands). Thrymr works with companies to take complete ownership of building their end-to-end products, be it web or mobile applications or advanced analytics including GIS, machine learning or computer vision. http://thrymr.net

Job Location: Hyderabad, Financial District

Job Description: As a Data Scientist, you will evaluate and improve products. You will collaborate with a multidisciplinary team of engineers and analysts on a wide range of problems. Your responsibility will be (but not limited to): design, benchmark and tune machine-learning algorithms; painlessly and securely manipulate large and complex relational data sets; assemble complex machine-learning strategies; build new predictive apps and services.

Responsibilities:
1. Work with large, complex datasets. Solve difficult, non-routine analysis problems, applying advanced analytical methods as needed. Conduct end-to-end analysis that includes data gathering and requirements specification, processing, analysis, ongoing deliverables, and presentations.
2. Build and prototype analysis pipelines iteratively to provide insights at scale. Develop a comprehensive understanding of data structures and metrics, advocating for changes where needed for both product development and sales activity.
3. Interact cross-functionally with a wide variety of people and teams. Work closely with engineers to identify opportunities for, design, and assess improvements to various products.
4. Make business recommendations (e.g. cost-benefit, forecasting, experiment analysis) with effective presentations of findings at multiple levels of stakeholders through visual displays of quantitative information.
5. Research and develop analysis, forecasting, and optimization methods to improve the quality of user-facing products; example application areas include ads quality, search quality, end-user behavioral modeling, and live experiments.
6. Apart from the core data science work (subject to projects and availability), you may also be required to contribute to regular software development work such as web, backend and mobile application development.

Our Expectations:
1. You should have at least a B.Tech/B.Sc degree or equivalent practical experience (e.g., statistics, operations research, bioinformatics, economics, computational biology, computer science, mathematics, physics, electrical engineering, industrial engineering).
2. You should have practical working experience with statistical packages (e.g., R, Python, NumPy) and databases (e.g., SQL).
3. You should have experience articulating business questions and using mathematical techniques to arrive at an answer using available data, and experience translating analysis results into business recommendations.
4. You should have strong analytical and research skills.
5. You should have good academics.
6. You will have to be very proactive and submit your daily/weekly reports very diligently.
7. You should be comfortable with working exceptionally hard, as we are a startup and this is a high-performance work environment.
8. This is NOT a 9 to 5 kind of job; you should be able to work long hours.

What you can expect:
1. High-performance work culture
2. Short-term travel across the globe at very short notice
3. Accelerated learning (you will learn at least thrice as much compared to other companies in similar roles) and becoming a lot more technical
4. Happy-go-lucky team with zero politics
5. Zero tolerance for unprofessional behavior and substandard performance
6. Performance-based appraisals that can happen anytime, with considerable hikes compared to single-digit annual hikes as the market standard
Job posted by
Sharmistha Debnath