
Locations

Pune

Experience

4 - 8 years

Salary

1 - 16 lacs/annum

Skills

Data Science
Python
Machine Learning (ML)
Natural Language Processing (NLP)
Big Data
Agile/Scrum
Project Management

Job description

Description
• Must have direct, hands-on experience (4 years) building complex Data Science solutions
• Must have fundamental knowledge of Inferential Statistics
• Should have worked on Predictive Modelling using Python / R
• Experience should include: File I/O, Data Harmonization, Data Exploration, Machine Learning techniques (Supervised, Unsupervised), Multi-Dimensional Array Processing, Deep Learning, NLP, and Image Processing
• Prior experience in the Healthcare domain is a plus
• Experience using Big Data is a plus
• Should have excellent analytical and problem-solving ability and be able to grasp new concepts quickly
• Should be well familiar with the Agile project management methodology
• Should have excellent written and verbal communication skills
• Should be a team player with an open mind
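As an illustration of the supervised predictive-modelling workflow this role describes (file I/O, data exploration, model building in Python), here is a minimal sketch using pandas and scikit-learn. The file name, column names, and model choice are assumptions for the example, not part of the job posting.

```python
# Minimal supervised-modelling sketch (hypothetical dataset and columns).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# File I/O and light data exploration
df = pd.read_csv("patients.csv")            # hypothetical healthcare dataset
print(df.describe())

# Simple harmonization: drop incomplete records, keep numeric features
df = df.dropna()
X = df.drop(columns=["readmitted"]).select_dtypes("number")  # hypothetical target column
y = df["readmitted"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```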

About Saama Technologies

Saama is a US-headquartered global leader in the Big Data and Analytics space. Our unique and focused solutions help clients in the Insurance, Life Sciences, Healthcare, and CPG industry verticals achieve faster business results through data-driven insights.

Founded

1997

Type

Products & Services

Size

250+ employees

Stage

Profitable

Similar jobs

Data Scientist

Founded 2003
Products and services
Location: Bengaluru (Bangalore), Chennai, Pune, Mumbai
Experience: 7 - 13 years
Salary: 14 - 20 lacs/annum

Requirement Specifications
Job Title: Data Scientist
Experience: 7 to 10 years
Work Location: Mumbai, Bengaluru, Chennai
Job Role: Permanent
Notice Period: Immediate to 60 days

Job description:
• Support delivery of one or more data science use cases, leading on data discovery and model building activities
• Conceptualize and quickly build POCs on new product ideas; should be willing to work as an individual contributor
• Open to learning and implementing newer tools/products
• Experiment with and identify the best methods, techniques, and algorithms for analytical problems
• Operationalize: work closely with the engineering, infrastructure, service management, and business teams to operationalize use cases

Essential Skills:
• Minimum 2-7 years of hands-on experience with statistical software tools: SQL, R, Python
• 3+ years' experience in business analytics, forecasting, or business planning with emphasis on analytical modeling, quantitative reasoning, and metrics reporting
• Experience working with large data sets to extract business insights or build predictive models
• Proficiency in one or more statistical tools/languages (Python, Scala, R, SPSS, or SAS) and related packages like Pandas, SciPy/Scikit-learn, NumPy, etc.
• Good data intuition and analysis skills; SQL and PL/SQL knowledge is a must
• Ability to manage and transform a variety of datasets to cleanse, join, and aggregate them (see the sketch below)
• Hands-on experience running methods such as Regression, Random Forest, k-NN, k-Means, boosted trees, SVM, Neural Networks, text mining, NLP, statistical modelling, data mining, exploratory data analysis, and statistics (hypothesis testing, descriptive statistics)
• Deep domain knowledge (BFSI, Manufacturing, Auto, Airlines, Supply Chain, Retail & CPG)
• Demonstrated ability to work under time constraints while delivering incremental value

Education: Minimum a Master's in Statistics, or a PhD in domains linked to applied statistics, applied physics, Artificial Intelligence, Computer Vision, etc.; BE/BTech/BSc Statistics/BSc Maths
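To illustrate the "cleanse, join, aggregate" data handling mentioned above, a minimal pandas sketch follows; the file names, keys, and columns are hypothetical.

```python
# Hypothetical cleanse / join / aggregate workflow with pandas.
import pandas as pd

orders = pd.read_csv("orders.csv")        # hypothetical order-level data
customers = pd.read_csv("customers.csv")  # hypothetical customer master

# Cleanse: normalize the join key and drop incomplete records
orders["customer_id"] = orders["customer_id"].str.strip().str.upper()
orders = orders.dropna(subset=["customer_id", "amount"])

# Join: enrich orders with customer attributes
joined = orders.merge(customers, on="customer_id", how="left")

# Aggregate: revenue and order count per region
summary = (
    joined.groupby("region")
    .agg(total_revenue=("amount", "sum"), orders=("order_id", "count"))
    .reset_index()
)
print(summary)
```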

Job posted by
Sowmya M

Hadoop Engineers

Founded 2012
Products and services
Location: Bengaluru (Bangalore)
Experience: 4 - 7 years
Salary: 24 - 30 lacs/annum

Position Description
• Demonstrates up-to-date expertise in Software Engineering and applies it to the development, execution, and improvement of action plans
• Models compliance with company policies and procedures and supports the company mission, values, and standards of ethics and integrity
• Provides and supports the implementation of business solutions
• Provides support to the business; troubleshoots business and production issues and provides on-call support

Minimum Qualifications
• BS/MS in Computer Science or a related field
• 5+ years' experience building web applications
• Solid understanding of computer science principles and excellent soft skills
• Understanding of major algorithms such as searching and sorting
• Strong skills in writing clean code using languages like Java and J2EE technologies
• Understanding of how to engineer RESTful microservices, and knowledge of major software patterns like MVC, Singleton, Facade, and Business Delegate
• Deep knowledge of web technologies such as HTML5, CSS, and JSON
• Good understanding of continuous integration tools and frameworks like Jenkins
• Experience working in Agile environments such as Scrum and Kanban
• Experience with performance tuning for very large-scale apps
• Experience writing scripts using Perl, Python, and shell scripting
• Experience writing jobs using open source cluster computing frameworks like Spark (see the sketch below)
• Database design experience: relational (MySQL, Oracle), SOLR, and NoSQL (Cassandra, MongoDB, Hive)
• Aptitude for writing clean, succinct, and efficient code
• Attitude to thrive in a fun, fast-paced, start-up-like environment
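As a small example of the kind of Spark job the listing refers to, here is a PySpark sketch in Python; the input path, schema, and output location are assumptions for illustration.

```python
# Minimal PySpark batch job (hypothetical input path and columns).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("event-counts").getOrCreate()

# Read raw JSON events and aggregate counts per event type
events = spark.read.json("hdfs:///data/events/")        # hypothetical location
counts = events.groupBy("event_type").agg(F.count("*").alias("n"))

counts.write.mode("overwrite").parquet("hdfs:///data/event_counts/")
spark.stop()
```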

Job posted by
Sampreetha Pai

Machine Learning Engineers

Founded 2008
Products and services
Location: Pune
Experience: 3 - 7 years
Salary: 7 - 15 lacs/annum

We are looking for a Machine Learning Engineer with 3+ years of experience, a background in Statistics, and hands-on experience in the Python ecosystem, using sound software engineering practices.

Skills & Knowledge:
- Formal knowledge of the fundamentals of probability and statistics, along with the ability to apply basic statistical analysis methods like hypothesis testing, t-tests, ANOVA, etc. (see the sketch below)
- Hands-on knowledge of data formats, data extraction, loading, wrangling, transformation, pre-processing, and analysis
- Thorough understanding of data-modeling and machine-learning concepts
- Complete understanding and ability to apply, implement, and adapt standard implementations of machine learning algorithms
- Good understanding and ability to apply and adapt Neural Networks and Deep Learning, including common high-level Deep Learning architectures like CNNs and RNNs
- Fundamentals of computer science and programming, especially data structures (like multi-dimensional arrays, trees, and graphs) and algorithms (like searching, sorting, and dynamic programming)
- Fundamentals of software engineering and system design, such as requirements analysis, REST APIs, database queries, system and library calls, version control, etc.

Languages and Libraries:
- Hands-on experience with Python and Python libraries for data analysis and machine learning, especially Scikit-learn, Tensorflow, Pandas, Numpy, Statsmodels, and Scipy
- Experience with R and its ecosystem is a plus
- Knowledge of other open source machine learning and data modeling frameworks like Spark MLlib, H2O, etc. is a plus
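For the statistical analysis side (hypothesis testing, t-tests, ANOVA), a minimal SciPy sketch with simulated data might look like this; the group means and sample sizes are invented for the example.

```python
# Hypothesis-testing sketch with simulated data (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=10.0, scale=2.0, size=50)   # simulated samples
group_b = rng.normal(loc=11.0, scale=2.0, size=50)
group_c = rng.normal(loc=10.5, scale=2.0, size=50)

# Two-sample t-test: do groups A and B differ in mean?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t-test: t={t_stat:.2f}, p={p_value:.4f}")

# One-way ANOVA across all three groups
f_stat, p_anova = stats.f_oneway(group_a, group_b, group_c)
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")
```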

Job posted by
Juzar Malubhoy

Tech Lead

Founded 2017
Products and services
Location: Kolkata
Experience: 2 - 7 years
Salary: 5 - 8 lacs/annum

We are a research lab with roots in innovation, looking for someone who can take the reins of our AI-based development think tank. Depending on work ethic and results, the salary can be renegotiated in five months.

Job posted by
Praveen Baheti

Senior Research Data Scientist

Founded 2016
Products and services
Location: Pune
Experience: 0 - 5 years
Salary: 5 - 15 lacs/annum

LeanAgri is looking for a Research Scientist to solve some of the most challenging problems in the field of Agriculture. Our mission is to revolutionize Agriculture in India using technology. Our customers, farmers, make up 50% of the Indian workforce while accounting for just 13.7% of GDP, and we aim to bring them technology that transforms how they farm.

LeanAgri has expertise in creating solutions for Agriculture that are capable of increasing agricultural yields and farmer incomes. We are working on building models for unsolved problems that affect millions of farmers across the world, and we want to devise innovative solutions to these problems and employ them to help our customers.

Your role
You will work in a chief scientist role, creating enhanced technology for solving problems in Agriculture. The role encompasses designing and executing experiments in Agriculture with the help of our sophisticated research farms, and using the generated data to create models that can be employed in the real world. Since the role requires you to work closely with agricultural systems, you will need to research farming patterns and the effects of biotic and abiotic parameters in Agriculture. The final aim is to build mathematical models based on all the experiments that provide real value to our customers. Does this interest you? If yes, then let's get you onboard.

Requirements
• Passion for working on interdisciplinary problems: you are not just a machine learning or an Agri engineer; you will be a scientist trying to solve problems in every way possible
• Ability to learn and explore: you will be reading hundreds of research papers on Agriculture and Machine Learning and trying to build models encompassing both
• Can dive deep to create solutions for hard problems; we will be doing innovation unheard of, so expect big challenges
• Good mathematical modelling capabilities and experience with writing some code; we will give you Machine Learning problem statements to see how you analyse them and work toward solutions
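As a toy illustration of the kind of mathematical modelling described (fitting experimental data to a functional form), here is a SciPy curve-fitting sketch; the yield-response function, input variable, and data points are invented for the example.

```python
# Toy yield-response model fit (invented functional form and data).
import numpy as np
from scipy.optimize import curve_fit

def yield_response(x, a, b, c):
    # Hypothetical saturating response of yield to an input (e.g. fertiliser dose)
    return a * (1 - np.exp(-b * x)) + c

dose = np.array([0, 20, 40, 60, 80, 100], dtype=float)   # invented trial data
observed_yield = np.array([1.8, 2.9, 3.6, 4.0, 4.2, 4.3])

params, _ = curve_fit(yield_response, dose, observed_yield, p0=[3.0, 0.05, 1.5])
print("fitted parameters (a, b, c):", params)
```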

Job posted by
Kunal Grover

Data Scientist

Founded 2007
Products and services
Location: Bengaluru (Bangalore)
Experience: 0 - 3 years
Salary: 2 - 6 lacs/annum

We are an AI-based education platform that pushes for directed, focused, smart learning, helping every individual user at a personal level with the power of AI.

Responsibilities & Skills Required:
• Excellent programming skills; language agnostic and able to implement the tested models into the existing platform seamlessly
• Reinforcement Learning, Natural Language Processing (NLP), Neural Networks, Text Clustering, Topic Modelling, Information Extraction, Information Retrieval, Deep Learning, Machine Learning, cognitive science, and analytics
• Proven experience implementing and deploying advanced AI solutions using R/Python
• Apply machine learning algorithms, statistical data analysis, text clustering (see the sketch below), and summarization, extracting insights from multiple data points
• Excellent understanding of analytics concepts and methodologies, including machine learning (unsupervised and supervised)
• Hands-on experience handling large amounts of structured and unstructured data

Skills Required:
• Visualisation using d3.js, Chart.js, Tableau
• Javascript
• Python, R, NLP, NLG, Machine Learning, Deep Learning & Neural Networks
• CNN
• Reinforcement Learning
• Unsupervised Learning
• Supervised Learning
• Deep Neural Networks
• Frameworks: Keras/Tensorflow
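To illustrate the text-clustering requirement, a minimal scikit-learn sketch follows; the documents and cluster count are made up for the example.

```python
# Minimal text-clustering sketch (made-up documents, illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "photosynthesis converts light into chemical energy",
    "cells use chlorophyll to capture sunlight",
    "newton's laws describe force and motion",
    "acceleration is the rate of change of velocity",
]

# Represent documents as TF-IDF vectors, then cluster them
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)

km = KMeans(n_clusters=2, random_state=0, n_init=10)
labels = km.fit_predict(X)
print(list(zip(labels, docs)))
```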

Job posted by
Flyn Sequeira

Machine Learning Data Engineer

Founded 2006
Products and services
Location: NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore)
Experience: 3 - 9 years
Salary: 12 - 25 lacs/annum

Job Description

Who are we?
BlueOptima provides industry-leading objective metrics in software development using its proprietary Coding Effort Analytics, enabling large organisations to deliver better software, faster, and at lower cost. Founded in 2007, BlueOptima is a profitable, independent, high-growth software vendor commercialising technology initially devised in seminal research carried out at Cambridge University. We are headquartered in London, with offices in New York, Bangalore, and Gurgaon.

BlueOptima's technology is deployed with global enterprises driving value from their software development activities. For example, we work with seven of the world's top ten Universal Banks (by revenue) and three of the world's top ten telecommunications companies (by revenue, excluding China). Our technology is pushing the limits of complex analytics on large data-sets, with more than 15 billion static source code metric observations of software engineers working in an enterprise software development environment. BlueOptima is an Equal Opportunities employer.

Whom are we looking for?
BlueOptima has a truly unique collection of vast datasets relating to the changes that software developers make in source code when working in an enterprise software development environment. We are looking for analytically minded individuals with expertise in statistical analysis, Machine Learning, and Data Engineering, who will work on real-world problems unique to our data and develop new algorithms and tools to solve them. The use of Machine Learning is a growing internal initiative, and we have a large range of opportunities to expand the value that we deliver to our clients.

What does the role involve?
As a Data Engineer you will take problems and ideas from our onsite Data Scientists, analyse what is involved, and spec and build intelligent solutions using our data. You will take responsibility for the end-to-end process. Beyond this, you are encouraged to identify new ideas, metrics, and opportunities within our dataset, and to identify and report when an idea or approach isn't being successful and should be stopped. You will use tools ranging from advanced Machine Learning algorithms to statistical approaches and will be able to select the best tool for the job. Finally, you will support and identify improvements to our existing algorithms and approaches.

Responsibilities include:
• Solve problems using Machine Learning and advanced statistical techniques based on business needs
• Identify opportunities to add value and solve problems using Machine Learning across the business
• Develop tools to help senior managers identify actionable information based on metrics like BlueOptima Coding Effort, and explain the insight they reveal to support decision-making
• Develop additional and supporting metrics for the BlueOptima product and data, predominantly using R and Python and/or similar statistical tools
• Produce ad hoc or bespoke analysis and reports
• Coordinate with both engineers and client-side data scientists to understand requirements and opportunities to add value
• Spec the requirements to solve a problem, identify the critical path and timelines, and give clear estimates
• Resolve issues, find improvements to existing Machine Learning solutions, and explain their impacts

ESSENTIAL SKILLS / EXPERIENCE REQUIRED:
• Minimum Bachelor's degree in Computer Science, Statistics, Mathematics, or equivalent
• Minimum of 3+ years' experience developing solutions using Machine Learning algorithms
• Strong analytical skills demonstrated through data engineering or similar experience
• Strong fundamentals in statistical analysis using R or a similar programming language
• Experience applying Machine Learning algorithms and techniques to solve problems on structured and unstructured data
• An in-depth understanding of a wide range of Machine Learning techniques, and an understanding of which algorithms are suited to which problems
• A drive to not only identify a solution to a technical problem but to see it all the way through to inclusion in a product
• Strong written and verbal communication skills
• Strong interpersonal and time management skills

DESIRABLE SKILLS / EXPERIENCE:
• Experience automating basic tasks to maximise time for more important problems
• Experience with PostgreSQL or a similar relational database
• Experience with MongoDB or a similar NoSQL database
• Experience with data visualisation (via Tableau, Qlikview, SAS BI, or similar) is preferable
• Experience using task-tracking systems (e.g. Jira) and distributed version control systems (e.g. Git)
• Comfortable explaining very technical concepts to non-expert people
• Experience of project management and designing processes to deliver successful outcomes

Why work for us?
• Work with a unique and truly vast collection of datasets
• Above-market remuneration
• Stimulating challenges that fully utilise your skills
• Work on real-world technical problems whose solutions cannot simply be found on the internet
• Work alongside other passionate, talented engineers
• Hardware of your choice
• Our fast-growing company offers the potential for rapid career progression

Job posted by
Rashmi Anand

Machine Learning Data Engineer

Founded 2006
Products and services
Location: NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore)
Experience: 1 - 4 years
Salary: 12 - 15 lacs/annum

Machine Learning Data Engineer (Engineering), Gurgaon, Haryana, India

Job Description

Who are we?
BlueOptima provides industry-leading objective metrics in software development using its proprietary Coding Effort Analytics, enabling large organisations to deliver better software, faster, and at lower cost. Founded in 2007, BlueOptima is a profitable, independent, high-growth software vendor commercialising technology initially devised in seminal research carried out at Cambridge University. We are headquartered in London, with offices in New York, Bangalore, and Gurgaon.

BlueOptima's technology is deployed with global enterprises driving value from their software development activities. For example, we work with seven of the world's top ten Universal Banks (by revenue) and three of the world's top ten telecommunications companies (by revenue, excluding China). Our technology is pushing the limits of complex analytics on large data-sets, with more than 15 billion static source code metric observations of software engineers working in an enterprise software development environment. BlueOptima is an Equal Opportunities employer.

Whom are we looking for?
BlueOptima has a truly unique collection of vast datasets relating to the changes that software developers make in source code when working in an enterprise software development environment. We are looking for analytically minded individuals with expertise in statistical analysis, Machine Learning, and Data Engineering, who will work on real-world problems unique to our data and develop new algorithms and tools to solve them. The use of Machine Learning is a growing internal initiative, and we have a large range of opportunities to expand the value that we deliver to our clients.

What does the role involve?
As a Data Engineer you will take problems and ideas from our onsite Data Scientists, analyse what is involved, and spec and build intelligent solutions using our data. You will take responsibility for the end-to-end process. Beyond this, you are encouraged to identify new ideas, metrics, and opportunities within our dataset, and to identify and report when an idea or approach isn't being successful and should be stopped. You will use tools ranging from advanced Machine Learning algorithms to statistical approaches and will be able to select the best tool for the job. Finally, you will support and identify improvements to our existing algorithms and approaches.

Responsibilities include:
• Solve problems using Machine Learning and advanced statistical techniques based on business needs
• Identify opportunities to add value and solve problems using Machine Learning across the business
• Develop tools to help senior managers identify actionable information based on metrics like BlueOptima Coding Effort, and explain the insight they reveal to support decision-making
• Develop additional and supporting metrics for the BlueOptima product and data, predominantly using R and Python and/or similar statistical tools
• Produce ad hoc or bespoke analysis and reports
• Coordinate with both engineers and client-side data scientists to understand requirements and opportunities to add value
• Spec the requirements to solve a problem, identify the critical path and timelines, and give clear estimates
• Resolve issues, find improvements to existing Machine Learning solutions, and explain their impacts

ESSENTIAL SKILLS / EXPERIENCE REQUIRED:
• Minimum Bachelor's degree in Computer Science, Statistics, Mathematics, or equivalent
• Minimum of 3+ years' experience developing solutions using Machine Learning algorithms
• Strong analytical skills demonstrated through data engineering or similar experience
• Strong fundamentals in statistical analysis using R or a similar programming language
• Experience applying Machine Learning algorithms and techniques to solve problems on structured and unstructured data
• An in-depth understanding of a wide range of Machine Learning techniques, and an understanding of which algorithms are suited to which problems
• A drive to not only identify a solution to a technical problem but to see it all the way through to inclusion in a product
• Strong written and verbal communication skills
• Strong interpersonal and time management skills

DESIRABLE SKILLS / EXPERIENCE:
• Experience automating basic tasks to maximise time for more important problems
• Experience with PostgreSQL or a similar relational database
• Experience with MongoDB or a similar NoSQL database
• Experience with data visualisation (via Tableau, Qlikview, SAS BI, or similar) is preferable
• Experience using task-tracking systems (e.g. Jira) and distributed version control systems (e.g. Git)
• Comfortable explaining very technical concepts to non-expert people
• Experience of project management and designing processes to deliver successful outcomes

Why work for us?
• Work with a unique and truly vast collection of datasets
• Above-market remuneration
• Stimulating challenges that fully utilise your skills
• Work on real-world technical problems whose solutions cannot simply be found on the internet
• Work alongside other passionate, talented engineers
• Hardware of your choice
• Our fast-growing company offers the potential for rapid career progression

Job posted by
Deepthi Ravindran

Data Scientist - Precily AI

Founded 2016
Products and services
Location: Bengaluru (Bangalore), NCR (Delhi | Gurgaon | Noida)
Experience: 3 - 7 years
Salary: 4 - 25 lacs/annum

Job Description – Data Scientist

About Company Profile
Precily is a startup headquartered in Noida, IN. Precily is currently working with leading consulting & law firms, research firms, and technology companies. Aura (Precily AI) is a data-analysis platform for enterprises that increases the efficiency of the workforce by providing AI-based solutions.

Responsibilities & Skills Required:
The role requires deep knowledge in designing, planning, testing, and deploying analytics solutions, including the following:
• Natural Language Processing (NLP), Neural Networks, Text Clustering, Topic Modelling, Information Extraction, Information Retrieval, Deep Learning, Machine Learning, cognitive science, and analytics
• Proven experience implementing and deploying advanced AI solutions using R/Python
• Apply machine learning algorithms, statistical data analysis, text clustering, and summarization, extracting insights from multiple data points
• Excellent understanding of analytics concepts and methodologies, including machine learning (unsupervised and supervised)
• Hands-on experience handling large amounts of structured and unstructured data
• Measure, interpret, and derive learning from results of analysis that will lead to improvements in document processing

Skills Required:
• Python, R, NLP, NLG, Machine Learning, Deep Learning & Neural Networks
• Word Vectorizers
• Word Embeddings (word2vec & GloVe)
• RNN (CNN vs RNN)
• LSTM & GRU (LSTM vs GRU)
• Pretrained Embeddings (implementation in RNN)
• Unsupervised Learning
• Supervised Learning
• Deep Neural Networks
• Framework: Keras/Tensorflow
• Keras Embedding Layer output (see the sketch below)

Please reach out to us: careers@precily.com
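As a small illustration of a Keras Embedding layer feeding an LSTM, as referenced in the skills list, here is a hedged sketch; the vocabulary size, sequence length, and binary-classification task are assumptions for the example.

```python
# Minimal Keras Embedding + LSTM sketch (hypothetical sizes and task).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

vocab_size, seq_len = 5000, 40          # assumed vocabulary and sequence length

model = Sequential([
    Embedding(input_dim=vocab_size, output_dim=64),  # learned word vectors
    LSTM(32),                                        # sequence encoder
    Dense(1, activation="sigmoid"),                  # e.g. a binary text label
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy integer-encoded sequences just to show the shapes involved
X = np.random.randint(1, vocab_size, size=(8, seq_len))
y = np.random.randint(0, 2, size=(8,))
model.fit(X, y, epochs=1, verbose=0)
```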

Job posted by
Bharath Rao

Head of Engineering - AI

Founded 2016
Products and services
Location: NCR (Delhi | Gurgaon | Noida)
Experience: 3 - 13 years
Salary: 10 - 30 lacs/annum

Precily is an Artificial Intelligence platform for enterprises that increases the efficiency of the workforce by providing AI-based solutions. Head of Engineering is a leadership role reporting to the CTO of Precily. We're looking for candidates with strong team management skills and expertise in AI & Machine Learning.

Job posted by
Bharath Rao
Want to apply for this role at Saama Technologies?
Hiring team responds within a day
Why apply via CutShort?
Connect directly with hiring teams and get a fast response. No third-party recruiters. No spam.