The job you are looking for has expired or has been deleted. Check out similar jobs below.

Similar jobs

Hadoop Engineers

Founded 2012
Products and services
Location: Bengaluru (Bangalore)
Experience: 4 - 7 years
Salary: 24 - 30 lacs/annum

Position Description
- Demonstrates up-to-date expertise in software engineering and applies it to the development, execution, and improvement of action plans
- Models compliance with company policies and procedures and supports the company's mission, values, and standards of ethics and integrity
- Provides and supports the implementation of business solutions
- Provides support to the business
- Troubleshoots business and production issues and provides on-call support

Minimum Qualifications
- BS/MS in Computer Science or a related field
- 5+ years' experience building web applications
- Solid understanding of computer science principles
- Excellent soft skills
- Understanding of major algorithms such as searching and sorting
- Strong skills in writing clean code using Java and J2EE technologies
- Understanding of how to engineer RESTful microservices, and knowledge of major software patterns such as MVC, Singleton, Facade, and Business Delegate
- Deep knowledge of web technologies such as HTML5, CSS, and JSON
- Good understanding of continuous integration tools and frameworks such as Jenkins
- Experience working in Agile environments such as Scrum and Kanban
- Experience with performance tuning for very large-scale applications
- Experience writing scripts in Perl, Python, and shell
- Experience writing jobs using open-source cluster computing frameworks such as Spark
- Relational database design experience (MySQL, Oracle), plus SOLR, NoSQL stores (Cassandra, MongoDB), and Hive
- Aptitude for writing clean, succinct and efficient code
- Attitude to thrive in a fun, fast-paced, start-up-like environment
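To make the RESTful microservice requirement above concrete, here is a minimal, hypothetical sketch of a single resource endpoint. The listing's stack is Java/J2EE; Flask is used here only as a compact illustration of the same REST pattern, and the routes and data are invented.

```python
# Minimal, hypothetical REST sketch (not this company's code): one resource with GET/POST.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store standing in for a real database (MySQL/Oracle/Cassandra in the listing).
JOBS = {1: {"id": 1, "title": "Hadoop Engineer", "location": "Bengaluru"}}

@app.route("/jobs/<int:job_id>", methods=["GET"])
def get_job(job_id):
    job = JOBS.get(job_id)
    return (jsonify(job), 200) if job else (jsonify({"error": "not found"}), 404)

@app.route("/jobs", methods=["POST"])
def create_job():
    payload = request.get_json() or {}
    job_id = max(JOBS) + 1
    JOBS[job_id] = {"id": job_id, **payload}
    return jsonify(JOBS[job_id]), 201

if __name__ == "__main__":
    app.run(port=5000)
```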

Job posted by Sampreetha Pai

Senior Research Data Scientist

Founded 2016
Products and services
Location: Pune
Experience: 0 - 5 years
Salary: 5 - 15 lacs/annum

LeanAgri is looking for a Research Scientist to solve some of the most challenging problems in the field of agriculture. Our mission and vision is to revolutionize agriculture in India using technology. Our customers, farmers, make up 50% of the Indian workforce yet account for just 13.7% of GDP. We aim to bring them technology that revolutionizes agriculture in India.

LeanAgri has expertise in creating solutions for agriculture that are capable of increasing agricultural yields and farmer incomes. We are working on building models for unsolved problems that affect millions of farmers across the world. We want to devise innovative solutions for these problems and employ them to help our customers.

Your role
You will work in a chief-scientist role, creating enhanced technology for solving problems in agriculture. The role encompasses designing and executing experiments in agriculture with the help of our sophisticated research farms, and using the generated data to create models that can be employed in the real world. Since your role will require you to work closely with agricultural systems, you will need to do a lot of research on farming patterns and the effect of biotic and abiotic parameters on agriculture. The final aim is to build mathematical models based on all the experiments that can provide real value to our customers (see the sketch after this section). Does this interest you? If yes, then let's get you on board.

Requirements
- Passion for working with interdisciplinary problems: you are not just a machine learning or agri engineer; you will be a scientist trying to solve problems in every way possible.
- Ability to learn and explore: you will read hundreds of research papers on agriculture and machine learning and try to build models encompassing both.
- Can dive deep to create solutions for hard problems. We will be doing innovation unheard of, so expect big challenges.
- Good mathematical modelling capabilities and experience writing some code. We will give you problem statements related to machine learning to see how you analyse them and work out solutions to those problems.
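For a rough, hypothetical sense of the kind of modelling described above, the sketch below fits a simple regression of crop yield against a few biotic and abiotic variables. The column names and numbers are invented for illustration and are not LeanAgri data.

```python
# Hypothetical sketch: regress plot-level crop yield on a few experimental variables.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Invented experimental-plot data (stand-in for real field measurements).
plots = pd.DataFrame({
    "rainfall_mm":   [620, 710, 540, 680, 750, 590],
    "soil_nitrogen": [0.8, 1.1, 0.6, 1.0, 1.2, 0.7],
    "pest_index":    [0.3, 0.1, 0.5, 0.2, 0.1, 0.4],
    "yield_t_ha":    [2.1, 2.9, 1.6, 2.6, 3.1, 1.9],
})

X, y = plots[["rainfall_mm", "soil_nitrogen", "pest_index"]], plots["yield_t_ha"]
model = LinearRegression().fit(X, y)
print(dict(zip(X.columns, model.coef_.round(4))))            # effect of each variable
print("predicted yield (t/ha):", model.predict(X.iloc[[0]])[0].round(2))
```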

Job posted by Kunal Grover

Machine Learning Engineers

Founded 2008
Products and services
Location: Pune
Experience: 3 - 7 years
Salary: 7 - 15 lacs/annum

We are looking for a Machine Learning Engineer with 3+ years of experience, with a background in statistics and hands-on experience in the Python ecosystem, using sound software engineering practices.

Skills & Knowledge:
- Formal knowledge of the fundamentals of probability and statistics, along with the ability to apply basic statistical analysis methods such as hypothesis testing, t-tests, ANOVA, etc.
- Hands-on knowledge of data formats, data extraction, loading, wrangling, transformation, pre-processing and analysis.
- Thorough understanding of data-modeling and machine-learning concepts.
- Complete understanding and ability to apply, implement and adapt standard implementations of machine learning algorithms.
- Good understanding and ability to apply and adapt neural networks and deep learning, including common high-level deep learning architectures like CNNs and RNNs.
- Fundamentals of computer science and programming, especially data structures (such as multi-dimensional arrays, trees, and graphs) and algorithms (such as searching, sorting, and dynamic programming).
- Fundamentals of software engineering and system design, such as requirements analysis, REST APIs, database queries, system and library calls, version control, etc.

Languages and Libraries:
- Hands-on experience with Python and Python libraries for data analysis and machine learning, especially scikit-learn, TensorFlow, Pandas, NumPy, Statsmodels, and SciPy.
- Experience with R and its ecosystem is a plus.
- Knowledge of other open-source machine learning and data modeling frameworks like Spark MLlib, H2O, etc. is a plus.
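As a small illustration of the workflow implied by the requirements above (pre-processing, a standard learning algorithm, and a basic statistical test), here is a hedged sketch using scikit-learn and SciPy on synthetic data; nothing in it is specific to this company.

```python
# Minimal sketch (not from the listing): a scikit-learn pipeline plus a SciPy t-test,
# run on synthetic data, to illustrate the kind of workflow described above.
from scipy import stats
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic data standing in for a real extracted and wrangled dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pre-processing and model in one pipeline.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Basic statistical analysis: two-sample t-test on one feature across the two classes.
t_stat, p_value = stats.ttest_ind(X[y == 0, 0], X[y == 1, 0])
print("t =", round(t_stat, 3), "p =", round(p_value, 4))
```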

Job posted by Juzar Malubhoy

Engineering Intern (Backend, Data science, Machine Learning, AI)

Founded 2015
Products and services
Location: Pune
Experience: 0 - 1 years
Salary: 0 - 0 /month

We are looking for smart summer interns in the field of server engineering, data science and machine learning.

Requirements:
1. In your chosen internship area, show us your projects and describe the hardest problems you faced and how you solved them.
2. Before applying, solve the Logical Programming test that we have on CutShort.

Internship duration: 2-3 months
Type: Full time (in office). Remote not available.
Stipend: 15K/month
Location: Pune
PPO: We would love to offer full-time roles to outstanding performers.

Job posted by Priyank Agrawal

Data Scientist

Founded 2015
Products and services
Location: Pune
Experience: 1 - 5 years
Salary: 4 - 10 lacs/annum

Recruitment has been a weird problem. While companies complain they can't get good talent, there are hordes of talented professionals who are unable to easily find their next big opportunity. At CutShort, we are building an intelligent, tech-enabled platform that removes noise and connects these two sides seamlessly. More than 4000 companies have used our platform to hire 3x more people in 1/3rd the time, and professionals get a great experience that just works.

As we take CutShort into its next growth phase, we want to make it more intelligent. A big initiative is to use our data to simplify UX, reduce user errors and generate better results for our users.

We should talk if:
1. You have at least 1 year of full-time experience using ML on real data to get real results.
2. Beyond the tools, you have a sound understanding of the underlying mathematical models.
3. You want to work in a fast-growing startup where you get complete ownership and minimal supervision.
4. You want to see your work actually making an impact on our users' lives.

Interested? Let's talk!

Job posted by Priyank Agrawal

Data Scientist - Precily AI

Founded 2016
Products and services
Location: Bengaluru (Bangalore), NCR (Delhi | Gurgaon | Noida)
Experience: 3 - 7 years
Salary: 4 - 25 lacs/annum

Job Description – Data Scientist

About the company
Precily is a startup headquartered in Noida, IN. Precily is currently working with leading consulting & law firms, research firms & technology companies. Aura (Precily AI) is a data-analysis platform for enterprises that increases the efficiency of the workforce by providing AI-based solutions.

Responsibilities & Skills Required:
The role requires deep knowledge in designing, planning, testing and deploying analytics solutions, including the following:
• Natural Language Processing (NLP), neural networks, text clustering, topic modelling, information extraction, information retrieval, deep learning, machine learning, cognitive science and analytics.
• Proven experience implementing and deploying advanced AI solutions using R/Python.
• Apply machine learning algorithms, statistical data analysis, text clustering, summarization, and extracting insights from multiple data points.
• Excellent understanding of analytics concepts and methodologies, including machine learning (unsupervised and supervised).
• Hands-on experience handling large amounts of structured and unstructured data.
• Measure, interpret, and derive learning from results of analysis that will lead to improvements in document processing.

Skills Required:
• Python, R, NLP, NLG, Machine Learning, Deep Learning & Neural Networks
• Word vectorizers
• Word embeddings (word2vec & GloVe)
• RNN (CNN vs RNN)
• LSTM & GRU (LSTM vs GRU)
• Pretrained embeddings (implementation in RNN)
• Unsupervised learning
• Supervised learning
• Deep neural networks
• Framework: Keras/TensorFlow
• Keras Embedding layer output

Please reach out to us: careers@precily.com
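Several of the listed skills (pretrained word embeddings, RNNs/LSTMs, and the Keras Embedding layer output) fit into a few lines of Keras. The sketch below is hypothetical and not Precily's code; the random matrix stands in for real word2vec/GloVe vectors, and the shape comments show the Embedding layer output the listing refers to.

```python
# Minimal, hypothetical sketch: a Keras Embedding layer (initialized from a stand-in
# "pretrained" matrix) feeding an LSTM for a binary text classification task.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, embed_dim, max_len = 5000, 100, 40
pretrained = np.random.rand(vocab_size, embed_dim)  # would be loaded from GloVe/word2vec

model = keras.Sequential([
    keras.Input(shape=(max_len,)),                   # integer word indices
    layers.Embedding(vocab_size, embed_dim,
                     embeddings_initializer=keras.initializers.Constant(pretrained),
                     trainable=False),               # Embedding output: (batch, 40, 100)
    layers.LSTM(64),                                 # LSTM output: (batch, 64)
    layers.Dense(1, activation="sigmoid"),           # e.g. binary document classification
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```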

Job posted by Bharath Rao

Artificial Intelligence Developers

Founded 2016
Products and services
Location: NCR (Delhi | Gurgaon | Noida)
Experience: 1 - 3 years
Salary: 3 - 9 lacs/annum

Precily AI: automatic summarization, shortening a business document or book with our AI. We create a summary of the major points of the original document; the AI can produce a coherent summary while taking into account variables such as length, writing style, and syntax. We're also working in the legal domain to reduce the high number of pending cases in India. We use Artificial Intelligence and Machine Learning capabilities such as NLP and neural networks in processing data to provide solutions for industries such as enterprise, healthcare, and legal.
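As a rough, hypothetical illustration of extractive summarization (one common approach, not necessarily Precily's method), the sketch below scores each sentence by the sum of its TF-IDF weights and keeps the top-scoring sentences; the sample document is invented.

```python
# Hypothetical extractive-summarization sketch: rank sentences by total TF-IDF weight.
import re
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def summarize(text: str, n_sentences: int = 2) -> str:
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    if len(sentences) <= n_sentences:
        return text
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    scores = np.asarray(tfidf.sum(axis=1)).ravel()       # one score per sentence
    top = sorted(np.argsort(scores)[-n_sentences:])      # keep original sentence order
    return " ".join(sentences[i] for i in top)

document = ("The court heard three petitions today. Two were adjourned for lack of counsel. "
            "The third, a land dispute pending since 2009, was listed for final arguments. "
            "The judge noted the backlog of cases in the district.")
print(summarize(document, n_sentences=2))
```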

Job posted by Bharath Rao

Senior Specialist - BigData Engineering

Founded 2000
Products and services
Location: Mumbai, NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: 15 - 35 lacs/annum

Role Brief:
6+ years of demonstrable experience designing technological solutions to complex data problems, and developing and testing modular, reusable, efficient and scalable code to implement those solutions.

Brief about Fractal & the team:
Fractal Analytics is leading Fortune 500 companies to leverage Big Data, analytics and technology to drive smarter, faster and more accurate decisions in every aspect of their business. Our Big Data capability team is hiring technologists who can produce beautiful, functional code to solve complex analytics problems. If you are an exceptional developer who loves to push the boundaries to solve complex business problems using innovative solutions, then we would like to talk with you.

Job Responsibilities:
- Provide technical leadership in the Big Data space (Hadoop stack: MapReduce, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc.; NoSQL stores such as Cassandra and HBase) across Fractal, and contribute to open-source Big Data technologies.
- Visualize and evangelize next-generation infrastructure in the Big Data space (batch, near-real-time and real-time technologies).
- Evaluate and recommend a Big Data technology stack that aligns with the company's technology.
- Be passionate about continuous learning, experimenting, applying and contributing towards cutting-edge open-source technologies and software paradigms.
- Drive significant technology initiatives end to end and across multiple layers of architecture.
- Provide strong technical leadership in adopting and contributing to open-source technologies related to Big Data across the company.
- Provide strong technical expertise (performance, application design, stack upgrades) to lead Platform Engineering.
- Define and drive best practices for the Big Data stack; evangelize these practices across teams and BUs.
- Drive operational excellence through root-cause analysis and continuous improvement for Big Data technologies and processes, and contribute back to the open-source community.
- Provide technical leadership and be a role model to data engineers pursuing a technical career path in engineering.
- Provide and inspire innovations that fuel the growth of Fractal as a whole.

Experience

Must have (ideally, this would include work on the following technologies):
- Expert-level proficiency in at least one of Java, C++ or Python (preferred); Scala knowledge is a strong advantage.
- Strong understanding of and experience with distributed computing frameworks, particularly Apache Hadoop 2.0 (YARN, MapReduce and HDFS) and associated technologies: one or more of Hive, Sqoop, Avro, Flume, Oozie, ZooKeeper, etc. Hands-on experience with Apache Spark and its components (Streaming, SQL, MLlib) is a strong advantage (see the sketch after this section).
- Operating knowledge of cloud computing platforms (AWS, especially the EMR, EC2, S3 and SWF services and the AWS CLI).
- Experience working within a Linux computing environment and using command-line tools, including knowledge of shell/Python scripting for automating common tasks.
- Ability to work in a team in an agile setting, familiarity with JIRA, and a clear understanding of how Git works.
- A technologist: loves to code and design.
In addition, the ideal candidate has great problem-solving skills and the ability and confidence to hack their way out of tight corners.

Relevant experience:
- Java, Python or C++ expertise
- Linux environment and shell scripting
- Distributed computing frameworks (Hadoop or Spark)
- Cloud computing platforms (AWS)

Good to have:
- A statistical or machine-learning DSL like R
- Distributed and low-latency (streaming) application architecture
- Row-store distributed DBMSs such as Cassandra
- Familiarity with API design

Qualification:
B.E/B.Tech/M.Tech in Computer Science or a related technical degree, or equivalent.
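To make the Spark items above concrete, here is a minimal, hypothetical PySpark job showing the DataFrame API and the equivalent Spark SQL query; the data and column names are invented, and this is a sketch of the pattern rather than Fractal's code.

```python
# Minimal, hypothetical PySpark sketch: build a small DataFrame, aggregate it with the
# DataFrame API, then express the same query in Spark SQL.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-rollup").getOrCreate()

# Inline stand-in data; in practice this would come from HDFS/S3 (CSV, Parquet, Hive, ...).
events = spark.createDataFrame(
    [("u1", "click", 1.0), ("u1", "purchase", 25.0), ("u2", "click", 1.0)],
    ["user_id", "event_type", "amount"],
)

# DataFrame API: total amount per event type.
events.groupBy("event_type").agg(F.sum("amount").alias("total_amount")).show()

# Equivalent Spark SQL against a temporary view.
events.createOrReplaceTempView("events")
spark.sql("SELECT event_type, SUM(amount) AS total_amount FROM events GROUP BY event_type").show()

spark.stop()
```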

Job posted by Jesvin Varghese

Data Scientist

Founded 2007
Products and services
Location: Bengaluru (Bangalore)
Experience: 0 - 3 years
Salary: 2 - 6 lacs/annum

We are an AI-based education platform that pushes for directed, focused, smart learning and helps every individual user at a personal level with the power of AI.

Responsibilities & Skills Required:
• Excellent programming skills; language-agnostic and able to implement the tested models into the existing platform seamlessly.
• Reinforcement learning, Natural Language Processing (NLP), neural networks, text clustering, topic modelling, information extraction, information retrieval, deep learning, machine learning, cognitive science, and analytics.
• Proven experience implementing and deploying advanced AI solutions using R/Python.
• Apply machine learning algorithms, statistical data analysis, text clustering, summarization, and extracting insights from multiple data points.
• Excellent understanding of analytics concepts and methodologies, including machine learning (unsupervised and supervised).
• Hands-on experience handling large amounts of structured and unstructured data.

Skills Required:
• Visualisation using d3.js, Chart.js, Tableau
• JavaScript
• Python, R, NLP, NLG, Machine Learning, Deep Learning & Neural Networks
• CNN
• Reinforcement Learning
• Unsupervised Learning
• Supervised Learning
• Deep Neural Networks
• Frameworks: Keras/TensorFlow
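As a small, hypothetical illustration of the text clustering skill listed above, the sketch below groups a handful of invented learner queries into topics using TF-IDF features and k-means; it is not this platform's code.

```python
# Hypothetical text-clustering sketch: TF-IDF features + k-means on invented queries.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

queries = [
    "how do I solve quadratic equations",
    "practice problems for quadratic equations",
    "explain Newton's second law",
    "numericals on Newton's laws of motion",
]
X = TfidfVectorizer(stop_words="english").fit_transform(queries)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for query, label in zip(queries, labels):
    print(label, query)   # queries sharing a label belong to the same topic cluster
```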

Job posted by Flyn Sequeira

Data Scientist

via Rapido
Founded 2016
Products and services
Location: Bengaluru (Bangalore)
Experience: 2 - 3 years
Salary: 15 - 20 lacs/annum

Job Description
Do you want to join an innovative team of scientists who use machine learning, NLP and statistical techniques to provide the best customer experience on earth? Do you want to change the way that people work with customer experience? Our team wants to lead the technical innovations in these spaces and set the bar for every other company that exists. We love data, and we have lots of it. We're looking for a business intelligence engineer to own end-to-end business problems and metrics which would have a direct impact on the bottom line of our business while improving customer experience. If you see how big data and cutting-edge technology can be used to improve customer experience, if you love to innovate, if you love to discover knowledge from big structured and unstructured data, and if you deliver results, then we want you on our team.

Major responsibilities:
- Analyze and extract relevant information from large amounts of both structured and unstructured data to help automate and optimize key processes.
- Design structured, multi-source data solutions to deliver the dashboards and reports that make data actionable.
- Drive the collection of new data and the refinement of existing data sources to continually improve data quality.
- Support data analysts and product managers by turning business requirements into functional specifications and then executing delivery.
- Lead the technical lifecycle of data presentation, from data sourcing to transforming data into user-facing metrics (see the sketch after this description).
- Establish scalable, efficient, automated processes for large-scale data analysis, model development, model validation and model implementation.

Basic qualifications:
- Bachelor's or Master's degree in Computer Science, Systems Analysis, or a related field.
- 3+ years' experience in data modeling, ETL development, and data warehousing.
- 3+ years' experience with BI/DW/ETL projects.
- Strong background in data relationships, modeling, and mining.
- Technical guru: SQL expert and a god in one of Python, Spark, Scala or Julia.
- Strong communication and data presentation skills.
- Strong problem-solving ability.

Preferred qualifications:
- Experience working with large-scale data warehousing and analytics projects, including AWS technologies (S3, EC2, Data Pipeline) and other big data technologies.
- Distributed programming experience is highly recommended.
- 2+ years of industry experience in predictive modeling and analysis.
- Technically deep and business-savvy enough to interface with all levels and disciplines within the organization.
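For a rough, hypothetical picture of the ETL-to-metric flow described above, the sketch below transforms a few invented ride events into a per-city daily metric and loads it into SQLite where reporting tools can query it with plain SQL; it is not Rapido's pipeline.

```python
# Hypothetical mini-ETL sketch: raw events -> per-city daily metric -> SQL-queryable table.
import sqlite3
import pandas as pd

# Extract: inline stand-in data; a real pipeline would pull from an operational store or S3.
rides = pd.DataFrame({
    "ride_id": [1, 2, 3, 4],
    "city": ["Bengaluru", "Bengaluru", "Pune", "Pune"],
    "status": ["completed", "cancelled", "completed", "completed"],
    "created_at": pd.to_datetime(["2019-03-01 09:10", "2019-03-01 09:20",
                                  "2019-03-01 10:05", "2019-03-02 08:30"]),
})

# Transform: keep completed rides and roll up to a per-city daily count.
daily = (rides[rides["status"] == "completed"]
         .assign(day=lambda d: d["created_at"].dt.strftime("%Y-%m-%d"))
         .groupby(["city", "day"], as_index=False)
         .agg(completed_rides=("ride_id", "count")))

# Load: write the metric table where dashboards and analysts can query it with SQL.
with sqlite3.connect(":memory:") as conn:
    daily.to_sql("daily_completed_rides", conn, index=False)
    print(pd.read_sql("SELECT * FROM daily_completed_rides", conn))
```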

Job posted by Pushpa Latha
Why apply via CutShort?
Connect with actual hiring teams and get fast responses. No third-party recruiters. No spam.