
11+ Clustering Jobs in India

Apply to 11+ Clustering Jobs on CutShort.io. Find your next job, effortlessly. Browse Clustering Jobs and apply today!

IT Product based Org.
Agency job
via OfficeDay Innovation
Ahmedabad
3 - 5 yrs
₹10L - ₹12L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Deep Learning
+7 more
  • 3+ years of experience applying AI/ML/NLP/deep learning and data-driven statistical analysis and modelling solutions.
  • Programming skills in Python and a solid knowledge of statistics.
  • Hands-on experience developing supervised and unsupervised machine learning algorithms (regression, decision trees/random forests, neural networks, feature selection/reduction, clustering, parameter tuning, etc.). Familiarity with reinforcement learning is highly desirable.
  • Experience in the financial domain and familiarity with financial models are highly desirable.
  • Experience in image processing and computer vision.
  • Experience building data pipelines.
  • Good understanding of data preparation, model planning, model training, model validation, model deployment and performance tuning.
  • Hands-on experience with some of these methods: regression, decision trees, CART, random forests, boosting, evolutionary programming, neural networks, support vector machines, ensemble methods, association rules, principal component analysis, clustering, artificial intelligence.
  • Experience working with large data sets in a Postgres database.
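As an editorial illustration of the clustering skills this posting asks for (not material from the listing itself), here is a minimal k-means sketch in plain Python, using toy data and a deterministic initialization from the first k points:

```python
def kmeans(points, k, iters=10):
    """Minimal k-means for 2-D points; returns (centroids, labels).
    Illustrative sketch only (deterministic init from the first k points)."""
    centroids = [tuple(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        for i, (x, y) in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: (x - centroids[c][0]) ** 2
                                        + (y - centroids[c][1]) ** 2)
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = (sum(px for px, _ in members) / len(members),
                                sum(py for _, py in members) / len(members))
    return centroids, labels

# Two well-separated blobs should land in different clusters.
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 5.0), (5.1, 5.2)]
centroids, labels = kmeans(data, k=2)
```

In practice a library implementation (e.g. scikit-learn's KMeans, with k-means++ initialization and multiple restarts) would be used instead of a hand-rolled loop.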

 

XpressBees
Pune, Bengaluru (Bangalore)
6 - 8 yrs
₹15L - ₹25L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Artificial Intelligence (AI)
+6 more
Company Profile
XpressBees – a logistics company started in 2015 – is amongst the fastest-growing companies in its sector. Our vision to evolve into a strong full-service logistics organization reflects itself in our various lines of business: B2C logistics, 3PL, B2B Xpress, hyperlocal, and cross-border logistics.
Our strong domain expertise and constant focus on innovation have helped us rapidly evolve as the most trusted logistics partner in India. XpressBees has progressively built best-in-class technology platforms, an extensive logistics network, and a seamless last-mile management system.
While on this aggressive growth path, we seek to become the one-stop shop for end-to-end logistics solutions. Our big focus areas for the near future include strengthening our presence as a service provider of choice and leveraging the power of technology to drive supply chain efficiencies.
Job Overview
XpressBees is enriching and scaling its end-to-end logistics solutions at a high pace. This is a great opportunity to join the team forming and delivering the operational strategy behind Artificial Intelligence / Machine Learning and Data Engineering, leading projects and teams of AI engineers collaborating with data scientists. In this role, you will build high-performance AI/ML solutions using groundbreaking AI/ML and big data technologies. You will need to understand business requirements and convert them into solvable data science problem statements. You will be involved in end-to-end AI/ML projects, from small-scale POCs all the way to full-scale ML pipelines in production.
Seasoned AI/ML engineers will own the implementation and productionization of cutting-edge AI-driven algorithmic components for search, recommendation, and insights, improving the efficiency of the logistics supply chain and serving the customer better.
You will apply innovative ML tools and concepts to deliver value to our teams and customers and make an impact on the organization while solving challenging problems in AI, ML, data analytics, and computer science.
Opportunities for application:
- Route Optimization
- Address / Geo-Coding Engine
- Anomaly detection, Computer Vision (e.g. loading / unloading)
- Fraud Detection (fake delivery attempts)
- Promise Recommendation Engine etc.
- Customer & Tech support solutions, e.g. chat bots.
- Breach detection / prediction
An Artificial Intelligence Engineer will work in areas such as:
- Deep Learning, NLP, Reinforcement Learning
- Machine Learning: Logistic Regression, Decision Trees, Random Forests, XGBoost, etc.
- Driving Optimization via LPs, MILPs, Stochastic Programs, and MDPs
- Operations Research, Supply Chain Optimization, and Data Analytics/Visualization
- Computer Vision and OCR technologies
The AI Engineering team enables internal teams to add AI capabilities to their apps and workflows easily via APIs, without each team needing to build AI expertise, covering Decision Support, NLP, and Computer Vision for public clouds and the enterprise in NLU, Vision, and Conversational AI.
The candidate is adept at working with large data sets to find opportunities for product and process optimization, and at using models to test the effectiveness of different courses of action. They must have experience with a variety of data mining/data analysis methods and data tools, building and implementing models, using/creating algorithms, and creating/running simulations. They must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and for working with stakeholders to improve business outcomes.
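One of the application areas mentioned above, route optimization, can be sketched with a greedy nearest-neighbour heuristic. This is a toy editorial illustration, not XpressBees' actual method (real routing typically uses MILP solvers or metaheuristics):

```python
import math

def nearest_neighbour_route(stops, depot=(0.0, 0.0)):
    """Greedy route: from the depot, always visit the nearest unvisited stop.
    A toy heuristic for illustration; real route optimization uses
    MILPs or metaheuristics with time windows and capacities."""
    remaining = list(stops)
    route, current = [], depot
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

stops = [(2.0, 0.0), (1.0, 0.0), (3.0, 0.0)]
route = nearest_neighbour_route(stops)
# Visits the collinear stops in order of increasing distance from the depot
```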

Roles & Responsibilities
● Develop scalable infrastructure, including microservices and backend, that automates training and
deployment of ML models.
● Building cloud services in Decision Support (Anomaly Detection, Time series forecasting, Fraud detection,
Risk prevention, Predictive analytics), computer vision, natural language processing (NLP) and speech that
work out of the box.
● Brainstorm and design various POCs using ML/DL/NLP solutions for new or existing enterprise problems.
● Work with fellow data scientists/software engineers to build out other parts of the infrastructure, effectively communicating your needs, understanding theirs, and addressing external and internal stakeholders' product challenges.
● Build the core of AI services such as Decision Support, Vision, Speech, Text, NLP, NLU, and others.
● Leverage cloud technology: AWS, GCP, Azure.
● Experiment with ML models in Python using machine learning libraries (PyTorch, TensorFlow) and big data technologies such as Hadoop, HBase, Spark, etc.
● Work with stakeholders throughout the organization to identify opportunities for leveraging company data to
drive business solutions.
● Mine and analyze data from company databases to drive optimization and improvement of product
development, marketing techniques and business strategies.
● Assess the effectiveness and accuracy of new data sources and data gathering techniques.
● Develop custom data models and algorithms to apply to data sets.
● Use predictive modeling to increase and optimize customer experience, supply chain metrics, and other business outcomes.
● Develop company A/B testing framework and test model quality.
● Coordinate with different functional teams to implement models and monitor outcomes.
● Develop processes and tools to monitor and analyze model performance and data accuracy.
● Deliver machine learning and data science projects using data science techniques and associated libraries such as AI/ML or equivalent NLP (Natural Language Processing) packages. Such techniques include a strong understanding of statistical models, probabilistic algorithms, classification, clustering, deep learning and related approaches as they apply to financial applications.
● The role will encourage you to learn a wide array of capabilities, toolsets and architectural patterns for
successful delivery.
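The A/B testing responsibility listed above rests on standard hypothesis testing; as a hedged editorial sketch (not the company's framework), a two-proportion z-test for comparing conversion rates:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Z statistic for comparing conversion rates of variants A and B,
    using the pooled-proportion standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 10% vs 13% conversion on 2000 users each
z = two_proportion_ztest(conv_a=200, n_a=2000, conv_b=260, n_b=2000)
# |z| > 1.96 indicates significance at the 5% level (two-sided)
```

A production A/B framework would add sample-size planning, sequential-testing corrections, and guardrail metrics on top of a test like this.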
What is required of you?
You will get an opportunity to build and operate a suite of massive scale, integrated data/ML platforms in a broadly
distributed, multi-tenant cloud environment.
● B.S., M.S., or Ph.D. in Computer Science or Computer Engineering
● Coding knowledge and experience with several languages: C, C++, Java, JavaScript, etc.
● Experience building high-performance, resilient, scalable, and well-engineered systems
● Experience with CI/CD and development best practices, instrumentation, and logging systems
● Experience using statistical computing languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
● Experience working with and creating data architectures.
● Good understanding of various machine learning and natural language processing technologies, such as
classification, information retrieval, clustering, knowledge graph, semi-supervised learning and ranking.

● Knowledge and experience of statistical and data mining techniques: GLM/regression, random forests, boosting, trees, text mining, social network analysis, etc.
● Knowledge of web services: Redshift, S3, Spark, DigitalOcean, etc.
● Knowledge of creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
● Knowledge of analyzing data from third-party providers: Google Analytics, Site Catalyst, Coremetrics, AdWords, Crimson Hexagon, Facebook Insights, etc.
● Knowledge of distributed data/computing tools: MapReduce, Hadoop, Hive, Spark, MySQL, Kafka, etc.
● Knowledge of visualizing/presenting data for stakeholders using QuickSight, Periscope, Business Objects, D3, ggplot, Tableau, etc.
● Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural
networks, etc.) and their real-world advantages/drawbacks.
● Knowledge of advanced statistical techniques and concepts (regression, properties of distributions,
statistical tests, and proper usage, etc.) and experience with applications.
● Experience building data pipelines that prep data for Machine learning and complete feedback loops.
● Knowledge of Machine Learning lifecycle and experience working with data scientists
● Experience with Relational databases and NoSQL databases
● Experience with workflow scheduling / orchestration such as Airflow or Oozie
● Working knowledge of current techniques and approaches in machine learning and statistical or
mathematical models
● Strong Data Engineering & ETL skills to build scalable data pipelines. Exposure to data streaming stack (e.g.
Kafka)
● Relevant experience in fine-tuning and optimizing ML (especially deep learning) models to bring down serving latency.
● Exposure to the ML model productionization stack (e.g. MLflow, Docker)
● Excellent exploratory data analysis skills to slice and dice data at scale using SQL in Redshift/BigQuery.
RITES
Posted by Suraj Kasat
Pune
2 - 5 yrs
₹3.5L - ₹5L / yr
Data Visualization
Data Analytics
Statistical Modeling
Probability
Clustering
+3 more
This job requires the candidate to have advanced knowledge of Python, including but not limited to data analytics, data visualization, statistical analysis, probabilistic analysis, and database management.

We will build a comprehensive backtesting platform for trading in the NSE F&O segment.

Any knowledge of financial markets is a bonus.
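At its simplest, a backtesting platform like the one described replays historical prices against a strategy and accumulates P&L. The following is an editorial toy (a moving-average crossover on made-up prices), not the platform's actual design:

```python
def backtest(prices, fast=2, slow=3):
    """Toy backtest: hold 1 unit while the fast SMA is above the slow SMA.
    Returns total P&L in price points. Illustrative only."""
    def sma(i, n):
        # Simple moving average of the n prices ending at index i.
        return sum(prices[i - n + 1:i + 1]) / n

    pnl = 0.0
    for i in range(slow - 1, len(prices) - 1):
        position = 1 if sma(i, fast) > sma(i, slow) else 0
        pnl += position * (prices[i + 1] - prices[i])  # next-bar return
    return pnl

prices = [100, 101, 102, 103, 104, 103]
pnl = backtest(prices)
```

A real F&O backtester would additionally model transaction costs, slippage, margin, and expiry/rollover of contracts.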
Intuitive Technology Partners
Posted by Aakriti Gupta
Remote, Ahmedabad, Pune, Gurugram, Chennai, Bengaluru (Bangalore), India
6 - 12 yrs
Best in industry
DevOps
Kubernetes
Docker
Terraform
Linux/Unix
+10 more

Intuitive is the fastest-growing top-tier Cloud Solutions and Services company, supporting global enterprise customers across the Americas, Europe, and the Middle East.

Intuitive is looking for highly talented, hands-on Cloud Infrastructure Architects to help accelerate our growing Professional Services consulting Cloud & DevOps practice. This is an excellent opportunity to join Intuitive's global, world-class technology teams, working with some of the best and brightest engineers while developing your skills and furthering your career working with some of the largest customers.

Job Description :

  • Extensive experience with Kubernetes (EKS/GKE) and k8s ecosystem tooling, e.g. Prometheus, ArgoCD, Grafana, Istio, etc.
  • Extensive AWS/GCP core infrastructure skills
  • Infrastructure/IaC automation and integration: Terraform
  • Kubernetes resources engineering and management
  • Experience with DevOps tools, CI/CD pipelines and release management
  • Good at creating documentation (runbooks, design documents, implementation plans)

Linux Experience :

  1. Namespace
  2. Virtualization
  3. Containers

 

Networking Experience

  1. Virtual networking
  2. Overlay networks
  3. Vxlans, GRE

 

Kubernetes Experience :

Should have experience bringing up a Kubernetes cluster manually, without using the kubeadm tool.

 

Observability                              

Experience in observability is a plus

 

Cloud automation :

Familiarity with cloud platforms (exclusively AWS) and DevOps tools such as Jenkins, Terraform, etc.

 

Bengaluru (Bangalore), Gurugram
1 - 6 yrs
₹7L - ₹15L / yr
Market Research
Data Analytics
Python
R Programming
Linear regression
+4 more

Company Profile:

The company is the world's No. 1 global management consulting firm.


Job Qualifications
  • Graduate or postgraduate degree in statistics, economics, econometrics, computer science, engineering, or mathematics
  • 2-5 years of relevant experience
  • Adept in forecasting, regression analysis and segmentation work
  • Understanding of modeling techniques, specifically logistic regression, linear regression, cluster analysis, CHAID, etc.
  • Statistical programming software experience in R & Python, comfortable working with large data sets; SAS & SQL are also preferred
  • Excellent analytical and problem-solving skills, including the ability to disaggregate issues, identify root causes and recommend solutions
  • Excellent time management skills
  • Good written and verbal communication skills; understanding of both written and spoken English
  • Strong interpersonal skills
  • Ability to act autonomously, bringing structure and organization to work
  • Creative and action-oriented mindset
  • Ability to interact in a fluid, demanding and unstructured environment where priorities evolve constantly and methodologies are regularly challenged
  • Ability to work under pressure and deliver on tight deadlines
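Logistic regression, one of the modeling techniques this role calls for, can be sketched from first principles with batch gradient descent. An editorial toy on 1-D synthetic data, not the firm's tooling:

```python
import math

def train_logistic(xs, ys, lr=0.5, epochs=200):
    """Logistic regression on 1-D inputs via batch gradient descent.
    Returns (weight, bias). Illustrative sketch, not production code."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(w * x + b)))  # sigmoid prediction
            gw += (p - y) * x                      # gradient wrt weight
            gb += (p - y)                          # gradient wrt bias
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

# Linearly separable toy data: negatives on the left, positives on the right.
xs = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)
predict = lambda x: 1 if 1 / (1 + math.exp(-(w * x + b))) > 0.5 else 0
```

In practice one would reach for statsmodels or scikit-learn rather than hand-rolled gradient descent, but the fitting loop is the same idea.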
Srijan Technologies
Posted by Adyasha Satpathy
Remote only
3 - 8 yrs
₹10L - ₹25L / yr
Data Science
Machine Learning (ML)
R Programming
Python
Deep Learning
+1 more

Job Responsibilities:-

  • Develop robust, scalable and maintainable machine learning models to answer business problems against large data sets.
  • Build methods for document clustering, topic modeling, text classification, named entity recognition, sentiment analysis, and POS tagging.
  • Perform elements of data cleaning, feature selection and feature engineering and organize experiments in conjunction with best practices.
  • Benchmark, apply, and test algorithms against success metrics. Interpret the results in terms of relating those metrics to the business process.
  • Work with development teams to ensure models can be implemented as part of a delivered solution replicable across many clients.
  • Knowledge of Machine Learning, NLP, Document Classification, Topic Modeling and Information Extraction with a proven track record of applying them to real problems.
  • Experience working with big data systems and big data concepts.
  • Ability to provide clear and concise communication both with other technical teams and non-technical domain specialists.
  • Strong team player: able both to make a strong individual contribution and to work as part of a team toward wider goals, a must in this dynamic environment.
  • Experience with noisy and/or unstructured textual data.

Knowledge of knowledge graphs and NLP techniques, including summarization, topic modelling, etc.

  • Strong coding ability with statistical analysis tools in Python or R, and general software development skills (source code management, debugging, testing, deployment, etc.)
  • Working knowledge of various text mining algorithms and their use-cases such as keyword extraction, PLSA, LDA, HMM, CRF, deep learning & recurrent ANN, word2vec/doc2vec, Bayesian modeling.
  • Strong understanding of text pre-processing and normalization techniques, such as tokenization, POS tagging and parsing, and how they work at a low level.
  • Excellent problem-solving skills.
  • Strong verbal and written communication skills.
  • Master's or higher in data mining or machine learning, or equivalent practical analytics/modelling experience.
  • Practical experience in using NLP related techniques and algorithms
  • Experience in open source coding and communities desirable.

Able to containerize models and associated modules and work in a microservices environment.
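Several of the text-mining techniques this posting lists (keyword extraction, document similarity) build on TF-IDF weighting. A minimal sketch in plain Python with a made-up toy corpus, offered only as illustration:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Map each tokenized document to a {term: tf-idf weight} dict."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse {term: weight} vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = [["shipment", "delayed", "courier"],
        ["courier", "shipment", "lost"],
        ["invoice", "payment", "overdue"]]
vecs = tfidf_vectors(docs)
# The two shipment-related documents score as more similar than unrelated ones
```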

Sapper.ai
Posted by Hemant Hingmire
Pune
3 - 10 yrs
₹5L - ₹20L / yr
DevOps
Jenkins
CI/CD
Docker
Amazon Web Services (AWS)
+2 more

DevOps 

Engineers : Min 3 to 5 Years 
Tech Leads : Min 6 to 10 Years 

 

  • Implementing & supporting CI/CD/CT pipelines at scale.  
  • Knowledge and experience using Chef, Puppet or Ansible automation to deploy and manage Linux systems in production and CI environments.
  • Extensive experience with Shell scripts (bash).  
  • Knowledge and practical experience of Jenkins for CI.  
  • Experienced in build & release management.  
  • Experience of deploying JVM based applications.  
  • Enterprise AWS deployment with sound knowledge on AWS & AWS security.  
  • Knowledge of encryption technologies: IPSec, SSL, SSH. 
  • Minimum of 2 years of experience as a Linux Systems Engineer (CentOS/Red Hat), ideally supporting highly available, 24x7 production environments.
  • DNS provisioning and maintenance.
  • Helpful skills: Knowledge of applications relying on Maven, Ant, Gradle, Spring Boot.  
  • Knowledge of app and server monitoring tools such as ELK/AppEngine. 
  • Excellent written, oral communication and interpersonal skills. 

 

Opt IT Technologies
Posted by Satish Narule
Bengaluru (Bangalore)
5 - 9 yrs
₹5L - ₹10L / yr
Linux administration
Linux/Unix
Shell Scripting
Bash
Python
+3 more
1. The engineer will be placed at the client location.
2. Can demonstrate an understanding of the following concepts:
● Common Linux utilities
● Server/application architecture and security hardening
● Virtualization
● Load balancing
● Networking concepts
3. Install, configure, manage and maintain all Linux server-based applications.
4. Strong knowledge of Red Hat clustering.
5. Ensure all Linux-based systems have the appropriate patches installed, including but not limited to security, application, and OS patches.
6. Ensure all Linux-based systems are running current versions of antivirus software with current virus definition files.
7. Follow the policies and procedures at the client location.
Egnyte
Posted by Prasanth Mulleti
Remote, Mumbai
4 - 10 yrs
Best in industry
Data Science
Data Scientist
Machine Learning (ML)
Time series
QoS
+7 more

Job Description

We are looking for an experienced engineer to join our data science team, who will help us design, develop, and deploy machine learning models in production. You will develop robust models, prepare their deployment into production in a controlled manner, while providing appropriate means to monitor their performance and stability after deployment.

 

What you'll do will include (but is not limited to):

  • Preparing datasets needed to train and validate our machine learning models
  • Anticipating and building solutions for problems that interrupt availability, performance, and stability in our systems, services, and products at scale.
  • Defining and implementing metrics to evaluate the performance of the models, both for computing performance (such as CPU & memory usage) and for ML performance (such as precision, recall, and F1)
  • Supporting the deployment of machine learning models on our infrastructure, including containerization, instrumentation, and versioning
  • Supporting the whole lifecycle of our machine learning models, including gathering data for retraining, A/B testing, and redeployments
  • Developing, testing, and evaluating tools for machine learning models deployment, monitoring, retraining.
  • Working closely within a distributed team to analyze and apply innovative solutions over billions of documents
  • Supporting solutions ranging from rule-based systems and classical ML techniques to the latest deep learning systems.
  • Partnering with cross-functional team members to bring large scale data engineering solutions to production
  • Communicating your approach and results to a wider audience through presentations

Your Qualifications:

  • Demonstrated success with machine learning in a SaaS or cloud environment, with hands-on knowledge of model creation and deployments in production at scale
  • Good knowledge of traditional machine learning methods and neural networks
  • Experience with practical machine learning modeling, especially on time-series forecasting, analysis, and causal inference.
  • Experience with data mining algorithms and statistical modeling techniques for anomaly detection in time series such as clustering, classification, ARIMA, and decision trees is preferred.
  • Ability to implement data import, cleansing and transformation functions at scale
  • Fluency in Docker, Kubernetes
  • Working knowledge of relational and dimensional data models with appropriate visualization techniques such as PCA.
  • Solid English skills to effectively communicate with other team members
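Anomaly detection in time series, mentioned in the qualifications above, can be illustrated with a rolling z-score detector. This is a toy baseline alongside the ARIMA and clustering approaches the posting names, with made-up data:

```python
import math

def zscore_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value deviates from the trailing-window mean
    by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mean = sum(past) / window
        var = sum((x - mean) ** 2 for x in past) / window
        std = math.sqrt(var)
        if std and abs(series[i] - mean) / std > threshold:
            flagged.append(i)
    return flagged

# A flat series with one injected spike at index 7
series = [10, 11, 10, 12, 11, 10, 11, 50, 10, 11]
flagged = zscore_anomalies(series)
```

A production detector would handle trend and seasonality (e.g. by differencing or an ARIMA residual) before thresholding.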

 

Due to the nature of the role, it would be nice if you also have:

  • Experience with large datasets and distributed computing, especially with the Google Cloud Platform
  • Fluency in at least one deep learning framework: PyTorch, TensorFlow / Keras
  • Experience with NoSQL and graph databases
  • Experience working in a Colab, Jupyter, or Python notebook environment
  • Some experience with monitoring, analysis, and alerting tools like New Relic, Prometheus, and the ELK stack
  • Knowledge of Java, Scala or Go-Lang programming languages
  • Familiarity with KubeFlow
  • Experience with transformers, for example the Hugging Face libraries
  • Experience with OpenCV

 

About Egnyte

In a content critical age, Egnyte fuels business growth by enabling content-rich business processes, while also providing organizations with visibility and control over their content assets. Egnyte’s cloud-native content services platform leverages the industry’s leading content intelligence engine to deliver a simple, secure, and vendor-neutral foundation for managing enterprise content across business applications and storage repositories. More than 16,000 customers trust Egnyte to enhance employee productivity, automate data management, and reduce file-sharing cost and complexity. Investors include Google Ventures, Kleiner Perkins Caufield & Byers, and Goldman Sachs. For more information, visit www.egnyte.com

 

#LI-Remote

Graphene Services Pte Ltd
Posted by Swetha Seshadri
Remote, Bengaluru (Bangalore)
3 - 7 yrs
Best in industry
PyTorch
Deep Learning
Natural Language Processing (NLP)
Python
Machine Learning (ML)
+8 more
ML Engineer
WE ARE GRAPHENE

Graphene is an award-winning AI company, developing customized insights and data solutions for corporate clients. With a focus on healthcare, consumer goods and financial services, our proprietary AI platform is disrupting market research with an approach that allows us to get into the mind of customers to a degree unprecedented in traditional market research.

Graphene was founded by corporate leaders from Microsoft and P&G and works closely with the Singapore Government & universities in creating cutting edge technology. We are gaining traction with many Fortune 500 companies globally.

Graphene has a 6-year track record of delivering financially sustainable growth and is one of the few start-ups which are self-funded, yet profitable and debt free.

We already have a strong bench of leaders in place. Now we are looking to groom more talent for our expansion into the US. Join us and take your growth, and ours, to the next level!

 

WHAT WILL THE ENGINEER-ML DO?

 

  • Primary Purpose: As part of a highly productive and creative AI (NLP) analytics team, optimize algorithms/models for performance and scalability, engineer & implement machine learning algorithms into services and pipelines to be consumed at web-scale
  • Daily Grind: Interface with data scientists, project managers, and the engineering team to achieve sprint goals on the product roadmap, and ensure healthy models, endpoints, and CI/CD.
  • Career Progression: Senior ML Engineer, ML Architect

 

YOU CAN EXPECT TO

  • Work in a product-development team capable of independently authoring software products.
  • Guide junior programmers, set up the architecture, and follow modular development approaches.
  • Design and develop code which is well documented.
  • Optimize the application for maximum speed and scalability.
  • Adhere to the best information security and DevOps practices.
  • Research and develop new approaches to problems.
  • Design and implement schemas and databases with respect to the AI application.
  • Cross-pollinate with other teams.

 

HARD AND SOFT SKILLS

Must Have

  • Problem-solving abilities
  • Extremely strong programming background: data structures and algorithms
  • Advanced Machine Learning: TensorFlow, Keras
  • Python, spaCy, NLTK, Word2Vec, Graph databases, Knowledge-graph, BERT (derived models), Hyperparameter tuning
  • Experience with OOPs and design patterns
  • Exposure to RDBMS/NoSQL
  • Test Driven Development Methodology

 

Good to Have

  • Working in cloud-native environments (preferably Azure)
  • Microservices
  • Enterprise Design Patterns
  • Microservices Architecture
  • Distributed Systems
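Hyperparameter tuning, one of the must-have skills above, reduces at its core to searching a grid of candidate settings and keeping the best-scoring combination. An editorial sketch with a made-up objective function, not Graphene's stack:

```python
from itertools import product

def grid_search(score_fn, grid):
    """Exhaustively evaluate every combination in `grid` (a dict of
    parameter name -> list of candidate values); return the best
    (params, score) pair by highest score."""
    names = list(grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective with a known optimum at lr=0.1, depth=3.
score = lambda lr, depth: -((lr - 0.1) ** 2 + (depth - 3) ** 2)
best, _ = grid_search(score, {"lr": [0.01, 0.1, 1.0], "depth": [2, 3, 4]})
```

Real tuning would score via cross-validation and often use random or Bayesian search instead of an exhaustive grid.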
RecoSense Infosolutions Pvt Ltd
Posted by Swathi Meda
Bengaluru (Bangalore)
2 - 5 yrs
₹1L - ₹1L / yr
DevOps
Cloud Computing
Clustering
Amazon Web Services (AWS)
Skills Required: Cloud scaling and performance management, cloud deployment, and infrastructure management for high-load scaling.
Experience: 2-5 years.
About Us: RecoSense is a data-science-led venture into recommendation, personalization and machine learning frameworks.
Email Id: [email protected]
Regards,
Swathi Udhyakumar