k-means clustering Jobs in Chennai


Apply to 11+ k-means clustering Jobs in Chennai on CutShort.io. Explore the latest k-means clustering Job opportunities across top companies like Google, Amazon & Adobe.

netmedscom

3 recruiters
Posted by Vijay Hemnath
Chennai
5 - 10 yrs
₹10L - ₹30L / yr
Machine Learning (ML)
Software deployment
CI/CD
Cloud Computing
Snowflake schema
+19 more

We are looking for an outstanding ML Architect (Deployments) with expertise in deploying Machine Learning solutions/models into production and scaling them to serve millions of customers. The ideal candidate has an adaptable and productive working style that fits a fast-moving environment.

 

Skills:

- 5+ years deploying Machine Learning pipelines in large enterprise production systems.

- Experience developing end-to-end ML solutions, from business hypothesis to deployment, with an understanding of the entire ML development life cycle.
- Expert in modern software development practices; solid experience with source control management and CI/CD.
- Proficient in designing relevant architecture/microservices to fulfil application integration, model monitoring, training/re-training, model management, model deployment, model experimentation/development, and alert mechanisms.
- Experience with public cloud platforms (Azure, AWS, GCP).
- Serverless services like Lambda, Azure Functions, and/or Cloud Functions.
- Orchestration services like Data Factory, Data Pipeline, and/or Dataflow.
- Data science workbench/managed services like Azure Machine Learning, SageMaker, and/or AI Platform.
- Data warehouse services like Snowflake, AWS Redshift, BigQuery, and/or Azure SQL DW.
- Distributed computing services like PySpark, EMR, and/or Databricks.
- Data storage services like Cloud Storage, S3, Blob Storage, and/or S3 Glacier.
- Data visualization tools like Power BI, Tableau, QuickSight, and/or Qlik.
- Proven experience serving up predictive algorithms and analytics through batch and real-time APIs.
- Solid working experience with software engineers, data scientists, product owners, business analysts, project managers, and business stakeholders to design the holistic solution.
- Strong technical acumen around automated testing.
- Extensive background in statistical analysis and modeling (distributions, hypothesis testing, probability theory, etc.)
- Strong hands-on experience with statistical packages and ML libraries (e.g., Python scikit-learn, Spark MLlib, etc.)
- Experience in effective data exploration and visualization (e.g., Excel, Power BI, Tableau, Qlik, etc.)
- Experience in developing and debugging in one or more of the languages Java, Python.
- Ability to work in cross-functional teams.
- Apply Machine Learning techniques in production including, but not limited to, neural nets, regression, decision trees, random forests, ensembles, SVM, Bayesian models, K-Means, etc.
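For illustration, the K-Means technique named in the last item can be sketched in plain Python. This is a toy implementation of Lloyd's algorithm for 2-D points, not what a production deployment would use (that would typically be scikit-learn or Spark MLlib, as listed above):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Toy Lloyd's algorithm on 2-D points; returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # pick k distinct points as initial centroids
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        for i, (x, y) in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: (x - centroids[c][0]) ** 2 + (y - centroids[c][1]) ** 2,
            )
        # Update step: move each centroid to the mean of its cluster.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = (
                    sum(x for x, _ in members) / len(members),
                    sum(y for _, y in members) / len(members),
                )
    return centroids, labels

# Two well-separated clusters; k-means recovers them.
pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
cents, labs = kmeans(pts, k=2)
print(labs[0] == labs[1], labs[2] == labs[3], labs[0] != labs[2])  # → True True True
```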

 

Roles and Responsibilities:

Deploying ML models into production, and scaling them to serve millions of customers.

Strong technical solutioning skills, with a deep understanding of API integrations, AI/Data Science, Big Data, and public cloud architectures/deployments in a SaaS environment.

Strong stakeholder relationship management skills - able to influence and manage the expectations of senior executives.
Strong networking skills, with the ability to build and maintain relationships with business, operations, and technology teams, both internally and externally.

Provide software design and programming support to projects.

 

 Qualifications & Experience:

Engineering graduates and postgraduates, preferably in Computer Science, from premier institutions, with 5-7 years of proven work experience as a Machine Learning Architect (Deployments) or in a similar role.

 

OJCommerce

3 recruiters
Posted by Rajalakshmi N
Chennai
2 - 5 yrs
₹7L - ₹12L / yr
Beautiful Soup
Web Scraping
Python
Selenium

Role : Web Scraping Engineer

Experience : 2 to 3 Years

Job Location : Chennai

About OJ Commerce: 


OJ Commerce (OJC), a rapidly expanding and profitable online retailer, is headquartered in Florida, USA, with a fully-functional office in Chennai, India. We deliver exceptional value to our customers by harnessing cutting-edge technology, fostering innovation, and establishing strategic brand partnerships to enable a seamless, enjoyable shopping experience featuring high-quality products at unbeatable prices. Our advanced, data-driven system streamlines operations with minimal human intervention.

Our extensive product portfolio encompasses over a million SKUs and more than 2,500 brands across eight primary categories. With a robust presence on major platforms such as Amazon, Walmart, Wayfair, Home Depot, and eBay, we directly serve consumers in the United States.

As we continue to forge new partner relationships, our flagship website, www.ojcommerce.com, has rapidly emerged as a top-performing e-commerce channel, catering to millions of customers annually.

Job Summary:

We are seeking a Web Scraping Engineer and Data Extraction Specialist who will play a crucial role in our data acquisition and management processes. The ideal candidate will be proficient in developing and maintaining efficient web crawlers capable of extracting data from large websites and storing it in a database. Strong expertise in Python, web crawling, and data extraction, along with familiarity with popular crawling tools and modules, is essential. Additionally, the candidate should demonstrate the ability to effectively utilize API tools for testing and retrieving data from various sources. Join our team and contribute to our data-driven success!


Responsibilities:


  • Develop and maintain web crawlers in Python.
  • Crawl large websites and extract data.
  • Store data in a database.
  • Analyze and report on data.
  • Work with other engineers to develop and improve our web crawling infrastructure.
  • Stay up to date on the latest crawling tools and techniques.
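As a small sketch of the extraction core of the responsibilities above, here is a self-contained example using only Python's standard library `html.parser`. The HTML snippet and field names are invented for illustration; a real crawler would use Scrapy, Selenium, Requests, or Beautiful Soup as listed below, plus fetching, politeness controls, and database storage, all omitted here:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href targets and link text from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._in_a = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._in_a = True
            self.links.append({"href": dict(attrs).get("href"), "text": ""})

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_a = False

    def handle_data(self, data):
        # Accumulate text only while inside an <a> element.
        if self._in_a and self.links:
            self.links[-1]["text"] += data.strip()

html = '<ul><li><a href="/p/1">Desk</a></li><li><a href="/p/2">Chair</a></li></ul>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)
# → [{'href': '/p/1', 'text': 'Desk'}, {'href': '/p/2', 'text': 'Chair'}]
```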



Required Skills and Qualifications:


  • Bachelor's degree in computer science or a related field.
  • 2-3 years of experience with Python and web crawling.
  • Familiarity with tools/modules such as Scrapy, Selenium, Requests, and Beautiful Soup.
  • API tools such as Postman or equivalent. 
  • Working knowledge of SQL.
  • Experience with web crawling and data extraction.
  • Strong problem-solving and analytical skills.
  • Ability to work independently and as part of a team.
  • Excellent communication and documentation skills.


What we Offer

• Competitive salary

• Medical Benefits/Accident Cover

• Flexible office working hours

• Fast-paced startup environment

Chennai
4 - 6 yrs
₹18L - ₹23L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm
+6 more

Roles & Responsibilities:

- Adopt novel and breakthrough Deep Learning/Machine Learning technology to solve real-world problems across different industries.
- Develop prototypes of machine learning models based on existing research papers.
- Utilize published/existing models to meet business requirements; tweak existing implementations to improve efficiency and adapt them for use-case variations.
- Optimize machine learning model training and inference time.
- Work closely with development and QA teams in transitioning prototypes to commercial products.
- Independently work end-to-end, from data collection and preparation/annotation to validation of outcomes.
- Define and develop ML infrastructure to improve the efficiency of ML development workflows.

Must Have:

- Experience in productizing and deploying ML solutions.
- AI/ML expertise areas: Computer Vision with Deep Learning; experience with object detection, classification, and recognition; document layout and understanding tasks; OCR/ICR.
- Thorough understanding of the full ML pipeline, from data collection to model building to inference.
- Experience with Python, OpenCV, and at least a few frameworks/libraries (TensorFlow / Keras / PyTorch / spaCy / fastText / Scikit-learn, etc.)
- 5+ years of relevant experience.
- Experience or knowledge in MLOps.

Good to Have:

- NLP: text classification, entity extraction, content summarization.
- AWS, Docker.

Chennai, Hyderabad
5 - 10 yrs
₹10L - ₹25L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more

Big Data with Cloud:

 

Experience : 5-10 years

 

Location : Hyderabad/Chennai

 

Notice period : 15-20 days Max

 

1.  Expertise in building AWS Data Engineering pipelines with AWS Glue -> Athena -> QuickSight

2.  Experience in developing lambda functions with AWS Lambda

3.  Expertise with Spark/PySpark – candidates should be hands-on with PySpark code and able to do transformations with Spark

4.  Should be able to code in Python and Scala.

5.  Snowflake experience will be a plus
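Point 2 above (developing Lambda functions) can be illustrated with a minimal handler sketch. The event shape, field names, and validation rule here are invented for the example, not taken from the job description or any real service payload:

```python
import json

def handler(event, context):
    """Hypothetical AWS Lambda handler: filters incoming records
    and returns an API-Gateway-style summary response."""
    records = event.get("records", [])
    # Keep only records with a positive amount (illustrative business rule).
    valid = [r for r in records if r.get("amount", 0) > 0]
    body = {
        "received": len(records),
        "valid": len(valid),
        "total": sum(r["amount"] for r in valid),
    }
    return {"statusCode": 200, "body": json.dumps(body)}

# Lambda handlers are plain functions, so they can be invoked directly in tests.
resp = handler({"records": [{"amount": 10}, {"amount": -2}, {"amount": 5}]}, None)
print(resp["statusCode"], json.loads(resp["body"]))
# → 200 {'received': 3, 'valid': 2, 'total': 15}
```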

DATASEE.AI, INC.

1 video
3 recruiters
Posted by BHASKAR RAGHUNATHAN
Chennai
2 - 4 yrs
₹7L - ₹14L / yr
Python
Machine Learning (ML)
Data Science
Computer Vision
OpenCV
+5 more

Do you want to help build real technology for a meaningful purpose? Do you want to contribute to making the world more sustainable and advanced, and to achieving extraordinary precision in analytics?


What is your role?


As a Computer Vision & Machine Learning Engineer at Datasee.AI, you’ll be core to the development of our robotic harvesting system’s visual intelligence. You’ll bring deep computer vision, machine learning, and software expertise while also thriving in a fast-paced, flexible, and energized startup environment. As an early team member, you’ll directly build our success, growth, and culture. You’ll hold a significant role, with the opportunity to grow it as Datasee.AI grows.


What you’ll do

  • You will be working with the core R&D team which drives the computer vision and image processing development. 
  • Build deep learning models for our data and for object detection on large-scale images.
  • Design and implement real-time algorithms for object detection, classification, tracking, and segmentation 
  • Coordinate and communicate within computer vision, software, and hardware teams to design and execute commercial engineering solutions. 
  • Automate the workflow process between the fast-paced data delivery systems. 

What we are looking for

  • 1 to 3+ years of professional experience in computer vision and machine learning.
  • Extensive use of Python 
  • Experience with Python libraries such as OpenCV, TensorFlow, and NumPy
  • Familiarity with deep learning libraries such as Keras and PyTorch
  • Worked on different CNN architectures such as FCN, R-CNN, Fast R-CNN and YOLO
  • Experienced in hyperparameter tuning, data augmentation, data wrangling, model optimization and model deployment
  • B.E./M.E/M.Sc. Computer Science/Engineering or relevant degree
  • Dockerization, AWS modules, and production-level modelling
  • Basic knowledge of GIS fundamentals would be an added advantage

Preferred Requirements

  • Experience with Qt, Desktop application development, Desktop Automation 
  • Knowledge of satellite image processing, Geographic Information Systems, GDAL, QGIS, and ArcGIS


About Datasee.AI:

Datasee.AI, Inc. is an AI-driven Image Analytics company offering Asset Management solutions for industries in the sectors of Renewable Energy, Infrastructure, Utilities & Agriculture. With core expertise in image processing, Computer Vision & Machine Learning, Datasee.AI’s solution provides value across the enterprise for all stakeholders through a data-driven approach.

 

With Sales & Operations based out of the US, Europe & India, Datasee.AI is a team of 32 people located across different geographies, with varied domain expertise and interests.

 

A focused and happy bunch of people who take tasks head-on and build scalable platforms and products.

 
 
 
Quess Corp Limited

6 recruiters
Posted by Anjali Singh
Noida, Delhi, Gurugram, Ghaziabad, Faridabad, Bengaluru (Bangalore), Chennai
5 - 8 yrs
₹1L - ₹15L / yr
Google Cloud Platform (GCP)
Python
Big Data
Data processing
Data Visualization

A GCP Data Analyst profile must have the below skill sets:

 

Genesys

5 recruiters
Posted by Manojkumar Ganesh
Chennai, Hyderabad
4 - 10 yrs
₹10L - ₹40L / yr
ETL
Data Warehousing
Business Intelligence (BI)
Big Data
PySpark
+6 more

Join our team

 

We're looking for an experienced and passionate Data Engineer to join our team. Our vision is to empower Genesys to leverage data to drive better customer and business outcomes. Our batch and streaming solutions turn vast amounts of data into useful insights. If you’re interested in working with the latest big data technologies, using industry-leading BI analytics and visualization tools, and bringing the power of data to our customers’ fingertips, then this position is for you!

 

Our ideal candidate thrives in a fast-paced environment, enjoys the challenge of highly complex business contexts (that are typically being defined in real time), and, above all, is passionate about data and analytics.

 

 

What you'll get to do

 

  • Work in an agile development environment, constantly shipping and iterating.
  • Develop high quality batch and streaming big data pipelines.
  • Interface with our Data Consumers, gathering requirements, and delivering complete data solutions.
  • Own the design, development, and maintenance of datasets that drive key business decisions.
  • Support, monitor and maintain the data models
  • Adopt and define the standards and best practices in data engineering including data integrity, performance optimization, validation, reliability, and documentation.
  • Keep up-to-date with advances in big data technologies and run pilots to design the data architecture to scale with the increased data volume using cloud services.
  • Triage many possible courses of action in a high-ambiguity environment, making use of both quantitative analysis and business judgment.

 

Your experience should include

 

  • Bachelor’s degree in CS or related technical field.
  • 5+ years of experience in data modelling, data development, and data warehousing.
  • Experience working with Big Data technologies (Hadoop, Hive, Spark, Kafka, Kinesis).
  • Experience with large scale data processing systems for both batch and streaming technologies (Hadoop, Spark, Kinesis, Flink).
  • Experience in programming using Python, Java or Scala.
  • Experience with data orchestration tools (Airflow, Oozie, Step Functions).
  • Solid understanding of database technologies including NoSQL and SQL.
  • Strong SQL query skills (experience with Snowflake Cloud Data Warehouse is a plus)
  • Work experience in Talend is a plus
  • Track record of delivering reliable data pipelines with solid test infrastructure, CI/CD, data quality checks, monitoring, and alerting.
  • Strong organizational and multitasking skills with ability to balance competing priorities.
  • Excellent communication (verbal and written) and interpersonal skills and an ability to effectively communicate with both business and technical teams.
  • An ability to work in a fast-paced environment where continuous innovation is occurring, and ambiguity is the norm.
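The data-quality-check expectation above can be sketched as a tiny, generic batch gate. The function, field names, and checks here are invented for the example; production pipelines would typically use a framework such as Great Expectations or Deequ:

```python
def run_quality_checks(rows):
    """Minimal batch data-quality gate: returns (clean_rows, issues)."""
    issues, clean = [], []
    seen_ids = set()
    for i, row in enumerate(rows):
        if row.get("id") is None:
            issues.append((i, "missing id")); continue
        if row["id"] in seen_ids:
            issues.append((i, "duplicate id")); continue
        if not isinstance(row.get("value"), (int, float)):
            issues.append((i, "non-numeric value")); continue
        seen_ids.add(row["id"])
        clean.append(row)
    return clean, issues

rows = [
    {"id": 1, "value": 3.5},      # clean
    {"id": 1, "value": 4.0},      # duplicate id
    {"id": None, "value": 2.0},   # missing id
    {"id": 2, "value": "oops"},   # non-numeric value
]
clean, issues = run_quality_checks(rows)
print(len(clean), [msg for _, msg in issues])
# → 1 ['duplicate id', 'missing id', 'non-numeric value']
```

A gate like this would run after each batch load, failing the pipeline or alerting when the issue count exceeds a threshold.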

 

Good to have

  • Experience with AWS big data technologies - S3, EMR, Kinesis, Redshift, Glue
netmedscom

3 recruiters
Posted by Vijay Hemnath
Chennai
2 - 5 yrs
₹6L - ₹25L / yr
Big Data
Hadoop
Apache Hive
Scala
Spark
+12 more

We are looking for an outstanding Big Data Engineer with experience setting up and maintaining Data Warehouses and Data Lakes for an organization. This role closely collaborates with the Data Science team, helping it build and deploy machine learning and deep learning models on big data analytics platforms.

Roles and Responsibilities:

  • Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
  • Develop programs in Scala and Python as part of data cleaning and processing.
  • Assemble large, complex data sets that meet functional/non-functional business requirements, fostering data-driven decision making across the organization.
  • Design and develop distributed, high-volume, high-velocity multi-threaded event processing systems.
  • Implement processes and systems to validate data, monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it.
  • Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Provide high operational excellence guaranteeing high availability and platform stability.
  • Closely collaborate with the Data Science team, helping it build and deploy machine learning and deep learning models on big data analytics platforms.

Skills:

  • Experience with Big Data pipeline, Big Data analytics, Data warehousing.
  • Experience with SQL/No-SQL, schema design and dimensional data modeling.
  • Strong understanding of Hadoop architecture and the HDFS ecosystem, with experience in a Big Data technology stack such as HBase, Hadoop, Hive, and MapReduce.
  • Experience in designing systems that process structured as well as unstructured data at large scale.
  • Experience in AWS/Spark/Java/Scala/Python development.
  • Strong skills in PySpark (Python & Spark): the ability to create, manage, and manipulate Spark DataFrames, and expertise in Spark query tuning and performance optimization.
  • Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
  • Prior exposure to streaming data sources such as Kafka.
  • Should have knowledge on Shell Scripting and Python scripting.
  • High proficiency in database skills (e.g., Complex SQL), for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
  • Experience with NoSQL databases such as Cassandra / MongoDB.
  • Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission.
  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
  • Experience building and deploying applications on on-premise and cloud-based infrastructure.
  • A good understanding of the machine learning landscape and concepts.
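As a small illustration of the "complex SQL for data preparation, cleaning, and wrangling" point above, here is a self-contained sketch using Python's built-in sqlite3 module. The table, columns, and data are invented for the example (a real warehouse would be Cassandra, MongoDB, or a SQL engine as listed):

```python
import sqlite3

# In-memory database standing in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL, status TEXT);
INSERT INTO orders VALUES
  (1, 'alice', 120.0, 'paid'), (2, 'alice', 80.0, 'paid'),
  (3, 'bob',   50.0, 'refunded'), (4, 'bob', 210.0, 'paid'),
  (5, 'carol', NULL, 'paid');
""")

# Data-wrangling query: handle NULLs, filter, aggregate, and rank customers.
query = """
SELECT customer,
       SUM(COALESCE(amount, 0)) AS total_paid,
       COUNT(*)                 AS n_orders
FROM orders
WHERE status = 'paid'
GROUP BY customer
ORDER BY total_paid DESC;
"""
for row in conn.execute(query):
    print(row)
# → ('bob', 210.0, 1)
#   ('alice', 200.0, 2)
#   ('carol', 0.0, 1)
```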

 

Qualifications and Experience:

Engineering graduates and postgraduates, preferably in Computer Science, from premier institutions, with 3-5 years of proven work experience as a Big Data Engineer or in a similar role.

Certifications:

Good to have at least one of the Certifications listed here:

    AZ 900 - Azure Fundamentals

    DP 200, DP 201, DP 203, AZ 204 - Data Engineering

    AZ 400 - DevOps Certification

Mobile Programming LLC

1 video
34 recruiters
Posted by Apurva kalsotra
Mohali, Gurugram, Bengaluru (Bangalore), Chennai, Hyderabad, Pune
3 - 8 yrs
₹3L - ₹9L / yr
Data Warehouse (DWH)
Big Data
Spark
Apache Kafka
Data engineering
+14 more
Day-to-day Activities

  • Develop complex queries, pipelines, and software programs to solve analytics and data mining problems.
  • Interact with other data scientists, product managers, and engineers to understand business problems and technical requirements in order to deliver predictive and smart data solutions.
  • Prototype new applications or data systems.
  • Lead data investigations to troubleshoot data issues that arise along the data pipelines.
  • Collaborate with different product owners to incorporate data science solutions.
  • Maintain and improve the data science platform.

Must Have

  • BS/MS/PhD in Computer Science, Electrical Engineering, or related disciplines.
  • Strong fundamentals: data structures, algorithms, databases.
  • 5+ years of software industry experience with 2+ years in analytics, data mining, and/or data warehousing.
  • Fluency with Python.
  • Experience developing web services using REST approaches.
  • Proficiency with SQL/Unix/Shell.
  • Experience in DevOps (CI/CD, Docker, Kubernetes).
  • Self-driven, challenge-loving, detail-oriented team player with excellent communication skills and the ability to multi-task and manage expectations.

Preferred

  • Industry experience with big data processing technologies such as Spark and Kafka.
  • Experience with machine learning algorithms and/or R a plus.
  • Experience in Java/Scala a plus.
  • Experience with MPP analytics engines like Vertica.
  • Experience with data integration tools like Pentaho/SAP Analytics Cloud.
Streetmark
Agency job
via STREETMARK Info Solutions by Mohan Guttula
Remote, Bengaluru (Bangalore), Chennai
3 - 9 yrs
₹3L - ₹20L / yr
SCCM
PL/SQL
APPV
Stani's Python Editor
AWS Simple Notification Service (SNS)
+3 more

Hi All,

We are hiring Data Engineer for one of our client for Bangalore & Chennai Location.


Strong knowledge of SCCM, App-V, and Intune infrastructure.

Powershell/VBScript/Python,

Windows Installer

Knowledge of Windows 10 registry

Application Repackaging

Application Sequencing with App-v

Deploying and troubleshooting applications, packages, and Task Sequences.

Security patch deployment and remediation

Windows operating system patching and defender updates

 

Thanks,
Mohan.G

codeMantra

3 recruiters
Posted by saranya v
Chennai
14 - 18 yrs
₹20L - ₹25L / yr
Machine Learning (ML)
Data Science
R Programming
Python

ML ARCHITECT

 

Job Overview

We are looking for an ML Architect to help us discover the information hidden in vast amounts of data and make smarter decisions to deliver even better products. Your primary focus will be on applying data mining techniques, performing statistical analysis, and building high-quality prediction systems integrated with our products. Candidates must have strong experience using a variety of data mining and data analysis methods, building and implementing models, using and creating algorithms, and creating and running simulations. They must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and for working with stakeholders to improve business outcomes. The role also involves automating the identification of textual data, along with its properties and structure, from various types of documents.

 

Responsibilities

  • Selecting features, building and optimizing classifiers using machine learning techniques
  • Data mining using state-of-the-art methods
  • Enhancing data collection procedures to include information that is relevant for building analytic systems
  • Processing, cleansing, and verifying the integrity of data used for analysis
  • Creating automated anomaly detection systems and constant tracking of its performance
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Secure and manage when needed GPU cluster resources for events
  • Write comprehensive internal feedback reports and find opportunities for improvements
  • Manage GPU instances/machines to increase the performance and efficiency of the ML/DL model.
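The anomaly-detection responsibility above can be sketched with a simple z-score baseline in plain Python. The data and threshold are illustrative only; real systems would also track the detector's own performance over time, as the list notes:

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Flag indices of points more than `threshold` standard
    deviations from the mean (a deliberately simple baseline)."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mu) > threshold * sigma]

# One obvious spike among otherwise stable readings.
readings = [10.1, 9.8, 10.0, 10.2, 9.9, 25.0, 10.0, 9.7]
print(zscore_anomalies(readings, threshold=2.0))  # → [5]
```

Note that the outlier itself inflates the sample standard deviation, which is why robust variants (median/MAD-based scores) are often preferred in practice.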

 

Skills and Qualifications

  • Strong Hands-on experience in Python Programming
  • Working experience with Computer Vision models - Object Detection Model, Image Classification
  • Good experience in feature extraction, feature selection techniques and transfer learning
  • Working experience in building deep learning NLP models for text classification, and in image analytics (CNN, RNN, LSTM).
  • Working Experience in any of the AWS/GCP cloud platforms, exposure in fetching data from various sources.
  • Good experience in exploratory data analysis, data visualisation, and other data preprocessing techniques.
  • Knowledge in any one of the DL frameworks like Tensorflow, Pytorch, Keras, Caffe
  • Good knowledge of statistics, data distributions, and supervised and unsupervised machine learning algorithms.
  • Exposure to OpenCV; familiarity with GPUs and CUDA.
  • Experience with NVIDIA software for cluster management and provisioning such as nvsm, dcgm and DeepOps.
  • We are looking for a candidate with 14+ years of experience who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field, and who has experience using the following software/tools:
  • Experience with big data tools: Hadoop, Spark, Kafka, etc.
  • Experience with AWS cloud services: EC2, RDS, AWS SageMaker (added advantage)
  • Experience with object-oriented/functional scripting languages such as Python, Java, C++, Scala, etc.

 

 
