
50+ Data Science Jobs in India

Apply to 50+ Data Science Jobs on CutShort.io. Find your next job, effortlessly. Browse Data Science Jobs and apply today!

LetsIntern

Posted by Ashish Singh
Remote only
0 - 1 yrs
₹1L - ₹2L / yr
Data Science
Artificial Intelligence (AI)


Job Description:

As a Data Science Intern, you will collaborate with our data science and analytics teams to work on meaningful projects involving data analysis, predictive modeling, and statistical modeling. You will have the opportunity to apply your academic knowledge in a practical, fast-paced environment, contribute to key data-driven projects, and gain valuable experience with industry-leading tools and technologies.


Responsibilities:


  • Assist in collecting, cleaning, and preprocessing data from various sources.
  • Perform exploratory data analysis to identify trends, patterns, and anomalies.
  • Develop and implement machine learning models and algorithms.
  • Create data visualizations and reports to communicate findings to stakeholders.
  • Collaborate with team members on data-driven projects and research.
  • Participate in meetings and contribute to discussions on project progress and strategy.
  • Work with large datasets to clean, preprocess, and analyze data.
  • Build and deploy statistical and machine learning models to generate actionable insights.
  • Conduct exploratory data analysis (EDA) to uncover trends, patterns, and correlations.
  • Assist in the creation of data visualizations and dashboards for reporting insights.
  • Support the development and improvement of data pipelines and algorithms.
  • Collaborate with cross-functional teams to understand data needs and translate them into actionable analytics solutions.
  • Contribute to the documentation and presentation of results, findings, and recommendations.
  • Participate in team meetings, brainstorming sessions, and project discussions.
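The cleaning and exploratory-analysis tasks above can be sketched in a few lines of pandas; the columns and values here are invented for illustration, not taken from any real project:

```python
import pandas as pd
import numpy as np

# Hypothetical raw data with the kinds of issues an intern would clean up:
# missing values, duplicate rows, and inconsistent casing.
raw = pd.DataFrame({
    "city": ["Delhi", "delhi", "Mumbai", "Mumbai", None],
    "sales": [100.0, 100.0, np.nan, 250.0, 80.0],
})

clean = (
    raw
    .assign(city=raw["city"].str.title())   # normalise casing ("delhi" -> "Delhi")
    .dropna(subset=["city"])                # drop rows with no city
    .drop_duplicates()                      # remove exact duplicate rows
    .assign(sales=lambda d: d["sales"].fillna(d["sales"].median()))
)

# A first exploratory summary: per-city totals.
summary = clean.groupby("city")["sales"].sum()
print(summary)
```

The same pattern (normalise, drop, impute, aggregate) generalises to most tabular cleaning work.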


Duration: 3 months (with the possibility of extension up to 6 months)

MODE: Work From Home (Online)


Requirements:


  • Any graduate, pass-out, or fresher can apply.
  • Currently pursuing a Bachelor's or Master’s degree in Data Science, Computer Science, Mathematics, Statistics, or a related field.
  • Proficiency in programming languages such as Python, R, or SQL.
  • Strong foundation in statistics, probability, and data analysis techniques.


Benefits


  • Internship certificate
  • Letter of recommendation
  • Performance-based stipend
  • Part-time work from home (2-3 hours per day)
  • 5 days a week, fully flexible shift

Byteridge

Posted by Sweety S
Remote only
3 - 6 yrs
₹10L - ₹18L / yr
Data Science
Generative AI
Python
Amazon Web Services (AWS)
Large Language Models (LLM) tuning

Job Description

We are looking for a Data Scientist with 3–5 years of experience in data analysis, statistical modeling, and machine learning to drive actionable business insights. This role involves translating complex business problems into analytical solutions, building and evaluating ML models, and communicating insights through compelling data stories. The ideal candidate combines strong statistical foundations with hands-on experience across modern data platforms and cross-functional collaboration.

What will you need to be successful in this role?

Core Data Science Skills

• Strong foundation in statistics, probability, and mathematical modeling

• Expertise in Python for data analysis (NumPy, Pandas, Scikit-learn, SciPy)

• Strong SQL skills for data extraction, transformation, and complex analytical queries

• Experience with exploratory data analysis (EDA) and statistical hypothesis testing

• Proficiency in data visualization tools (Matplotlib, Seaborn, Plotly, Tableau, or Power BI)

• Strong understanding of feature engineering and data preprocessing techniques

• Experience with A/B testing, experimental design, and causal inference
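As an illustration of the A/B-testing and hypothesis-testing skills listed above, here is a minimal two-proportion z-test written with only the standard library; the conversion counts are made up:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates (pooled variance)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, computed via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant A converts 200/5000, variant B 250/5000.
z, p = two_proportion_ztest(200, 5000, 250, 5000)
print(round(z, 3), round(p, 4))
```

In practice one would typically reach for `statsmodels` or `scipy.stats`, but the underlying arithmetic is exactly this.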

Machine Learning & Analytics

• Strong experience building and deploying ML models (regression, classification, clustering)

• Knowledge of ensemble methods, gradient boosting (XGBoost, LightGBM, CatBoost)

• Understanding of time series analysis and forecasting techniques

• Experience with model evaluation metrics and cross-validation strategies

• Familiarity with dimensionality reduction techniques (PCA, t-SNE, UMAP)

• Understanding of bias-variance tradeoff and model interpretability

• Experience with hyperparameter tuning and model optimization
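Cross-validation and hyperparameter tuning, mentioned above, can be sketched end-to-end with NumPy alone, here using closed-form ridge regression on synthetic data (the candidate regularisation values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y = 3*x0 - 2*x1 + small noise.
X = rng.normal(size=(200, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=200)

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: w = (X^T X + lam*I)^-1 X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def kfold_mse(X, y, lam, k=5):
    """Mean squared error averaged over k held-out folds."""
    folds = np.array_split(np.arange(len(y)), k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        w = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((X[test] @ w - y[test]) ** 2))
    return float(np.mean(errs))

# Pick the regularisation strength with the lowest cross-validated error.
best = min([0.01, 1.0, 100.0], key=lambda lam: kfold_mse(X, y, lam))
print(best)
```

Libraries like scikit-learn wrap this loop (e.g. grid search over folds), but the selection logic is the same.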

GenAI & Advanced Analytics

• Working knowledge of LLMs and their application to business problems

• Experience with prompt engineering for analytical tasks

• Understanding of embeddings and semantic similarity for analytics

• Familiarity with NLP techniques (text classification, sentiment analysis, entity extraction)

• Experience integrating AI/ML models into analytical workflows
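Embeddings and semantic similarity, listed above, reduce to comparing vectors. A toy sketch with hand-made 4-dimensional vectors follows; real embeddings come from a model (e.g. a sentence-transformer or a provider API) and have hundreds of dimensions:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical document embeddings (invented numbers, for illustration only).
docs = {
    "refund policy":  [0.9, 0.1, 0.0, 0.2],
    "shipping times": [0.1, 0.8, 0.3, 0.0],
    "return an item": [0.8, 0.2, 0.1, 0.3],
}
query = [0.85, 0.15, 0.05, 0.25]

# Rank documents by similarity to the query vector.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])
```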

Data Platforms & Tools

• Experience with cloud data platforms (Snowflake, Databricks, BigQuery)

• Proficiency in Jupyter notebooks and collaborative development environments

• Familiarity with version control (Git) and collaborative workflows

• Experience working with large datasets and distributed computing (Spark/PySpark)

• Understanding of data warehousing concepts and dimensional modeling

• Experience with cloud platforms (AWS, Azure, or GCP)

Business Acumen & Communication

• Strong ability to translate business problems into analytical frameworks

• Experience presenting complex analytical findings to non-technical stakeholders

• Ability to create compelling data stories and visualizations

• Track record of driving business decisions through data-driven insights

• Experience working with cross-functional teams (Product, Engineering, Business)

• Strong documentation skills for analytical methodologies and findings

Good to have

• Experience with deep learning frameworks (TensorFlow, PyTorch, Keras)

• Knowledge of reinforcement learning and optimization techniques

• Familiarity with graph analytics and network analysis

• Experience with MLOps and model deployment pipelines

• Understanding of model monitoring and performance tracking in production

• Knowledge of AutoML tools and automated feature engineering

• Experience with real-time analytics and streaming data

• Familiarity with causal ML and uplift modeling

• Publications or contributions to data science community

• Kaggle competitions or open-source contributions

• Experience in specific domains (finance, healthcare, e-commerce)

Well established Fintech Co.

Agency job
via Infinium Associate by Toshi Srivastava
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
8 - 12 yrs
₹30L - ₹35L / yr
Data Science
Python
Artificial Intelligence (AI)
Google Vertex AI
Google Cloud Platform (GCP)

We are looking for a visionary and hands-on Head of Data Science and AI with at least 6 years of experience to lead our data strategy and analytics initiatives. In this pivotal role, you will take full ownership of the end-to-end technology stack, driving a data-analytics-driven business roadmap that delivers tangible ROI. You will not only guide high-level strategy but also remain hands-on in model design and deployment, ensuring our data capabilities directly empower executive decision-making.

If you are passionate about leveraging AI and Data to transform financial services, we invite you to lead our data transformation journey.

Key Responsibilities

Strategic Leadership & Roadmap

  • End-to-End Tech Stack Ownership: Define, own, and evolve the complete data science and analytics technology stack to ensure scalability and performance.
  • Business Roadmap & ROI: Develop and execute a data analytics-driven business roadmap, ensuring every initiative is aligned with organizational goals and delivers measurable Return on Investment (ROI).
  • Executive Decision Support: Create and present high-impact executive decision packs, providing actionable insights that drive key business strategies.

Model Design & Deployment (Hands-on)

  • Hands-on Development: Lead by example with hands-on involvement in AI modeling, machine learning model design, and algorithm development using Python.
  • Deployment & Ops: Oversee and execute the deployment of models into production environments, ensuring reliability, scalability, and seamless integration with existing systems.
  • Leverage expert-level knowledge of Google Cloud Agentic AI, Vertex AI and BigQuery to build advanced predictive models and data pipelines.
  • Develop business dashboards for various sales channels and drive data-driven decision-making to improve sales and reduce costs.

Governance & Quality

  • Data Governance: Establish and enforce robust data governance frameworks, ensuring data accuracy, security, consistency, and compliance across the organization.
  • Best Practices: Champion best practices in coding, testing, and documentation to build a world-class data engineering culture.

Collaboration & Innovation

  • Work closely with Product, Engineering, and Business leadership to identify opportunities for AI/ML intervention.
  • Stay ahead of industry trends in AI, Generative AI, and financial modeling to keep Bajaj Capital at the forefront of innovation.

Must-Have Skills & Experience

Experience:

  • At least 7 years of industry experience in Data Science, Machine Learning, or a related field.
  • Proven track record of applying AI and leading data science teams or initiatives that resulted in significant business impact.

Technical Proficiency:

  • Core Languages: Proficiency in Python is mandatory, with strong capabilities in libraries such as Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch.
  • Cloud Data Stack: Expert-level command of Google Cloud Platform (GCP), specifically Agentic AI, Vertex AI and BigQuery.
  • AI & Analytics Stack: Deep understanding of the modern AI and Data Analytics stack, including data warehousing, ETL/ELT pipelines, and MLOps.
  • Visualization: PowerBI in combination with custom web/mobile applications.

Leadership & Soft Skills:

  • Ability to translate complex technical concepts into clear business value for stakeholders.
  • Strong ownership mindset with the ability to manage end-to-end project lifecycles.
  • Experience in creating governance structures and executive-level reporting.

Good-to-Have / Plus

  • Domain Expertise: Prior experience in the BFSI domain (Wealth Management, Insurance, Mutual Funds, or Fintech).
  • Certifications: Google Professional Data Engineer or Google Professional Machine Learning Engineer certifications.
  • Advanced AI: Experience with Generative AI (LLMs), RAG architectures, and real-time analytics.


Healthcare Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹25L - ₹30L / yr
MLOps
Generative AI
Python
Natural Language Processing (NLP)
Machine Learning (ML)

JOB DETAILS:

* Job Title: Principal Data Scientist

* Industry: Healthcare

* Salary: Best in Industry

* Experience: 6-10 years

* Location: Bengaluru

 

Preferred Skills: Generative AI, NLP & ASR, Transformer Models, Cloud Deployment, MLOps

 

Criteria:

  1. Candidate must have 7+ years of experience in ML, Generative AI, NLP, ASR, and LLMs (preferably healthcare).
  2. Candidate must have strong Python skills with hands-on experience in PyTorch/TensorFlow and transformer model fine-tuning.
  3. Candidate must have experience deploying scalable AI solutions on AWS/Azure/GCP with MLOps, Docker, and Kubernetes.
  4. Candidate must have hands-on experience with LangChain, OpenAI APIs, vector databases, and RAG architectures.
  5. Candidate must have experience integrating AI with EHR/EMR systems, ensuring HIPAA/HL7/FHIR compliance, and leading AI initiatives.

 

Job Description

Principal Data Scientist

(Healthcare AI | ASR | LLM | NLP | Cloud | Agentic AI)

 

Job Details

  • Designation: Principal Data Scientist (Healthcare AI, ASR, LLM, NLP, Cloud, Agentic AI)
  • Location: Hebbal Ring Road, Bengaluru
  • Work Mode: Work from Office
  • Shift: Day Shift
  • Reporting To: SVP
  • Compensation: Best in the industry (for suitable candidates)

 

Educational Qualifications

  • Ph.D. or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field
  • Technical certifications in AI/ML, NLP, or Cloud Computing are an added advantage

 

Experience Required

  • 7+ years of experience solving real-world problems using:
  • Natural Language Processing (NLP)
  • Automatic Speech Recognition (ASR)
  • Large Language Models (LLMs)
  • Machine Learning (ML)
  • Preferably within the healthcare domain
  • Experience in Agentic AI, cloud deployments, and fine-tuning transformer-based models is highly desirable

Role Overview

This position is part of a healthcare division of Focus Group specializing in medical coding and scribing.

We are building a suite of AI-powered, state-of-the-art web and mobile solutions designed to:

  • Reduce administrative burden in EMR data entry
  • Improve provider satisfaction and productivity
  • Enhance quality of care and patient outcomes

Our solutions combine cutting-edge AI technologies with live scribing services to streamline clinical workflows and strengthen clinical decision-making.

The Principal Data Scientist will lead the design, development, and deployment of cognitive AI solutions, including advanced speech and text analytics for healthcare applications. The role demands deep expertise in generative AI, classical ML, deep learning, cloud deployments, and agentic AI frameworks.

 

Key Responsibilities

AI Strategy & Solution Development

  • Define and develop AI-driven solutions for speech recognition, text processing, and conversational AI
  • Research and implement transformer-based models (Whisper, LLaMA, GPT, T5, BERT, etc.) for speech-to-text, medical summarization, and clinical documentation
  • Develop and integrate Agentic AI frameworks enabling multi-agent collaboration
  • Design scalable, reusable, and production-ready AI frameworks for speech and text analytics

Model Development & Optimization

  • Fine-tune, train, and optimize large-scale NLP and ASR models
  • Develop and optimize ML algorithms for speech, text, and structured healthcare data
  • Conduct rigorous testing and validation to ensure high clinical accuracy and performance
  • Continuously evaluate and enhance model efficiency and reliability

Cloud & MLOps Implementation

  • Architect and deploy AI models on AWS, Azure, or GCP
  • Deploy and manage models using containerization, Kubernetes, and serverless architectures
  • Design and implement robust MLOps strategies for lifecycle management

Integration & Compliance

  • Ensure compliance with healthcare standards such as HIPAA, HL7, and FHIR
  • Integrate AI systems with EHR/EMR platforms
  • Implement ethical AI practices, regulatory compliance, and bias mitigation techniques

Collaboration & Leadership

  • Work closely with business analysts, healthcare professionals, software engineers, and ML engineers
  • Implement LangChain, OpenAI APIs, vector databases (Pinecone, FAISS, Weaviate), and RAG architectures
  • Mentor and lead junior data scientists and engineers
  • Contribute to AI research, publications, patents, and long-term AI strategy

 

Required Skills & Competencies

  • Expertise in Machine Learning, Deep Learning, and Generative AI
  • Strong Python programming skills
  • Hands-on experience with PyTorch and TensorFlow
  • Experience fine-tuning transformer-based LLMs (GPT, BERT, T5, LLaMA, etc.)
  • Familiarity with ASR models (Whisper, Canary, wav2vec, DeepSpeech)
  • Experience with text embeddings and vector databases
  • Proficiency in cloud platforms (AWS, Azure, GCP)
  • Experience with LangChain, OpenAI APIs, and RAG architectures
  • Knowledge of agentic AI frameworks and reinforcement learning
  • Familiarity with Docker, Kubernetes, and MLOps best practices
  • Understanding of FHIR, HL7, HIPAA, and healthcare system integrations
  • Strong communication, collaboration, and mentoring skills
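The retrieval step of the RAG architectures listed above can be sketched with toy bag-of-words "embeddings"; a production system would use a neural embedding model and a vector database such as FAISS, Pinecone, or Weaviate, and the corpus snippets below are invented:

```python
from collections import Counter
import math

def embed(text):
    """Toy 'embedding': a bag-of-words Counter. Real systems use a neural model."""
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical knowledge-base snippets (e.g. de-identified clinical notes).
corpus = [
    "patient reports chest pain and shortness of breath",
    "follow up visit for diabetes medication adjustment",
    "mri of the knee shows a meniscus tear",
]

def retrieve(query, k=1):
    """Return the k corpus snippets most similar to the query."""
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

context = retrieve("shortness of breath with chest pain")
# In a full RAG pipeline the retrieved context is prepended to the LLM prompt:
prompt = f"Context: {context[0]}\n\nQuestion: summarise the complaint."
print(context[0])
```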

 

 

Bengaluru (Bangalore)
7 - 10 yrs
₹27L - ₹32L / yr
Data Science

Candidate must have 7+ years of experience in ML, Generative AI, NLP, ASR, and LLMs (preferably healthcare).


Candidate must have strong Python skills with hands-on experience in PyTorch/TensorFlow and transformer model fine-tuning.


Candidate must have experience deploying scalable AI solutions on AWS/Azure/GCP with MLOps, Docker, and Kubernetes.


Candidate must have hands-on experience with LangChain, OpenAI APIs, vector databases, and RAG architectures.


Candidate must have experience integrating AI with EHR/EMR systems, ensuring HIPAA/HL7/FHIR compliance, and leading AI initiatives.

AI Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
5 - 17 yrs
₹34L - ₹45L / yr
Dremio
Data engineering
Business Intelligence (BI)
Tableau
PowerBI

Review Criteria:

  • Strong Dremio / Lakehouse Data Architect profile
  • 5+ years of experience in Data Architecture / Data Engineering, with minimum 3+ years hands-on in Dremio
  • Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
  • Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts
  • Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
  • Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
  • Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
  • Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
  • Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies


Preferred:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments


Role & Responsibilities:

You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.


  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.


Ideal Candidate:

  • Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.


Preferred:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
  • Exposure to Snowflake, Databricks, or BigQuery environments.
  • Experience in high-tech, manufacturing, or enterprise data modernization programs.
Chennai, Bengaluru (Bangalore)
4 - 10 yrs
Best in industry
Data Science
Python
Forecasting
Machine Learning (ML)

Hi,


Greetings from Ampera!


We are looking for a Data Scientist with strong Python and forecasting experience.


Title: Data Scientist – Python & Forecasting

Experience: 4 to 7 years

Location: Chennai/Bengaluru

Type of hire: PWD and Non-PWD

Employment Type: Full Time

Notice Period: Immediate joiner

Working hours: 09:00 a.m. to 06:00 p.m.

Workdays: Mon-Fri

 

 

Job Description:

 

We are looking for an experienced Data Scientist with strong expertise in Python programming and forecasting techniques. The ideal candidate should have hands-on experience building predictive and time-series forecasting models, working with large datasets, and deploying scalable solutions in production environments.


Key Responsibilities

  • Develop and implement forecasting models (time-series and machine learning based).
  • Perform exploratory data analysis (EDA), feature engineering, and model validation.
  • Build, test, and optimize predictive models for business use cases such as demand forecasting, revenue prediction, trend analysis, etc.
  • Design, train, validate, and optimize machine learning models for real-world business use cases.
  • Apply appropriate ML algorithms based on business problems and data characteristics
  • Write clean, modular, and production-ready Python code.
  • Work extensively with Python packages and libraries for data processing and modeling.
  • Collaborate with Data Engineers and stakeholders to deploy models into production.
  • Monitor model performance and improve accuracy through continuous tuning.
  • Document methodologies, assumptions, and results clearly for business teams.

 

Technical Skills Required:

Programming

  • Strong proficiency in Python
  • Experience with Pandas, NumPy, Scikit-learn

Forecasting & Modelling

  • Hands-on experience in Time Series Forecasting (ARIMA, SARIMA, Prophet, etc.)
  • Experience with ML-based forecasting models (XGBoost, LightGBM, Random Forest, etc.)
  • Understanding of seasonality, trend decomposition, and statistical modeling
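One of the simplest forecasting techniques underlying the libraries named above is simple exponential smoothing; a minimal standard-library sketch follows (the demand figures are made up):

```python
def ses_forecast(series, alpha=0.5):
    """Simple exponential smoothing: level_t = alpha*y_t + (1-alpha)*level_{t-1}.
    The one-step-ahead forecast is the final smoothed level."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# Hypothetical monthly demand figures.
demand = [100, 102, 101, 105, 107, 106]
print(round(ses_forecast(demand, alpha=0.5), 2))
```

ARIMA, SARIMA, and Prophet add autoregressive, seasonal, and trend structure on top of this kind of recursive update; `alpha` plays the same role as their smoothing/learning parameters.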

Data & Deployment

  • Experience handling structured and large datasets
  • SQL proficiency
  • Exposure to model deployment (API-based deployment preferred)
  • Knowledge of MLOps concepts is an added advantage

Tools (Preferred)

  • TensorFlow / PyTorch (optional)
  • Airflow / MLflow
  • Cloud platforms (AWS / Azure / GCP)


Educational Qualification

  • Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, Mathematics, or related field.


Key Competencies

  • Strong analytical and problem-solving skills
  • Ability to communicate insights to technical and non-technical stakeholders
  • Experience working in agile or fast-paced environments


Accessibility & Inclusion Statement

We are committed to creating an inclusive environment for all employees, including persons with disabilities. Reasonable accommodations will be provided upon request.

Equal Opportunity Employer (EOE) Statement

Ampera Technologies is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.

Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
5 - 8 yrs
₹11L - ₹20L / yr
PySpark
Apache Kafka
Data architecture
Amazon Web Services (AWS)
EMR

JOB DETAILS:

* Job Title: Lead II - Software Engineering - AWS, Apache Spark (PySpark/Scala), Apache Kafka

* Industry: Global digital transformation solutions provider

* Salary: Best in Industry

* Experience: 5-8 years

* Location: Hyderabad

 

Job Summary

We are seeking a skilled Data Engineer to design, build, and optimize scalable data pipelines and cloud-based data platforms. The role involves working with large-scale batch and real-time data processing systems, collaborating with cross-functional teams, and ensuring data reliability, security, and performance across the data lifecycle.


Key Responsibilities

ETL Pipeline Development & Optimization

  • Design, develop, and maintain complex end-to-end ETL pipelines for large-scale data ingestion and processing.
  • Optimize data pipelines for performance, scalability, fault tolerance, and reliability.

Big Data Processing

  • Develop and optimize batch and real-time data processing solutions using Apache Spark (PySpark/Scala) and Apache Kafka.
  • Ensure fault-tolerant, scalable, and high-performance data processing systems.
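The core of the windowed stream aggregation done with Spark Structured Streaming or Kafka consumers can be illustrated in plain Python; the event timestamps and keys here are hypothetical:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Group (timestamp, key) events into fixed, non-overlapping time windows
    and count occurrences per (window_start, key) - the core operation behind
    windowed aggregations in streaming engines."""
    counts = defaultdict(int)
    for ts, key in events:
        # Bucket each event into the window containing its timestamp.
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

# Hypothetical click events: (unix_timestamp, user_id).
events = [(0, "u1"), (30, "u1"), (59, "u2"), (61, "u1"), (125, "u2")]
print(tumbling_window_counts(events))
```

Real engines add watermarking for late data and distribute this grouping across partitions, but the bucketing arithmetic is the same.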

Cloud Infrastructure Development

  • Build and manage scalable, cloud-native data infrastructure on AWS.
  • Design resilient and cost-efficient data pipelines adaptable to varying data volume and formats.

Real-Time & Batch Data Integration

  • Enable seamless ingestion and processing of real-time streaming and batch data sources (e.g., AWS MSK).
  • Ensure consistency, data quality, and a unified view across multiple data sources and formats.

Data Analysis & Insights

  • Partner with business teams and data scientists to understand data requirements.
  • Perform in-depth data analysis to identify trends, patterns, and anomalies.
  • Deliver high-quality datasets and present actionable insights to stakeholders.

CI/CD & Automation

  • Implement and maintain CI/CD pipelines using Jenkins or similar tools.
  • Automate testing, deployment, and monitoring to ensure smooth production releases.

Data Security & Compliance

  • Collaborate with security teams to ensure compliance with organizational and regulatory standards (e.g., GDPR, HIPAA).
  • Implement data governance practices ensuring data integrity, security, and traceability.

Troubleshooting & Performance Tuning

  • Identify and resolve performance bottlenecks in data pipelines.
  • Apply best practices for monitoring, tuning, and optimizing data ingestion and storage.

Collaboration & Cross-Functional Work

  • Work closely with engineers, data scientists, product managers, and business stakeholders.
  • Participate in agile ceremonies, sprint planning, and architectural discussions.


Skills & Qualifications

Mandatory (Must-Have) Skills

  1. AWS Expertise
     • Hands-on experience with AWS Big Data services such as EMR, Managed Apache Airflow, Glue, S3, DMS, MSK, and EC2.
     • Strong understanding of cloud-native data architectures.
  2. Big Data Technologies
     • Proficiency in PySpark or Scala Spark and SQL for large-scale data transformation and analysis.
     • Experience with Apache Spark and Apache Kafka in production environments.
  3. Data Frameworks
     • Strong knowledge of Spark DataFrames and Datasets.
  4. ETL Pipeline Development
     • Proven experience in building scalable and reliable ETL pipelines for both batch and real-time data processing.
  5. Database Modeling & Data Warehousing
     • Expertise in designing scalable data models for OLAP and OLTP systems.
  6. Data Analysis & Insights
     • Ability to perform complex data analysis and extract actionable business insights.
     • Strong analytical and problem-solving skills with a data-driven mindset.
  7. CI/CD & Automation
     • Basic to intermediate experience with CI/CD pipelines using Jenkins or similar tools.
     • Familiarity with automated testing and deployment workflows.

 

Good-to-Have (Preferred) Skills

  • Knowledge of Java for data processing applications.
  • Experience with NoSQL databases (e.g., DynamoDB, Cassandra, MongoDB).
  • Familiarity with data governance frameworks and compliance tooling.
  • Experience with monitoring and observability tools such as AWS CloudWatch, Splunk, or Dynatrace.
  • Exposure to cost optimization strategies for large-scale cloud data platforms.

 

Skills: big data, scala spark, apache spark, ETL pipeline development

 

******

Notice period - 0 to 15 days only

Job stability is mandatory

Location: Hyderabad

Note: If a candidate can join at short notice, is based in Hyderabad, and fits within the approved budget, we will proceed with an offer.

F2F Interview: 14th Feb 2026

3 days in office, Hybrid model.

 


Bengaluru (Bangalore), Chennai
5 - 15 yrs
Best in industry
Data Science
Machine Learning (ML)
Artificial Intelligence (AI)
Survival analysis

Hi,


Please find below the job description for Data Science with ML.

 


Type of hire: PWD and Non-PWD

Employment Type: Full Time

Notice Period: Immediate joiner

Work Days: Mon-Fri

 

 


About Ampera:

Ampera Technologies is a purpose-driven digital IT services company with a primary focus on supporting our clients with their Data, AI/ML, Accessibility, and other digital IT needs. We also ensure that equal opportunities are provided to talent with disabilities. Ampera Technologies has its global headquarters in Chicago, USA, and its Global Delivery Center in Chennai, India. We are actively expanding our tech delivery team in Chennai and across India. We offer exciting benefits for our teams, such as: 1) hybrid and remote work options, 2) the opportunity to work directly with our global enterprise clients, 3) the opportunity to learn and implement evolving technologies, 4) comprehensive healthcare, and 5) an environment for talent with disabilities that meets physical and digital accessibility standards.


About the Role

 

We are looking for a skilled Data Scientist with strong Machine Learning experience to design, develop, and deploy data-driven solutions. The role involves working with large datasets, building predictive and ML models, and collaborating with cross-functional teams to translate business problems into analytical solutions.

 

Key Responsibilities

 

  • Analyze large, structured and unstructured datasets to derive actionable insights.
  • Design, build, validate, and deploy Machine Learning models for prediction, classification, recommendation, and optimization.
  • Apply statistical analysis, feature engineering, and model evaluation techniques.
  • Work closely with business stakeholders to understand requirements and convert them into data science solutions.
  • Develop end-to-end ML pipelines including data preprocessing, model training, testing, and deployment.
  • Monitor model performance and retrain models as required.
  • Document assumptions, methodologies, and results clearly.
  • Collaborate with data engineers and software teams to integrate models into production systems.
  • Stay updated with the latest advancements in data science and machine learning.

 

Required Skills & Qualifications

 

  • Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, Mathematics, or related fields.
  • 5+ years of hands-on experience in Data Science and Machine Learning.
  • Strong proficiency in Python (NumPy, Pandas, Scikit-learn).
  • Experience with ML algorithms:
  • Regression, Classification, Clustering
  • Decision Trees, Random Forest, Gradient Boosting
  • SVM, KNN, Naïve Bayes
  • Solid understanding of statistics, probability, and linear algebra.
  • Experience with data visualization tools (Matplotlib, Seaborn, Power BI, Tableau – preferred).
  • Experience working with SQL and relational databases.
  • Knowledge of model evaluation metrics and optimization techniques.


Preferred / Good to Have

 

  • Experience with Deep Learning frameworks (TensorFlow, PyTorch, Keras).
  • Exposure to NLP, Computer Vision, or Time Series forecasting.
  • Experience with big data technologies (Spark, Hadoop).
  • Familiarity with cloud platforms (AWS, Azure, GCP).
  • Experience with MLOps, CI/CD pipelines, and model deployment.

 

Soft Skills

 

  • Strong analytical and problem-solving abilities.
  • Excellent communication and stakeholder interaction skills.
  • Ability to work independently and in cross-functional teams.
  • Curiosity and willingness to learn new tools and techniques.


 

Accessibility & Inclusion Statement

We are committed to creating an inclusive environment for all employees, including persons with disabilities. Reasonable accommodations will be provided upon request.

Equal Opportunity Employer (EOE) Statement

Ampera Technologies is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.



Read more
Vadodara, Baroda
3 - 10 yrs
₹6L - ₹8L / yr
skill iconData Analytics
data analyst
data operations
skill iconData Science
MS-Excel
+5 more

Job Overview


As a Profile Data Setup Analyst, you will play a key role in configuring, analysing, and managing product data for our customers. You will work closely with internal teams and clients to ensure accurate, optimized, and timely data setup in Windowmaker software. This role is perfect for someone who enjoys problem-solving, working with data, and continuously learning.


Key Responsibilities

• Understand customer product configurations and translate them into structured data using Windowmaker Software.

• Set up and modify profile data including reinforcements, glazing, and accessories, aligned with customer-specific rules and industry practices.

• Analyse data, identify inconsistencies, and ensure high-quality output that supports accurate quoting and manufacturing.

• Collaborate with cross-functional teams (Sales, Software Development, Support) to deliver complete and tested data setups on time.

• Provide training, guidance, and documentation to internal teams and customers as needed.

• Continuously look for process improvements and contribute to knowledge-sharing across the team.

• Support escalated customer cases related to data accuracy or configuration issues.

• Ensure timely delivery of all assigned tasks while maintaining high standards of quality and attention to detail.


Required Qualifications

• 3–5 years of experience in a data-centric role.

• Bachelor’s degree in Engineering, e.g., Computer Science, or a related technical field.

• Experience with product data structures and product lifecycle.

• Strong analytical skills with a keen eye for data accuracy and patterns.

• Ability to break down complex product information into structured data elements.

• Eagerness to learn industry domain knowledge and software capabilities.

• Hands-on experience with Excel, SQL, or other data tools.

• Ability to manage priorities and meet deadlines in a fast-paced environment.

• Excellent written and verbal communication skills.

• A collaborative, growth-oriented mindset.


Nice to Have


• Prior exposure to ERP/CPQ/Manufacturing systems is a plus.

• Knowledge of the window and door (fenestration) industry is an added advantage.


Why Join Us

• Be part of a global product company with a solid industry reputation.

• Work on impactful projects that directly influence customer success.

• Collaborate with a talented, friendly, and supportive team.

• Learn, grow, and make a difference in the digital transformation of the fenestration industry.

Read more
Matchmaking platform


Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai
2 - 5 yrs
₹21L - ₹28L / yr
skill iconData Science
skill iconPython
Natural Language Processing (NLP)
MySQL
skill iconMachine Learning (ML)
+15 more

Review Criteria

  • Strong Data Scientist / Machine Learning / AI Engineer profile
  • 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
  • Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
  • Hands-on experience in at least 2 of the following use cases: recommendation systems, image data, fraud/risk detection, price modelling, propensity models
  • Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
  • Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
  • Preferred (Company) – Must be from product companies

 

Job Specific Criteria

  • CV Attachment is mandatory
  • What's your current company?
  • Which use cases you have hands on experience?
  • Are you ok for Mumbai location (if candidate is from outside Mumbai)?
  • Reason for change (if candidate has been in current company for less than 1 year)?
  • Reason for hike (if greater than 25%)?

 

Role & Responsibilities

  • Partner with Product to spot high-leverage ML opportunities tied to business metrics.
  • Wrangle large structured and unstructured datasets; build reliable features and data contracts.
  • Build and ship models to:
  • Enhance customer experiences and personalization
  • Boost revenue via pricing/discount optimization
  • Power user-to-user discovery and ranking (matchmaking at scale)
  • Detect and block fraud/risk in real time
  • Score conversion/churn/acceptance propensity for targeted actions
  • Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
  • Design and run A/B tests with guardrails.
  • Build monitoring for model/data drift and business KPIs
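The drift-monitoring bullet above is often bootstrapped with the population stability index (PSI); a minimal NumPy sketch, where the distributions and the conventional 0.1/0.25 thresholds are illustrative assumptions rather than universal rules:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference sample (e.g. training-time scores) and a
    live sample. Rule of thumb: < 0.1 stable, 0.1–0.25 watch, > 0.25 drift."""
    # Bin edges come from the reference distribution's quantiles
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf   # catch out-of-range live values
    e = np.histogram(expected, bins=edges)[0] / len(expected)
    a = np.histogram(actual, bins=edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(42)
reference = rng.normal(0.0, 1.0, 10_000)  # training-time score distribution
stable = rng.normal(0.0, 1.0, 10_000)     # live traffic, no drift
drifted = rng.normal(0.8, 1.0, 10_000)    # live traffic after a mean shift
```

In production the `reference` sample would be the training-time distribution of a feature or model score, recomputed against each window of live traffic and alerted on alongside the business KPIs.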


Ideal Candidate

  • 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
  • Proven, hands-on success in at least two (preferably 3–4) of the following:
  • Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
  • Fraud/risk detection (severe class imbalance, PR-AUC)
  • Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
  • Propensity models (payment/churn)
  • Programming: strong Python and SQL; solid git, Docker, CI/CD.
  • Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
  • ML breadth: recommender systems, NLP or user profiling, anomaly detection.
  • Communication: clear storytelling with data; can align stakeholders and drive decisions.
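On the PR-AUC point above: under severe class imbalance, a random scorer's average precision collapses to the positive base rate, which is why PR-AUC is preferred over ROC-AUC for fraud work. A pure-NumPy sketch on synthetic data (the 1% positive rate and the scorers are assumptions for illustration):

```python
import numpy as np

def average_precision(y_true, y_score):
    """Area under the precision–recall curve for binary labels
    (matches scikit-learn's average_precision_score when scores have no ties)."""
    order = np.argsort(-np.asarray(y_score))        # rank by score, descending
    y = np.asarray(y_true)[order]
    hits = np.cumsum(y)                             # true positives at each depth
    precision_at_k = hits / np.arange(1, len(y) + 1)
    # Average the precision at the rank of each positive example
    return float(np.sum(precision_at_k * y) / y.sum())

rng = np.random.default_rng(0)
y = (rng.random(20_000) < 0.01).astype(int)  # ~1% positives, fraud-like
random_scores = rng.random(20_000)           # uninformative scorer
good_scores = random_scores + 0.5 * y        # mildly informative scorer

ap_random = average_precision(y, random_scores)
ap_good = average_precision(y, good_scores)
print(f"random scorer AP ≈ {ap_random:.3f} (the base rate); informative AP ≈ {ap_good:.3f}")
```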


Read more
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Mumbai
2 - 5 yrs
₹21L - ₹30L / yr
skill iconData Science

Strong Data Scientist / Machine Learning / AI Engineer Profile

Mandatory (Experience 1) – Must have 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models

Mandatory (Experience 2) – Must have strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.

Mandatory (Experience 3) – Must have hands-on experience in at least 2 of the following use cases: recommendation systems, image data, fraud/risk detection, price modelling, propensity models

Mandatory (Experience 4) – Must have strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text

Mandatory (Experience 5) – Must have experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments

Mandatory (Company) – Must be from a product company; avoid candidates from financial domains (e.g., JPMorgan, banks, fintech)

Read more
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Mumbai
2 - 5 yrs
₹25L - ₹31L / yr
skill iconData Science
skill iconMachine Learning (ML)

Strong Data Scientist / Machine Learning / AI Engineer Profile

Mandatory (Experience 1) – Must have 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models

Mandatory (Experience 2) – Must have strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.

Mandatory (Experience 3) – Must have hands-on experience in at least 2 of the following use cases: recommendation systems, image data, fraud/risk detection, price modelling, propensity models

Mandatory (Experience 4) – Must have strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text

Mandatory (Experience 5) – Must have experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments

Mandatory (Company) – Must be from a product company; avoid candidates from financial domains (e.g., JPMorgan, banks, fintech)

Read more
Leading digital testing boutique firm


Agency job
via Peak Hire Solutions by Dhara Thakkar
Delhi
5 - 8 yrs
₹11L - ₹15L / yr
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
Software Testing (QA)
Natural Language Processing (NLP)
Analytics
+11 more

Review Criteria

  • Strong AI/ML Test Engineer
  • 5+ years of overall experience in Testing/QA
  • 2+ years of experience in testing AI/ML models and data-driven applications, across NLP, recommendation engines, fraud detection, and advanced analytics models
  • Must have expertise in validating AI/ML models for accuracy, bias, explainability, and performance, ensuring decisions are fair, reliable, and transparent
  • Must have strong experience to design AI/ML test strategies, including boundary testing, adversarial input simulation, and anomaly monitoring to detect manipulation attempts by marketplace users (buyers/sellers)
  • Proficiency in AI/ML testing frameworks and tools (like PyTest, TensorFlow Model Analysis, MLflow, Python-based data validation libraries, Jupyter) with the ability to integrate into CI/CD pipelines
  • Must understand marketplace misuse scenarios, such as manipulating recommendation algorithms, biasing fraud detection systems, or exploiting gaps in automated scoring
  • Must have strong verbal and written communication skills, able to collaborate with data scientists, engineers, and business stakeholders to articulate testing outcomes and issues.
  • Degree in Engineering, Computer Science, IT, Data Science, or a related discipline (B.E./B.Tech/M.Tech/MCA/MS or equivalent)
  • Candidate must be based within Delhi NCR (100 km radius)


Preferred

  • Certifications such as ISTQB AI Testing, TensorFlow, Cloud AI, or equivalent applied AI credentials are an added advantage.


Job Specific Criteria

  • CV Attachment is mandatory
  • Have you worked with large datasets for AI/ML testing?
  • Have you automated AI/ML testing using PyTest, Jupyter notebooks, or CI/CD pipelines?
  • Please provide details of 2 key AI/ML testing projects you have worked on, including your role, responsibilities, and tools/frameworks used.
  • Are you willing to relocate to Delhi and why (if not from Delhi)?
  • Are you available for a face-to-face round?


Role & Responsibilities

  • 5 years’ experience in testing AI/ML models and data-driven applications, including natural language processing (NLP), recommendation engines, fraud detection, and advanced analytics models
  • Proven expertise in validating AI models for accuracy, bias, explainability, and performance, to ensure decisions (e.g., bid scoring, supplier ranking, fraud detection) are fair, reliable, and transparent
  • Hands-on experience in data validation and model testing, ensuring training and inference pipelines align with business requirements and procurement rules
  • Strong data science skills, equipped to design test strategies for AI systems, including boundary testing, adversarial input simulation, and drift monitoring to detect manipulation attempts by marketplace users (buyers/sellers)
  • Proficient in defining AI/ML testing frameworks and tools (TensorFlow Model Analysis, MLflow, PyTest, Python-based data validation libraries, Jupyter), with the ability to integrate them into CI/CD pipelines
  • Business awareness of marketplace misuse scenarios, such as manipulating recommendation algorithms, biasing fraud detection systems, or exploiting gaps in automated scoring
  • Education & Certifications: Bachelor’s/Master’s in Engineering, CS/IT, Data Science, or equivalent
  • Preferred Certifications: ISTQB AI Testing, TensorFlow/Cloud AI certifications, or equivalent applied AI credentials
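Boundary testing and adversarial/metamorphic checks of the kind listed above can be written as ordinary test functions; `risk_score` here is a hypothetical stand-in for a deployed model's scoring endpoint, not a real scorer:

```python
def risk_score(amount, account_age_days):
    """Hypothetical fraud-risk scorer standing in for a real model API."""
    base = min(amount / 10_000, 1.0)           # larger amounts look riskier
    trust = min(account_age_days / 365, 1.0)   # older accounts look safer
    return max(0.0, min(1.0, base * (1.2 - trust)))

def test_scores_bounded():
    # Boundary testing: extreme inputs must still yield a valid score in [0, 1]
    for amount, age in [(0, 0), (10**9, 0), (0, 10**6), (10**9, 10**6)]:
        assert 0.0 <= risk_score(amount, age) <= 1.0

def test_monotonic_in_amount():
    # Metamorphic testing: raising the amount must never lower the risk score,
    # otherwise a marketplace user could game the model by inflating a field
    for age in (0, 30, 365, 3650):
        scores = [risk_score(a, age) for a in (100, 1_000, 10_000, 100_000)]
        assert scores == sorted(scores)

test_scores_bounded()
test_monotonic_in_amount()
```

Under pytest the two `test_*` functions would be collected automatically and wired into a CI/CD pipeline; the explicit calls at the bottom just make the sketch self-checking.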




Read more
A global digital solutions partner trusted by leading Fortune 500 companies in industries such as pharma & healthcare, retail, and BFSI. Its expertise in data and analytics, data engineering, machine learning, AI, and automation helps companies streamline operations and unlock business value.


Agency job
via HyrHub by Shwetha Naik
Bengaluru (Bangalore), Mangalore
12 - 14 yrs
₹28L - ₹32L / yr
Data engineering
skill iconMachine Learning (ML)
Generative AI
Architecture
skill iconPython
+1 more

Skills: Gen AI, machine learning models, AWS/Azure, Redshift, Python, Apache Airflow, DevOps. Minimum 4–5 years of experience as an Architect; must be from a data engineering background.

• 8+ years of experience in data engineering, data science, or architecture roles.

• Experience designing enterprise-grade AI platforms.

• Certification in major cloud platforms (AWS/Azure/GCP).

• Experience with governance tooling (Collibra, Alation) and lineage systems

• Strong hands-on background in data engineering, analytics, or data science.

• Expertise in building data platforms using:

o Cloud: AWS (Glue, S3, Redshift), Azure (Data Factory, Synapse), GCP (BigQuery, Dataflow).

o Compute: Spark, Databricks, Flink.

o Data modelling: dimensional, relational, NoSQL, graph.

• Proficiency with Python, SQL, and data pipeline orchestration tools.

• Understanding of ML frameworks and tools: TensorFlow, PyTorch, Scikit-learn, MLflow, etc.

• Experience implementing MLOps, model deployment, monitoring, logging, and versioning.


Read more
TrumetricAI
Yashika Tiwari
Posted by Yashika Tiwari
Bengaluru (Bangalore)
6 - 10 yrs
₹15L - ₹30L / yr
Natural Language Processing (NLP)
skill iconDeep Learning
Artificial Intelligence (AI)
Generative AI
skill iconMachine Learning (ML)
+1 more

Senior Machine Learning Engineer

About the Role

We are looking for a Senior Machine Learning Engineer who can take business problems, design appropriate machine learning solutions, and make them work reliably in production environments.

This role is ideal for someone who not only understands machine learning models, but also knows when and how ML should be applied, what trade-offs to make, and how to take ownership from problem understanding to production deployment.

Beyond technical skills, we need someone who can lead a team of ML Engineers, design end-to-end ML solutions, and clearly communicate decisions and outcomes to both engineering teams and business stakeholders. If you enjoy solving real problems, making pragmatic decisions, and owning outcomes from idea to deployment, this role is for you.

What You’ll Be Doing

Building and Deploying ML Models

  • Design, build, evaluate, deploy, and monitor machine learning models for real production use cases.
  • Take ownership of how a problem is approached, including deciding whether ML is the right solution and what type of ML approach fits the problem.
  • Ensure scalability, reliability, and efficiency of ML pipelines across cloud and on-prem environments.
  • Work with data engineers to design and validate data pipelines that feed ML systems.
  • Optimize solutions for accuracy, performance, cost, and maintainability, not just model metrics.

Leading and Architecting ML Solutions

  • Lead a team of ML Engineers, providing technical direction, mentorship, and review of ML approaches.
  • Architect ML solutions that integrate seamlessly with business applications and existing systems.
  • Ensure models and solutions are explainable, auditable, and aligned with business goals.
  • Drive best practices in MLOps, including CI/CD, model monitoring, retraining strategies, and operational readiness.
  • Set clear standards for how ML problems are framed, solved, and delivered within the team.

Collaborating and Communicating

  • Work closely with business stakeholders to understand problem statements, constraints, and success criteria.
  • Translate business problems into clear ML objectives, inputs, and expected outputs.
  • Collaborate with software engineers, data engineers, platform engineers, and product managers to integrate ML solutions into production systems.
  • Present ML decisions, trade-offs, and outcomes to non-technical stakeholders in a simple and understandable way.

What We’re Looking For

Machine Learning Expertise

  • Strong understanding of supervised and unsupervised learning, deep learning, NLP techniques, and large language models (LLMs).
  • Experience choosing appropriate modeling approaches based on the problem, available data, and business constraints.
  • Experience training, fine-tuning, and deploying ML and LLM models for real-world use cases.
  • Proficiency in common ML frameworks such as TensorFlow, PyTorch, Scikit-learn, etc.

Production and Cloud Deployment

  • Hands-on experience deploying and running ML systems in production environments on AWS, GCP, or Azure.
  • Good understanding of MLOps practices, including CI/CD for ML models, monitoring, and retraining workflows.
  • Experience with Docker, Kubernetes, or serverless architectures is a plus.
  • Ability to think beyond deployment and consider operational reliability and long-term maintenance.

Data Handling

  • Strong programming skills in Python.
  • Proficiency in SQL and working with large-scale datasets.
  • Ability to reason about data quality, data limitations, and how they impact ML outcomes.
  • Familiarity with distributed computing frameworks like Spark or Dask is a plus.

Leadership and Communication

  • Ability to lead and mentor ML Engineers and work effectively across teams.
  • Strong communication skills to explain ML concepts, decisions, and limitations to business teams.
  • Comfortable taking ownership and making decisions in ambiguous problem spaces.
  • Passion for staying updated with advancements in ML and AI, with a practical mindset toward adoption.

Experience Needed

  • 6+ years of experience in machine learning engineering or related roles.
  • Proven experience designing, selecting, and deploying ML solutions used in production.
  • Experience managing ML systems after deployment, including monitoring and iteration.
  • Proven track record of working in cross-functional teams and leading ML initiatives.


Read more
AI-First Company


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 17 yrs
₹30L - ₹45L / yr
Data engineering
Data architecture
SQL
Data modeling
GCS
+47 more

ROLES AND RESPONSIBILITIES:

You will be responsible for architecting, implementing, and optimizing Dremio-based data Lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.


  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.


IDEAL CANDIDATE:

  • Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.


PREFERRED:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
  • Exposure to Snowflake, Databricks, or BigQuery environments.
  • Experience in high-tech, manufacturing, or enterprise data modernization programs.
Read more
Matchmaking platform


Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai
2 - 5 yrs
₹15L - ₹28L / yr
skill iconData Science
skill iconPython
Natural Language Processing (NLP)
MySQL
skill iconMachine Learning (ML)
+15 more

Review Criteria

  • Strong Data Scientist / Machine Learning / AI Engineer profile
  • 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
  • Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
  • Hands-on experience in at least 2 of the following use cases: recommendation systems, image data, fraud/risk detection, price modelling, propensity models
  • Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
  • Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
  • Preferred (Company) – Must be from product companies

 

Job Specific Criteria

  • CV Attachment is mandatory
  • What's your current company?
  • Which use cases you have hands on experience?
  • Are you ok for Mumbai location (if candidate is from outside Mumbai)?
  • Reason for change (if candidate has been in current company for less than 1 year)?
  • Reason for hike (if greater than 25%)?

 

Role & Responsibilities

  • Partner with Product to spot high-leverage ML opportunities tied to business metrics.
  • Wrangle large structured and unstructured datasets; build reliable features and data contracts.
  • Build and ship models to:
  • Enhance customer experiences and personalization
  • Boost revenue via pricing/discount optimization
  • Power user-to-user discovery and ranking (matchmaking at scale)
  • Detect and block fraud/risk in real time
  • Score conversion/churn/acceptance propensity for targeted actions
  • Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
  • Design and run A/B tests with guardrails.
  • Build monitoring for model/data drift and business KPIs


Ideal Candidate

  • 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
  • Proven, hands-on success in at least two (preferably 3–4) of the following:
  • Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
  • Fraud/risk detection (severe class imbalance, PR-AUC)
  • Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
  • Propensity models (payment/churn)
  • Programming: strong Python and SQL; solid git, Docker, CI/CD.
  • Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
  • ML breadth: recommender systems, NLP or user profiling, anomaly detection.
  • Communication: clear storytelling with data; can align stakeholders and drive decisions.




Read more
Global digital transformation solutions provider.


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Kochi (Cochin), Trivandrum, Hyderabad, Thiruvananthapuram
8 - 10 yrs
₹10L - ₹25L / yr
Business Analysis
Data Visualization
PowerBI
SQL
Tableau
+18 more

Job Description – Senior Technical Business Analyst

Location: Trivandrum (Preferred) | Open to any location in India

Shift Timings - an 8-hour window between 7:30 PM IST and 4:30 AM IST

 

About the Role

We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for candidates who have a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.

As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.

 

Key Responsibilities

Business & Analytical Responsibilities

  • Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
  • Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights.
  • Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
  • Break down business needs into concise, actionable, and development-ready user stories in Jira.

Data & Technical Responsibilities

  • Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
  • Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
  • Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
  • Validate and ensure data quality, consistency, and accuracy across datasets and systems.
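The data-quality bullet above usually starts life as a handful of scripted checks; a pandas sketch in which the toy table, column names, and thresholds are illustrative assumptions rather than a fixed schema:

```python
import pandas as pd

# Toy orders table standing in for a real source-system extract
orders = pd.DataFrame({
    "order_id": [1, 2, 2, 3, 4],
    "amount": [120.0, 80.5, 80.5, None, -5.0],
    "country": ["IN", "IN", "IN", "US", "IN"],
})

def data_quality_report(df):
    """Return simple consistency checks for analyst review."""
    return {
        "duplicate_keys": int(df["order_id"].duplicated().sum()),
        "missing_amounts": int(df["amount"].isna().sum()),
        "negative_amounts": int((df["amount"] < 0).sum()),
    }

report = data_quality_report(orders)
print(report)  # each non-zero count is a finding to raise with the data owner
```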

Collaboration & Execution

  • Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
  • Assist in development, testing, and rollout of data-driven solutions.
  • Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.

 

Required Skillsets

Core Technical Skills

  • 6+ years of Technical Business Analyst experience within an overall professional experience of 8+ years
  • Data Analytics: SQL, descriptive analytics, business problem framing.
  • Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
  • Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
  • Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.

 

Soft Skills

  • Strong analytical thinking and structured problem-solving capability.
  • Ability to convert business problems into clear technical requirements.
  • Excellent communication, documentation, and presentation skills.
  • High curiosity, adaptability, and eagerness to learn new tools and techniques.

 

Educational Qualifications

  • BE/B.Tech or equivalent in:
  • Computer Science / IT
  • Data Science

 

What We Look For

  • Demonstrated passion for data and analytics through projects and certifications.
  • Strong commitment to continuous learning and innovation.
  • Ability to work both independently and in collaborative team environments.
  • Passion for solving business problems using data-driven approaches.
  • Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.

 

Why Join Us?

  • Exposure to modern data platforms, analytics tools, and AI technologies.
  • A culture that promotes innovation, ownership, and continuous learning.
  • Supportive environment to build a strong career in data and analytics.

 

Skills: Data Analytics, Business Analysis, SQL


Must-Haves

Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R

Read more
Gurugram, Bengaluru (Bangalore), Hyderabad, Mumbai
5 - 12 yrs
₹35L - ₹52L / yr
skill iconData Science
skill iconMachine Learning (ML)
Artificial Intelligence (AI)

Strong Senior Data Scientist (AI/ML/GenAI) Profile

Mandatory (Experience 1) – Must have a minimum of 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production

Mandatory (Experience 2) – Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.

Mandatory (Experience 3) – Must have 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.

Mandatory (Experience 4) – Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows
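As a rough intuition for the LoRA requirement above: instead of updating the full pretrained weight matrix, LoRA freezes it and learns a low-rank additive update. The following is a toy numpy illustration of that idea only, not a real fine-tune; all shapes and values are invented.

```python
# Toy illustration of the core LoRA idea (not a real fine-tune): the
# frozen weight W is augmented by a trainable low-rank update
# (alpha / r) * B @ A, so only r * (d_in + d_out) parameters are trained
# instead of d_out * d_in.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4

W = rng.normal(size=(d_out, d_in))   # frozen pretrained weight
A = np.full((r, d_in), 0.01)         # trainable down-projection
B = np.zeros((d_out, r))             # trainable up-projection, zero-init

def lora_forward(x):
    return W @ x + (alpha / r) * (B @ (A @ x))

x = np.ones(d_in)
# Because B is zero-initialized, the adapted model starts out identical
# to the base model; fine-tuning then only moves B and A.
assert np.allclose(lora_forward(x), W @ x)

B += 0.1  # stand-in for an optimizer step on the adapter weights
print(np.linalg.norm(lora_forward(x) - W @ x))
```

Libraries such as PEFT implement exactly this decomposition inside transformer layers; QLoRA adds quantization of the frozen base weights on top.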

Mandatory (Note): Budget for up to 5 years of experience is ₹25 lakhs, up to 7 years is ₹35 lakhs, and up to 12 years is ₹45 lakhs. The client can also pay a maximum of 30–40% more, based on candidature.

Read more
AI company

AI company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 17 yrs
₹30L - ₹45L / yr
Data architecture
Data engineering
SQL
Data modeling
GCS
+21 more

Review Criteria

  • Strong Dremio / Lakehouse Data Architect profile
  • 5+ years of experience in Data Architecture / Data Engineering, with minimum 3+ years hands-on in Dremio
  • Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
  • Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts
  • Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
  • Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
  • Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
  • Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
  • Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies


Preferred

  • Preferred (Nice-to-have) – Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments


Job Specific Criteria

  • CV Attachment is mandatory
  • How many years of experience do you have with Dremio?
  • Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
  • Are you okay with 3 Days WFO?
  • Virtual Interview requires video to be on, are you okay with it?


Role & Responsibilities

You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.

  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
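The raw → curated → semantic layering described above can be sketched in plain SQL. Below is an illustrative analogue using Python's stdlib `sqlite3`; in Dremio these layers would typically be virtual datasets over Parquet files on object storage, and all table and column names here are made up.

```python
# Illustrative analogue of lakehouse layering: a raw zone as ingested,
# a curated view that cleans and types it, and a semantic-layer view
# that BI consumers query directly.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Raw zone: data as landed, possibly messy
    CREATE TABLE raw_orders (id INTEGER, region TEXT, amount TEXT);
    INSERT INTO raw_orders VALUES
        (1, 'north', '100.5'), (2, 'NORTH', '50'), (3, 'south', NULL);

    -- Curated zone: typed, cleaned view over raw
    CREATE VIEW curated_orders AS
    SELECT id, LOWER(region) AS region, CAST(amount AS REAL) AS amount
    FROM raw_orders WHERE amount IS NOT NULL;

    -- Semantic layer: business-facing aggregate for governed analytics
    CREATE VIEW revenue_by_region AS
    SELECT region, SUM(amount) AS revenue, COUNT(*) AS orders
    FROM curated_orders GROUP BY region;
""")
print(con.execute("SELECT * FROM revenue_by_region").fetchall())
```

Dremio's reflections then materialize and cache views like `revenue_by_region` so repeated BI queries do not rescan the raw files.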


Ideal Candidate

  • Bachelor’s or master’s in computer science, Information Systems, or related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.
Read more
AI Industry

AI Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
5 - 12 yrs
₹20L - ₹46L / yr
skill iconData Science
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
Generative AI
skill iconDeep Learning
+14 more

Review Criteria

  • Strong Senior Data Scientist (AI/ML/GenAI) Profile
  • 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
  • Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
  • 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
  • Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows

 

Preferred

  • Prior experience in open-source GenAI contributions, applied LLM/GenAI research, or large-scale production AI systems
  • Preferred (Education) – B.S./M.S./Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.

 

Job Specific Criteria

  • CV Attachment is mandatory
  • Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
  • Are you okay with 3 Days WFO?
  • Virtual Interview requires video to be on, are you okay with it?


Role & Responsibilities

Company is hiring a Senior Data Scientist with strong expertise in AI, machine learning engineering (MLE), and generative AI. You will play a leading role in designing, deploying, and scaling production-grade ML systems — including large language model (LLM)-based pipelines, AI copilots, and agentic workflows. This role is ideal for someone who thrives on balancing cutting-edge research with production rigor and loves mentoring while building impact-first AI applications.

 

Responsibilities:

  • Own the full ML lifecycle: model design, training, evaluation, deployment
  • Design production-ready ML pipelines with CI/CD, testing, monitoring, and drift detection
  • Fine-tune LLMs and implement retrieval-augmented generation (RAG) pipelines
  • Build agentic workflows for reasoning, planning, and decision-making
  • Develop both real-time and batch inference systems using Docker, Kubernetes, and Spark
  • Leverage state-of-the-art architectures: transformers, diffusion models, RLHF, and multimodal pipelines
  • Collaborate with product and engineering teams to integrate AI models into business applications
  • Mentor junior team members and promote MLOps, scalable architecture, and responsible AI best practices
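The RAG responsibility above boils down to retrieve-then-augment. Here is a deliberately minimal, dependency-free sketch: a bag-of-words cosine similarity stands in for the embedding model and vector store (e.g., Weaviate or PGVector) a production pipeline would use, and the documents are invented.

```python
# Toy sketch of the RAG pattern: retrieve the most relevant context,
# then augment the prompt before it is sent to an LLM.
import math
import re
from collections import Counter

docs = [
    "The refund policy allows returns within 30 days of purchase.",
    "Shipping takes 3 to 5 business days for domestic orders.",
    "Gift cards cannot be exchanged for cash.",
]

def embed(text):
    # Stand-in "embedding": token counts (real systems use dense vectors)
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

query = "How many days does the refund policy give me to return a purchase?"
context = retrieve(query)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(context)
```

Swapping `embed` for a learned embedding model and `retrieve` for a vector-store query gives the production shape without changing the control flow.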


Ideal Candidate

  • 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
  • Proficient in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
  • Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
  • Familiarity with agent-based development (e.g., ReAct agents, function-calling, orchestration)
  • Solid understanding of MLOps: Docker, Kubernetes, Spark, model registries, and deployment workflows
  • Strong software engineering background with experience in testing, version control, and APIs
  • Proven ability to balance innovation with scalable deployment
  • B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field
  • Bonus: Open-source contributions, GenAI research, or applied systems at scale


Read more
KGiSL MICROCOLLEGE
Hiring Recruitment
Posted by Hiring Recruitment
kerala
1 - 6 yrs
₹1L - ₹6L / yr
skill iconPython
skill iconData Science
skill iconDeep Learning
skill iconMachine Learning (ML)

Job description


Job Title: Python Trainer (Workshop Model Freelance / Part-time)


Location: Thrissur & Ernakulam


Program Duration: 30 or 60 Hours (Workshop Model)


Job Type: Freelance / Contract


About the Role:


We are seeking an experienced and passionate Python Trainer to deliver interactive, hands-on training sessions for students under a workshop model in Thrissur and Ernakulam locations. The trainer will be responsible for engaging learners with practical examples and real-time coding exercises.


Key Responsibilities:


Conduct offline workshop-style Python training sessions (30 or 60 hours total).


Deliver interactive lectures and coding exercises focused on Python programming fundamentals and applications.


Customize the curriculum based on learners' skill levels and project needs.


Guide students through mini-projects, assignments, and coding challenges.


Ensure effective knowledge transfer through practical, real-world examples.


Requirements:


Experience: 1–5 years of training or industry experience in Python programming.


Technical Skills: Strong knowledge of Python, including OOPs concepts, file handling, libraries (NumPy, Pandas, etc.), and basic data visualization.


Prior experience in academic or corporate training preferred.


Excellent communication and presentation skills.


Mode: Offline Workshop (Thrissur / Ernakulam)


Duration: Flexible – 30 Hours or 60 Hours Total


Organization: KGiSL Microcollege


Role: Other


Industry Type: Education / Training


Department: Other


Employment Type: Full Time, Permanent


Role Category: Other


Education


UG: Any Graduate


Key Skills


Data Science, Artificial Intelligence




Read more
KGiSL MICROCOLLEGE
Hiring Recruitment
Posted by Hiring Recruitment
Thrissur
1 - 6 yrs
₹1L - ₹6L / yr
skill iconData Science
skill iconPython
Prompt engineering
skill iconMachine Learning (ML)

Job description


Job Title: Python Trainer (Workshop Model Freelance / Part-time)


Location: Thrissur & Ernakulam


Program Duration: 30 or 60 Hours (Workshop Model)


Job Type: Freelance / Contract


About the Role:


We are seeking an experienced and passionate Python Trainer to deliver interactive, hands-on training sessions for students under a workshop model in Thrissur and Ernakulam locations. The trainer will be responsible for engaging learners with practical examples and real-time coding exercises.


Key Responsibilities:


Conduct offline workshop-style Python training sessions (30 or 60 hours total).


Deliver interactive lectures and coding exercises focused on Python programming fundamentals and applications.


Customize the curriculum based on learners' skill levels and project needs.


Guide students through mini-projects, assignments, and coding challenges.


Ensure effective knowledge transfer through practical, real-world examples.


Requirements:


Experience: 1–5 years of training or industry experience in Python programming.


Technical Skills: Strong knowledge of Python, including OOPs concepts, file handling, libraries (NumPy, Pandas, etc.), and basic data visualization.


Prior experience in academic or corporate training preferred.


Excellent communication and presentation skills.


Mode: Offline Workshop (Thrissur / Ernakulam)


Duration: Flexible – 30 Hours or 60 Hours Total


Organization: KGiSL Microcollege


Role: Other


Industry Type: Education / Training


Department: Other


Employment Type: Full Time, Permanent


Role Category: Other


Education


UG: Any Graduate


Key Skills


Data Science, Artificial Intelligence




Read more
Data Havn

Data Havn

Agency job
via Infinium Associate by Toshi Srivastava
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
4 - 6 yrs
₹40L - ₹45L / yr
skill iconR Programming
Google Cloud Platform (GCP)
skill iconData Science
skill iconPython
Data Visualization
+3 more

DataHavn IT Solutions is a company that specializes in big data and cloud computing, artificial intelligence and machine learning, application development, and consulting services. We aim to be the frontrunner in anything to do with data, and we have the expertise to transform customer businesses by making the right use of data.

 

About the Role:

As a Data Scientist specializing in Google Cloud, you will play a pivotal role in driving data-driven decision-making and innovation within our organization. You will leverage the power of Google Cloud's robust data analytics and machine learning tools to extract valuable insights from large datasets, develop predictive models, and optimize business processes.

Key Responsibilities:

  • Data Ingestion and Preparation:
  • Design and implement efficient data pipelines for ingesting, cleaning, and transforming data from various sources (e.g., databases, APIs, cloud storage) into Google Cloud Platform (GCP) data warehouses (BigQuery) or data lakes, using tools such as Dataflow.
  • Perform data quality assessments, handle missing values, and address inconsistencies to ensure data integrity.
  • Exploratory Data Analysis (EDA):
  • Conduct in-depth EDA to uncover patterns, trends, and anomalies within the data.
  • Utilize visualization techniques (e.g., Tableau, Looker) to communicate findings effectively.
  • Feature Engineering:
  • Create relevant features from raw data to enhance model performance and interpretability.
  • Explore techniques like feature selection, normalization, and dimensionality reduction.
  • Model Development and Training:
  • Develop and train predictive models using machine learning algorithms (e.g., linear regression, logistic regression, decision trees, random forests, neural networks) on GCP platforms like Vertex AI.
  • Evaluate model performance using appropriate metrics and iterate on the modeling process.
  • Model Deployment and Monitoring:
  • Deploy trained models into production environments using GCP's ML tools and infrastructure.
  • Monitor model performance over time, identify drift, and retrain models as needed.
  • Collaboration and Communication:
  • Work closely with data engineers, analysts, and business stakeholders to understand their requirements and translate them into data-driven solutions.
  • Communicate findings and insights in a clear and concise manner, using visualizations and storytelling techniques.
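The model development and evaluation steps above follow a standard train/evaluate loop. A minimal sketch with scikit-learn on synthetic data (on GCP this would typically run as a Vertex AI training job; nothing below is specific to any actual model in this role):

```python
# Minimal train/evaluate loop: fit a predictive model, then score it
# with metrics appropriate to the task (binary classification here).
# The dataset is synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Held-out evaluation; in production these metrics would also be
# tracked over time to detect drift and trigger retraining.
acc = accuracy_score(y_test, model.predict(X_test))
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"accuracy={acc:.2f} auc={auc:.2f}")
```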

Required Skills and Qualifications:

  • Strong proficiency in Python or R programming languages.
  • Experience with Google Cloud Platform (GCP) services such as BigQuery, Dataflow, Cloud Dataproc, and Vertex AI.
  • Familiarity with machine learning algorithms and techniques.
  • Knowledge of data visualization tools (e.g., Tableau, Looker).
  • Excellent problem-solving and analytical skills.
  • Ability to work independently and as part of a team.
  • Strong communication and interpersonal skills.

Preferred Qualifications:

  • Experience with cloud-native data technologies (e.g., Apache Spark, Kubernetes).
  • Knowledge of distributed systems and scalable data architectures.
  • Experience with natural language processing (NLP) or computer vision applications.
  • Certifications in Google Cloud Platform or relevant machine learning frameworks.


Read more
Data Havn

Data Havn

Agency job
via Infinium Associate by Toshi Srivastava
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2.5 - 4.5 yrs
₹10L - ₹20L / yr
skill iconPython
SQL
Google Cloud Platform (GCP)
SQL server
ETL
+9 more

About the Role:


We are seeking a talented Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.

Responsibilities:

  • Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes.
  • Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
  • Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
  • Team Management: Ability to lead and manage a team.
  • Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
  • Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
  • Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
  • Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.
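The ETL responsibility above can be illustrated end to end in a few lines. This is a stdlib-only sketch; a production pipeline would load into Snowflake, Redshift, or a similar warehouse rather than sqlite, and the schema here is invented.

```python
# Minimal ETL illustration: extract rows from CSV, transform (coerce
# types, drop unparseable rows), load into a warehouse-style table.
import csv
import io
import sqlite3

raw_csv = io.StringIO("user_id,amount\n1,10.50\n2,notanumber\n3,7.25\n")

# Extract
rows = list(csv.DictReader(raw_csv))

# Transform: coerce amount to float, discarding rows that fail validation
clean = []
for r in rows:
    try:
        clean.append((int(r["user_id"]), float(r["amount"])))
    except ValueError:
        continue  # in production, route to a dead-letter/quarantine table

# Load
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE payments (user_id INTEGER, amount REAL)")
con.executemany("INSERT INTO payments VALUES (?, ?)", clean)
total = con.execute("SELECT SUM(amount) FROM payments").fetchone()[0]
print(total)
```

The same extract/transform/load shape scales up with Spark or an orchestrator such as Airflow; only the connectors and the failure handling get heavier.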

 

 Skills:

  • Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
  • Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
  • Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
  • Understanding of data modeling and data architecture concepts.
  • Experience with ETL/ELT tools and frameworks.
  • Excellent problem-solving and analytical skills.
  • Ability to work independently and as part of a team.

Preferred Qualifications:

  • Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
  • Knowledge of machine learning and artificial intelligence concepts.
  • Experience with data visualization tools (e.g., Tableau, Power BI).
  • Certification in cloud platforms or data engineering.
Read more
 Global Digital Transformation Solutions Provider

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
7 - 9 yrs
₹10L - ₹28L / yr
Artificial Intelligence (AI)
Natural Language Processing (NLP)
skill iconPython
skill iconData Science
Generative AI
+10 more

Job Details

Job Title: Lead II - Software Engineering - AI, NLP, Python, Data Science

Industry: Technology

Domain - Information technology (IT)

Experience Required: 7-9 years

Employment Type: Full Time

Job Location: Bangalore

CTC Range: Best in Industry


Job Description:

Role Proficiency:

Act creatively to develop applications by selecting appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions. Account for others' development activities, assisting the Project Manager in day-to-day project execution.


Additional Comments:

Mandatory Skills: Data Science

Skills to Evaluate: AI, Gen AI, RAG, Data Science

Experience: 8 to 10 Years

Location: Bengaluru

Job Description

Job Title: AI Engineer

Mandatory Skills: Artificial Intelligence, Natural Language Processing, Python, Data Science

Position: AI Engineer – LLM & RAG Specialization

Company Name: Sony India Software Centre

About the role: We are seeking a highly skilled AI Engineer with 8-10 years of experience to join our innovation-driven team. This role focuses on the design, development, and deployment of advanced enterprise-scale Large Language Model (eLLM) and Retrieval-Augmented Generation (RAG) solutions. You will work on end-to-end AI pipelines, from data processing to cloud deployment, delivering impactful solutions that enhance Sony's products and services.

Key Responsibilities:

  • Design, implement, and optimize LLM-powered applications, ensuring high performance and scalability for enterprise use cases.
  • Develop and maintain RAG pipelines, including vector database integration (e.g., Pinecone, Weaviate, FAISS) and embedding model optimization.
  • Deploy, monitor, and maintain AI/ML models in production, ensuring reliability, security, and compliance.
  • Collaborate with product, research, and engineering teams to integrate AI solutions into existing applications and workflows.
  • Research and evaluate the latest LLM and AI advancements, recommending tools and architectures for continuous improvement.
  • Preprocess, clean, and engineer features from large datasets to improve model accuracy and efficiency.
  • Conduct code reviews and enforce AI/ML engineering best practices.
  • Document architecture, pipelines, and results; present findings to both technical and business stakeholders.

Requirements:

  • 8-10 years of professional experience in AI/ML engineering, with at least 4+ years in LLM development and deployment.
  • Proven expertise in RAG architectures, vector databases, and embedding models.
  • Strong proficiency in Python; familiarity with Java, R, or other relevant languages is a plus.
  • Experience with AI/ML frameworks (PyTorch, TensorFlow, etc.) and relevant deployment tools.
  • Hands-on experience with cloud-based AI platforms such as AWS SageMaker, AWS Q Business, AWS Bedrock, or Azure Machine Learning.
  • Experience in designing, developing, and deploying agentic AI systems, with a focus on creating autonomous agents that can reason, plan, and execute tasks to achieve specific goals.
  • Understanding of security concepts in AI systems, including vulnerabilities and mitigation strategies.
  • Solid knowledge of data processing, feature engineering, and working with large-scale datasets.
  • Experience in designing and implementing AI-native applications and agentic workflows using the Model Context Protocol (MCP) is nice to have.
  • Strong problem-solving skills, analytical thinking, and attention to detail.
  • Excellent communication skills with the ability to explain complex AI concepts to diverse audiences.

Day-to-day responsibilities:

  • Design and deploy AI-driven solutions to address specific security challenges, such as threat detection, vulnerability prioritization, and security automation.
  • Optimize LLM-based models for various security use cases, including chatbot development for security awareness or automated incident response.
  • Implement and manage RAG pipelines for enhanced LLM performance.
  • Integrate AI models with existing security tools, including Endpoint Detection and Response (EDR), Threat and Vulnerability Management (TVM) platforms, and Data Science/Analytics platforms. This will involve working with APIs and understanding data flows.
  • Develop and implement metrics to evaluate the performance of AI models; monitor deployed models for accuracy and performance, and retrain as needed.
  • Adhere to security best practices and ensure that all AI solutions are developed and deployed securely, considering data privacy and compliance requirements.
  • Work closely with other team members to understand security requirements and translate them into AI-driven solutions.
  • Communicate effectively with stakeholders, including senior management, to present project updates and findings.
  • Stay up to date with the latest advancements in AI/ML and security, and identify opportunities to leverage new technologies to improve our security posture.
  • Maintain thorough documentation of AI models, code, and processes.

What We Offer:

  • Opportunity to work on cutting-edge LLM and RAG projects with global impact.
  • A collaborative environment fostering innovation, research, and skill growth.
  • Competitive salary, comprehensive benefits, and flexible work arrangements.
  • The chance to shape AI-powered features in Sony's next-generation products.
  • Be able to function in an environment where the team is virtual and geographically dispersed.

Education Qualification: Graduate


Skills: AI, NLP, Python, Data science


Must-Haves

Skills

AI, NLP, Python, Data science

NP: Immediate – 30 Days

 

Read more
Kanerika Software

at Kanerika Software

3 candid answers
2 recruiters
Mounami J
Posted by Mounami J
Hyderabad, Indore, Ahmedabad
6 - 18 yrs
₹18L - ₹60L / yr
skill icon.NET
skill iconData Science
skill iconMongoDB
skill iconAngular (2+)

Job Location: Hyderabad, India.

Roles and Responsibilities:

  • The Sr .NET Data Engineer will be responsible for designing and developing scalable backend systems using .NET Core, Web API, and Azure-based data engineering tools like Databricks, MS Fabric, or Snowflake.
  • They will build and maintain data pipelines, optimize SQL/NoSQL databases, and ensure high-performance systems through design patterns and microservices architecture.
  • Strong communication skills and the ability to collaborate with US counterparts in an Agile environment are essential. Experience with Azure DevOps, Angular, and MongoDB is a plus.

Technical skills:

  • Strong hands-on experience with C#, SQL Server, OOP concepts, and microservices architecture.
  • At least one year of hands-on experience with .NET Core, ASP.NET Core, Web API, SQL, NoSQL, Entity Framework 6 or above, Azure, database performance tuning, applying design patterns, and Agile.
  • .NET back-end development with data engineering expertise.
  • Must have experience with Azure Data Engineering, Azure Databricks, MS Fabric as data platform/ Snowflake or similar tools.
  • Skill for writing reusable libraries.
  • Excellent Communication skills both oral & written.
  • Excellent troubleshooting and communication skills, with the ability to communicate clearly with US counterparts.

What we need?

  • Educational Qualification: B.Tech, B.E, MCA, M.Tech.
  • Experience: Minimum 6+ Years.
  • Work Mode: Must be willing to work from the office (onsite only).

Nice to Have:

  • Knowledge of Angular, MongoDB, NPM, and Azure DevOps build/release configuration.
  • Self-starter with solid analytical and problem-solving skills.
  • This is an experienced level position, and we train the qualified candidate in the required applications.
  • Willingness to work extra hours to meet deliverables.


Read more
KGISL MICROCOLLEGE
skillryt hr
Posted by skillryt hr
Remote only
5 - 8 yrs
₹10L - ₹15L / yr
Training and Development
Artificial Intelligence (AI)
DS
skill iconData Science
trainer

Job Title: Freelance AI & Data Science Trainer | 5+ Years Experience | Tamil Nadu

Location: Coimbatore / Tamil Nadu (Remote or Hybrid)

Engagement: Freelance / Contract-only

Experience: Minimum 5+ years (Industry + Training)

About the Role:

We are looking for an experienced Freelance AI & Data Science Trainer to deliver project-based, industry-relevant training sessions. The trainer should have deep expertise in Machine Learning, Deep Learning, and Python for Data Science, with the ability to guide learners through real-world use cases.

Requirements:

  • Minimum 5 years of experience in AI / Data Science (training or real-world projects).
  • Strong hands-on skills in Python, Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch.
  • Expertise in data analysis, ML algorithms, and deployment workflows.
  • Excellent communication and mentoring skills.
  • Freelancers only (no full-time employment).
  • Must be based in Tamil Nadu (preferably Coimbatore).

Compensation:

  • Per session / per batch payment (competitive, based on experience).
Read more
VyTCDC
Gobinath Sundaram
Posted by Gobinath Sundaram
Bengaluru (Bangalore), Pune, Hyderabad
6 - 12 yrs
₹5L - ₹28L / yr
skill iconData Science
skill iconPython
Large Language Models (LLM)

Job Description:

 

Role: Data Scientist

 

Responsibilities:

  • Lead data science and machine learning projects, contributing to model development, optimization, and evaluation.
  • Perform data cleaning, feature engineering, and exploratory data analysis.
  • Translate business requirements into technical solutions; document and communicate project progress; manage non-technical stakeholders.
  • Collaborate with other data scientists and engineers to deliver projects.

 

Technical Skills – Must have:

  • Experience in and understanding of the natural language processing (NLP) and large language model (LLM) landscape.
  • Proficiency with Python for data analysis and for supervised and unsupervised ML tasks.
  • Ability to translate complex machine learning problem statements into specific deliverables and requirements.
  • Experience with major cloud platforms such as AWS, Azure, or GCP.
  • Working knowledge of SQL and NoSQL databases.
  • Ability to create data and ML pipelines for more efficient and repeatable data science projects using MLOps principles.
  • Keeps abreast of new tools, algorithms, and techniques in machine learning, and works to implement them in the organization.
  • Strong understanding of evaluation and monitoring metrics for machine learning projects.
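One concrete reading of the repeatable-pipelines point above: bundle preprocessing and model into a single object, so the identical transforms run at training and at serving time. A hedged sketch with scikit-learn's Pipeline (synthetic data; in production the fitted pipeline would be versioned in a model registry):

```python
# Preprocessing + model as one Pipeline: the imputation and scaling
# fitted on training data are reapplied, unchanged, at inference time.
# The data below is invented for illustration.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 2.0], [np.nan, 2.5], [4.0, 5.0], [4.5, np.nan]] * 10)
y = np.array([0, 0, 1, 1] * 10)

pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # same fill at train/serve
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
])
pipe.fit(X, y)

# Inference applies imputation + scaling + model in one call
pred = pipe.predict([[np.nan, 4.0]])
print(pred)
```

Because the whole pipeline is one artifact, there is no train/serve skew from reimplementing preprocessing in a separate serving path.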

Read more
Gyansys Infotech
Bengaluru (Bangalore)
4 - 8 yrs
₹14L - ₹15L / yr
skill iconMachine Learning (ML)
skill iconDeep Learning
TensorFlow
Keras
PyTorch
+5 more

Role: Sr. Data Scientist

Exp: 4 -8 Years

CTC: up to 28 LPA


Technical Skills:

  • Strong programming skills in Python, with hands-on experience in deep learning frameworks like TensorFlow, PyTorch, or Keras.
  • Familiarity with Databricks notebooks, MLflow, and Delta Lake for scalable machine learning workflows.
  • Experience with MLOps best practices, including model versioning, CI/CD pipelines, and automated deployment.
  • Proficiency in data preprocessing, augmentation, and handling large-scale image/video datasets.
  • Solid understanding of computer vision algorithms, including CNNs, transfer learning, and transformer-based vision models (e.g., ViT).
  • Exposure to natural language processing (NLP) techniques is a plus.


Cloud & Infrastructure:

o Strong expertise in the Azure cloud ecosystem.

o Experience working in UNIX/Linux environments and using command-line tools for automation and scripting.


If interested, kindly share your updated resume at 82008 31681

Read more
GyanSys Inc.
Bengaluru (Bangalore)
4 - 8 yrs
₹14L - ₹15L / yr
skill iconMachine Learning (ML)
skill iconData Science
skill iconPython
PyTorch
TensorFlow
+5 more

Role: Sr. Data Scientist

Exp: 4-8 Years

CTC: up to 25 LPA



Technical Skills:

● Strong programming skills in Python, with hands-on experience in deep learning frameworks like TensorFlow, PyTorch, or Keras.

● Familiarity with Databricks notebooks, MLflow, and Delta Lake for scalable machine learning workflows.

● Experience with MLOps best practices, including model versioning, CI/CD pipelines, and automated deployment.

● Proficiency in data preprocessing, augmentation, and handling large-scale image/video datasets.

● Solid understanding of computer vision algorithms, including CNNs, transfer learning, and transformer-based vision models (e.g., ViT).

● Exposure to natural language processing (NLP) techniques is a plus.



Educational Qualifications:

  • B.E./B.Tech/M.Tech/MCA in Computer Science, Electronics & Communication, Electrical Engineering, or a related field.
  • A master’s degree in Computer Science, Artificial Intelligence, or a specialization in Deep Learning or Computer Vision is highly preferred.



If interested, share your resume at 82008 31681

Read more
GyanSys Inc.
Bengaluru (Bangalore)
4 - 8 yrs
₹14L - ₹15L / yr
skill iconData Science
CI/CD
Natural Language Processing (NLP)
skill iconMachine Learning (ML)
TensorFlow
+5 more

Role: Sr. Data Scientist

Exp: 4-8 Years

CTC: up to 25 LPA



Technical Skills:

● Strong programming skills in Python, with hands-on experience in deep learning frameworks like TensorFlow, PyTorch, or Keras.

● Familiarity with Databricks notebooks, MLflow, and Delta Lake for scalable machine learning workflows.

● Experience with MLOps best practices, including model versioning, CI/CD pipelines, and automated deployment.

● Proficiency in data preprocessing, augmentation, and handling large-scale image/video datasets.

● Solid understanding of computer vision algorithms, including CNNs, transfer learning, and transformer-based vision models (e.g., ViT).

● Exposure to natural language processing (NLP) techniques is a plus.



Educational Qualifications:

  • B.E./B.Tech/M.Tech/MCA in Computer Science, Electronics & Communication, Electrical Engineering, or a related field.
  • A master’s degree in Computer Science, Artificial Intelligence, or a specialization in Deep Learning or Computer Vision is highly preferred.


Read more
Internshala

at Internshala

5 recruiters
Gayatri Mudgil
Posted by Gayatri Mudgil
Gurugram
4 - 7 yrs
₹10L - ₹15L / yr
Natural Language Processing (NLP)
SQL
MS-Excel
skill iconMachine Learning (ML)
PowerBI
+3 more

💯What will you do?

  • Create and conduct engaging and informative Data Science classes that incorporate real-world examples and hands-on activities to ensure student engagement and retention.
  • Evaluate student projects to ensure they meet industry standards and provide personalised, constructive feedback to students to help them improve their skills and understanding.
  • Conduct viva sessions to assess student understanding and comprehension of the course materials. You will evaluate each student's ability to apply the concepts they have learned in real-world scenarios and provide feedback on their performance.
  • Conduct regular assessments to evaluate student progress, provide feedback to students, and identify areas for improvement in the curriculum.
  • Stay up-to-date with industry developments, best practices, and trends in Data Science, and incorporate this knowledge into course materials and instruction.
  • Work with the placements team to provide guidance and support to students as they navigate their job search, including resume and cover letter reviews, mock interviews, and career coaching.
  • Train the TAs to conduct doubt-clearing sessions and project evaluations


💯Who are we looking for?

We are looking for someone who has:

  • A minimum of 1-2 years of industry work experience in data science or a related field. Teaching experience is a plus.
  • In-depth knowledge of various aspects of data science, such as Python, MySQL, Power BI, Excel, machine learning with statistics, NLP, and deep learning.
  • Knowledge of AI tools like ChatGPT (latest versions as well), debugcode.ai, etc.
  • Passion for teaching and a desire to impart practical knowledge to students.
  • Excellent communication and interpersonal skills, with the ability to engage and motivate students of all levels.
  • Experience with curriculum development, lesson planning, and instructional design is a plus.
  • Familiarity with learning management systems (LMS) and digital teaching tools will be an added advantage.
  • Ability to work independently and as part of a team in a fast-paced, dynamic environment.


💯What do we offer in return?

  • Awesome colleagues & a great work environment - Internshala is known for its culture (see for yourself) and has twice been recognized as a Great Place To Work in the last 3 years
  • A massive learning opportunity to be an early member of a new initiative and experience building it from scratch
  • Competitive remuneration


💰 Compensation - Competitive remuneration based on your experience and skills

📅 Start date - Immediately

Read more
MindCrew Technologies

at MindCrew Technologies

3 recruiters
Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Pune
10 - 14 yrs
₹10L - ₹15L / yr
Snowflake
ETL
SQL
Snow flake schema
Data modeling
+3 more

Exp: 10+ Years

CTC: ₹1.7L per month

Location: Pune

SnowFlake Expertise Profile


Should hold 10+ years of experience, with a core understanding of cloud data warehouse principles and extensive experience in designing, building, optimizing, and maintaining robust, scalable data solutions on the Snowflake platform.

Possesses a strong background in data modelling, ETL/ELT, SQL development, performance tuning, scaling, monitoring, and security.


Responsibilities:

* Collaborate with the Data and ETL teams to review code, understand the current architecture, and help improve it based on Snowflake offerings and experience.

* Review and implement best practices to design, develop, maintain, scale, and efficiently monitor data pipelines and data models on the Snowflake platform for ETL and BI.

* Optimize complex SQL queries for data extraction, transformation, and loading within Snowflake.

* Ensure data quality, integrity, and security within the Snowflake environment.

* Participate in code reviews and contribute to the team's development standards.

Education:

* Bachelor’s degree in Computer Science, Data Science, Information Technology, or an equivalent field.

* Relevant Snowflake certifications are a plus (e.g., SnowPro Core or SnowPro Advanced: Architect).

Read more
QAgile Services

at QAgile Services

1 recruiter
Radhika Chotai
Posted by Radhika Chotai
Bengaluru (Bangalore)
4 - 6 yrs
₹12L - ₹25L / yr
skill iconData Science
skill iconPython
skill iconMachine Learning (ML)
PowerBI
SQL
+5 more

Proven experience as a Data Scientist or in a similar role, with at least 4 years of relevant experience and 6-8 years of total experience.


· Technical expertise in data models, database design and development, data mining, and segmentation techniques


· Strong knowledge of and experience with reporting packages (e.g., Business Objects), databases, and programming within ETL frameworks


· Experience with data movement and management in the cloud using Azure or AWS services


· Hands on experience in data visualization tools – Power BI preferred


· Solid understanding of machine learning


· Knowledge of data management and visualization techniques


· A knack for statistical analysis and predictive modeling


· Good knowledge of Python and MATLAB


· Experience with SQL and NoSQL databases including ability to write complex queries and procedures

Read more
NeoGenCode Technologies Pvt Ltd
Ritika Verma
Posted by Ritika Verma
Gurugram
4 - 6 yrs
₹5L - ₹14L / yr
Large Language Models (LLM)
skill iconData Science
Natural Language Processing (NLP)
Recurrent neural network (RNN)


We’re searching for an experienced Data Scientist with a strong background in NLP and large language models to join our innovative team! If you thrive on solving complex language problems and are hands-on with spaCy, NER, RNN, LSTM, Transformers, and LLMs (like GPT), we want to connect.

What You’ll Do:

  • Build & deploy advanced NLP solutions: entity recognition, text classification, and more.
  • Fine-tune and train state-of-the-art deep learning models (RNN, LSTM, Transformer, GPT).
  • Apply libraries like spaCy for NER and text processing.
  • Collaborate across teams to integrate AI-driven features.
  • Preprocess, annotate, and manage data workflows.
  • Analyze model performance and drive continuous improvement.
  • Stay current with AI/NLP breakthroughs and advocate innovation.

What You Bring:

  • 4-5+ years of industry experience in data science/NLP.
  • Strong proficiency in Python, spaCy, NLTK, PyTorch or TensorFlow.
  • Hands-on with NER, custom pipelines, and prompt engineering.
  • Deep understanding and experience with RNN, LSTM, Transformer, and LLMs/GPT.
  • Collaborative and independent problem solver.

Nice to Have:

  • Experience deploying NLP models (Docker, cloud).
  • MLOps, vector databases, RAG, semantic search.
  • Annotation tools and team management.

Why Join Us?

  • Work with cutting-edge technology and real-world impact.
  • Flexible hours, remote options, and a supportive, inclusive culture.
  • Competitive compensation and benefits.

Ready to push the boundaries of AI with us? Apply now or DM for more info!

Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Gurugram
4 - 6 yrs
₹10L - ₹18L / yr
skill iconData Science
Natural Language Processing (NLP)
Large Language Models (LLM)
spaCy
Named-entity recognition
+4 more

Job Title : Data Scientist – NLP & LLM

Experience Required : 4 to 5+ Years

Location : Gurugram

Notice Period : Immediate Joiner Preferred

Employment Type : Full-Time


Job Summary :

We are seeking a highly skilled Data Scientist with strong expertise in Natural Language Processing (NLP) and modern deep learning techniques. The ideal candidate will have hands-on experience working with NER, RNN, LSTM, Transformers, GPT models, and Large Language Models (LLMs), including frameworks such as spaCy.


Mandatory Skills : NLP, spaCy, NER (Named-entity Recognition), RNN (Recurrent Neural Network), LSTM (Long Short-Term Memory), Transformers, GPT, LLMs, Python


Key Responsibilities :

  • Design and implement NLP models for tasks like text classification, entity recognition, summarization, and question answering.
  • Develop and optimize models using deep learning architectures such as RNN, LSTM, and Transformer-based models.
  • Fine-tune or build models using pre-trained LLMs such as GPT, BERT, etc.
  • Work with tools and libraries including spaCy, Hugging Face Transformers, and other relevant frameworks.
  • Perform data preprocessing, feature extraction, and training pipeline development.
  • Evaluate model performance and iterate with scalable solutions.
  • Collaborate with engineering and product teams to integrate models into production.
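The text-classification task listed above can be made concrete with a deliberately tiny, stdlib-only toy (the class labels, training texts, and scoring rule below are invented for illustration — real work would use the Transformer/LLM stack named in this posting):

```python
from collections import Counter

# Toy text classifier: score a document against per-class word counts
# learned from a tiny labelled corpus, and predict the best-scoring class.

train = [
    ("invoice payment overdue account", "finance"),
    ("payment received thank you", "finance"),
    ("team meeting schedule agenda", "ops"),
    ("agenda for the weekly meeting", "ops"),
]

def fit(samples):
    """Build one word-count table per class label."""
    counts = {}
    for text, label in samples:
        counts.setdefault(label, Counter()).update(text.split())
    return counts

def predict(counts, text):
    """Score = training-time occurrences of the document's tokens per class."""
    scores = {label: sum(c[tok] for tok in text.split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

model = fit(train)
print(predict(model, "overdue invoice for payment"))  # finance
print(predict(model, "meeting agenda"))               # ops
```

A production pipeline replaces the count table with learned representations (LSTM states, Transformer embeddings, or a fine-tuned LLM head), but the fit/predict contract stays the same.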

Required Skills :

  • 4 to 5+ years of hands-on experience in Data Science and NLP.
  • Strong understanding of NER, RNN, LSTM, Transformers, GPT, and LLM architectures.
  • Experience with spaCy, TensorFlow, PyTorch, and Hugging Face Transformers.
  • Proficient in Python and its ML ecosystem (NumPy, pandas, scikit-learn, etc.).
  • Familiarity with prompt engineering and fine-tuning LLMs is a plus.
  • Excellent problem-solving and communication skills.
Read more
Zolvit (formerly Vakilsearch)

at Zolvit (formerly Vakilsearch)

1 video
2 recruiters
Lakshmi J
Posted by Lakshmi J
Bengaluru (Bangalore), Chennai
1 - 4 yrs
₹10L - ₹15L / yr
skill iconMachine Learning (ML)
skill iconData Science
Generative AI
Artificial Intelligence (AI)
Natural Language Processing (NLP)
+1 more

About the Role

We are seeking an innovative Data Scientist specializing in Natural Language Processing (NLP) to join our technology team in Bangalore. The ideal candidate will harness the power of language models and document extraction techniques to transform legal information into accessible, actionable insights for our clients.

Responsibilities

  • Develop and implement NLP solutions to automate legal document analysis and extraction
  • Create and optimize prompt engineering strategies for large language models
  • Design search functionality leveraging semantic understanding of legal documents
  • Build document extraction pipelines to process unstructured legal text data
  • Develop data visualizations using PowerBI and Tableau to communicate insights
  • Collaborate with product and legal teams to enhance our tech-enabled services
  • Continuously improve model performance and user experience
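The document-search responsibility above reduces to ranking documents against a query by vector similarity. As a stdlib-only sketch (the document names and texts are invented; a real system would use semantic embeddings rather than raw word counts):

```python
import math
from collections import Counter

# Toy document search: rank documents against a query by cosine
# similarity of bag-of-words vectors.

docs = {
    "nda": "confidentiality agreement between two parties",
    "lease": "rental agreement for property between landlord and tenant",
    "will": "testament distributing property after death",
}

def vec(text):
    """Bag-of-words vector as a token -> count mapping."""
    return Counter(text.split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query):
    """Return the name of the best-matching document."""
    q = vec(query)
    return max(docs, key=lambda d: cosine(q, vec(docs[d])))

print(search("property rental agreement"))  # lease
```

Swapping `vec` for an embedding model turns this word-overlap ranking into the semantic search described in the posting; the cosine-ranking scaffold is unchanged.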

Requirements

  • Bachelor's degree in relevant field
  • 1-5 years of professional experience in data science, with focus on NLP applications
  • Demonstrated experience working with LLM APIs (e.g., OpenAI, Anthropic)
  • Proficiency in prompt engineering and optimization techniques
  • Experience with document extraction and information retrieval systems
  • Strong skills in data visualization tools, particularly PowerBI and Tableau
  • Excellent programming skills in Python and familiarity with NLP libraries
  • Strong understanding of legal terminology and document structures (preferred)
  • Excellent communication skills in English

What We Offer

  • Competitive salary and benefits package
  • Opportunity to work at India's largest legal tech company
  • Professional growth in the fast-evolving legal technology sector
  • Collaborative work environment with industry experts
  • Modern office located in Bangalore
  • Flexible work arrangements


Qualified candidates are encouraged to apply with a resume highlighting relevant experience with NLP, prompt engineering, and data visualization tools.

Location: Bangalore, India



Read more
TNQ Tech Pvt Ltd
Ramprasad Balasubramanian (TNQ Tech)
Posted by Ramprasad Balasubramanian (TNQ Tech)
Chennai
4 - 8 yrs
₹20L - ₹30L / yr
skill iconData Science

Company Description

TNQTech is a publishing technology and services company. Our AI-enabled technology and products deliver content services to some of the largest commercial publishers, prestigious learned societies, associations, and university presses. These services reach millions of authors through our clientele. We are dedicated to advancing publishing technology and providing innovative solutions to our users.


Role Description

This is a full-time on-site role for a Senior Data Scientist - Lead Role located in Chennai. The Senior Data Scientist will lead the data science team, conduct statistical analyses, develop data models, and interpret complex data to provide actionable insights. Additionally, responsibilities include overseeing data analytics projects, creating data visualizations, and ensuring the accuracy and quality of data analysis. Collaboration with cross-functional teams to understand data needs and drive data-driven decision-making is also key.


Qualifications

  • Proficiency in Data Science and Data Analysis
  • Strong background in Statistics and Data Analytics
  • Experience with Data Visualization tools and techniques
  • Excellent problem-solving and analytical skills
  • Ability to lead and mentor a team of data scientists
  • Strong communication and collaboration skills
  • Master's or Ph.D. in Data Science, Statistics, Computer Science, or a related field
Read more
AI Startup company

Agency job
via People Impact by Ranjita Shrivastava
Bengaluru (Bangalore)
8 - 20 yrs
₹15L - ₹30L / yr
doctoral
skill iconData Science
Mathematics
Teaching
  1. Curriculum Development: Collaborate with Academic Advisory Committee and Marketing team members to design and develop comprehensive curriculum for data science programs at undergraduate and graduate levels. Ensure alignment with industry trends, emerging technologies, and best practices in data science education.
  2. Faculty Recruitment and Development: Lead the recruitment, selection, and development of faculty members with expertise in data science. Provide mentorship, support, and professional development opportunities to faculty to enhance teaching effectiveness and academic performance.
  3. Quality Assurance: Establish and maintain robust mechanisms for quality assurance and academic oversight. Implement assessment strategies, evaluation criteria, and continuous improvement processes to ensure the delivery of high-quality education and student outcomes.
  4. Student Engagement: Foster a culture of student engagement, innovation, and success. Develop initiatives to support student learning, retention, and career readiness in the field of data science. Provide academic counselling and support services to students as needed.
  5. Industry Collaboration: Collaborate with industry partners, employers, and professional organizations to enhance experiential learning opportunities, internships, and job placement prospects for students. Organize industry events, guest lectures, and networking opportunities to facilitate knowledge exchange and industry engagement.
  6. Research and Innovation: Encourage research and innovation in data science education. Facilitate faculty research projects, interdisciplinary collaborations, and scholarly activities to advance knowledge and contribute to the academic community.

  7. Budget Management: Develop and manage the academic budget in collaboration with the finance department. Ensure efficient allocation of resources to support academic programs, faculty development, and student services.

Read more
KGISL MICROCOLLEGE
Agency job
via EWU by Pavasshrie Muruganandham
Thrissur
2 - 5 yrs
₹2L - ₹6L / yr
skill iconData Analytics
skill iconData Science
trainer
PowerBI
Tableau
+3 more

We are seeking a dynamic and experienced Data Analytics and Data Science Trainer to deliver high-quality training sessions, mentor learners, and design engaging course content. The ideal candidate will have a strong foundation in statistics, programming, and data visualization tools, and should be passionate about teaching and guiding aspiring professionals.

Read more
QuaXigma IT solutions Private Limited
Tirupati
3 - 5 yrs
₹6L - ₹10L / yr
skill iconPython
skill iconMachine Learning (ML)
SQL
EDA
skill iconData Analytics
+3 more

Data Scientist

Job Id: QX003

About Us:

QX impact was launched with a mission to make AI accessible and affordable, and to deliver AI products and solutions at scale for enterprises by bringing the power of data, AI, and engineering to drive digital transformation. We believe that without insights, businesses will continue to struggle to understand their customers, and may even lose them; that without insights, businesses won't be able to deliver differentiated products and services; and that without insights, businesses can't reach the new level of operational excellence that is crucial to staying competitive, meeting rising customer expectations, expanding into new markets, and digitalizing.

Position Overview:

We are seeking a collaborative and analytical Data Scientist who can bridge the gap between business needs and data science capabilities. In this role, you will lead and support projects that apply machine learning, AI, and statistical modeling to generate actionable insights and drive business value.

Key Responsibilities:

  • Collaborate with stakeholders to define and translate business challenges into data science solutions.
  • Conduct in-depth data analysis on structured and unstructured datasets.
  • Build, validate, and deploy machine learning models to solve real-world problems.
  • Develop clear visualizations and presentations to communicate insights.
  • Drive end-to-end project delivery, from exploration to production.
  • Contribute to team knowledge sharing and mentorship activities.

Must-Have Skills:

  • 3+ years of progressive experience in data science, applied analytics, or a related quantitative role, demonstrating a proven track record of delivering impactful data-driven solutions.
  • Exceptional programming proficiency in Python, including extensive experience with core libraries such as Pandas, NumPy, Scikit-learn, NLTK and XGBoost. 
  • Expert-level SQL skills for complex data extraction, transformation, and analysis from various relational databases.
  • Deep understanding and practical application of statistical modeling and machine learning techniques, including but not limited to regression, classification, clustering, time series analysis, and dimensionality reduction.
  • Proven expertise in end-to-end machine learning model development lifecycle, including robust feature engineering, rigorous model validation and evaluation (e.g., A/B testing), and model deployment strategies.
  • Demonstrated ability to translate complex business problems into actionable analytical frameworks and data science solutions, driving measurable business outcomes.
  • Proficiency in advanced data analysis techniques, including Exploratory Data Analysis (EDA), customer segmentation (e.g., RFM analysis), and cohort analysis, to uncover actionable insights.
  • Experience in designing and implementing data models, including logical and physical data modeling, and developing source-to-target mappings for robust data pipelines.
  • Exceptional communication skills, with the ability to clearly articulate complex technical findings, methodologies, and recommendations to diverse business stakeholders (both technical and non-technical audiences).
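The RFM (Recency, Frequency, Monetary) segmentation named in the skills above can be sketched with the standard library alone (the customers, orders, and threshold values below are invented; real cutoffs would come from quantiles of the customer base):

```python
from datetime import date

# Toy RFM computation: fold each customer's orders into a
# (recency_days, frequency, monetary) triple, then apply a simple
# threshold-based segment label.

orders = [
    ("c1", date(2024, 6, 1), 120.0),
    ("c1", date(2024, 6, 20), 80.0),
    ("c2", date(2024, 1, 5), 40.0),
]
today = date(2024, 7, 1)

def rfm(orders, today):
    """Aggregate orders into per-customer RFM triples."""
    stats = {}
    for cust, day, amount in orders:
        rec, freq, mon = stats.get(cust, (10**9, 0, 0.0))
        stats[cust] = (min(rec, (today - day).days), freq + 1, mon + amount)
    return stats

def segment(rec, freq, mon):
    """Illustrative thresholds only; real cutoffs come from quantiles."""
    if rec <= 30 and freq >= 2:
        return "loyal"
    return "at_risk"

stats = rfm(orders, today)
print(stats["c1"])            # (11, 2, 200.0)
print(segment(*stats["c1"]))  # loyal
print(segment(*stats["c2"]))  # at_risk
```

In practice the same aggregation is one `groupby` in Pandas or SQL; the value of the exercise is choosing segment cutoffs that map to business actions.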

Good-to-Have Skills:

  • Experience with cloud platforms (Azure, AWS, GCP) and specific services like Azure ML, Synapse, Azure Kubernetes and Databricks.
  • Familiarity with big data processing tools like Apache Spark or Hadoop.
  • Exposure to MLOps tools and practices (e.g., MLflow, Docker, Kubeflow) for model lifecycle management.
  • Knowledge of deep learning libraries (TensorFlow, PyTorch) or experience with Generative AI (GenAI) and Large Language Models (LLMs).
  • Proficiency with business intelligence and data visualization tools such as Tableau, Power BI, or Plotly.
  • Experience working within Agile project delivery methodologies.

Competencies:

·        Tech Savvy - Anticipating and adopting innovations in business-building digital and technology applications.

·        Self-Development - Actively seeking new ways to grow and be challenged using both formal and informal development channels.

·        Action Oriented - Taking on new opportunities and tough challenges with a sense of urgency, high energy, and enthusiasm.

·        Customer Focus - Building strong customer relationships and delivering customer-centric solutions.

·        Optimizes Work Processes - Knowing the most effective and efficient processes to get things done, with a focus on continuous improvement.

Why Join Us?

  • Be part of a collaborative and agile team driving cutting-edge AI and data engineering solutions.
  • Work on impactful projects that make a difference across industries.
  • Opportunities for professional growth and continuous learning.
  • Competitive salary and benefits package.

 


Read more
KJBN labs

at KJBN labs

2 candid answers
sakthi ganesh
Posted by sakthi ganesh
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹30L / yr
Hadoop
Apache Kafka
Spark
skill iconPython
skill iconJava
+8 more

Senior Data Engineer Job Description

Overview

The Senior Data Engineer will design, develop, and maintain scalable data pipelines and infrastructure to support data-driven decision-making and advanced analytics. This role requires deep expertise in data engineering, strong problem-solving skills, and the ability to collaborate with cross-functional teams to deliver robust data solutions.

Key Responsibilities


  • Data Pipeline Development: Design, build, and optimize scalable, secure, and reliable data pipelines to ingest, process, and transform large volumes of structured and unstructured data.
  • Data Architecture: Architect and maintain data storage solutions, including data lakes, data warehouses, and databases, ensuring performance, scalability, and cost-efficiency.
  • Data Integration: Integrate data from diverse sources, including APIs, third-party systems, and streaming platforms, ensuring data quality and consistency.
  • Performance Optimization: Monitor and optimize data systems for performance, scalability, and cost, implementing best practices for partitioning, indexing, and caching.
  • Collaboration: Work closely with data scientists, analysts, and software engineers to understand data needs and deliver solutions that enable advanced analytics, machine learning, and reporting.
  • Data Governance: Implement data governance policies, ensuring compliance with data security, privacy regulations (e.g., GDPR, CCPA), and internal standards.
  • Automation: Develop automated processes for data ingestion, transformation, and validation to improve efficiency and reduce manual intervention.
  • Mentorship: Guide and mentor junior data engineers, fostering a culture of technical excellence and continuous learning.
  • Troubleshooting: Diagnose and resolve complex data-related issues, ensuring high availability and reliability of data systems.
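The ingest/validate/transform flow these responsibilities describe can be sketched in a few stdlib functions (record shapes and the cents-conversion rule here are invented for illustration; a real pipeline would run on an orchestrator such as Airflow, not bare functions):

```python
# Toy ingest -> validate -> transform pipeline: bad records are quarantined
# rather than crashing the run, a core data-quality pattern.

def ingest(raw_lines):
    """Parse CSV-ish lines into records."""
    records = []
    for line in raw_lines:
        user_id, amount = line.split(",")
        records.append({"user_id": user_id, "amount": amount})
    return records

def validate(records):
    """Split records into good and rejected (non-numeric amounts)."""
    good, rejected = [], []
    for r in records:
        try:
            r["amount"] = float(r["amount"])
            good.append(r)
        except ValueError:
            rejected.append(r)
    return good, rejected

def transform(records):
    """Example business rule: store amounts as integer cents."""
    return [{**r, "amount_cents": int(round(r["amount"] * 100))}
            for r in records]

raw = ["u1,19.99", "u2,oops", "u3,5"]
good, rejected = validate(ingest(raw))
loaded = transform(good)
print(len(loaded), len(rejected))  # 2 1
print(loaded[0]["amount_cents"])   # 1999
```

The rejected list would normally be written to a dead-letter location and alerted on, so data-quality failures are visible instead of silently dropped.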

Required Qualifications

  • Education: Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
  • Experience: 5+ years of experience in data engineering or a related role, with a proven track record of building scalable data pipelines and infrastructure.
  • Technical Skills:
      • Proficiency in programming languages such as Python, Java, or Scala.
      • Expertise in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
      • Strong experience with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., Redshift, BigQuery, Snowflake).
      • Hands-on experience with ETL/ELT tools (e.g., Apache Airflow, Talend, Informatica) and data integration frameworks.
      • Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) and distributed systems.
      • Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus.
  • Soft Skills:
      • Excellent problem-solving and analytical skills.
      • Strong communication and collaboration abilities.
      • Ability to work in a fast-paced, dynamic environment and manage multiple priorities.
  • Certifications (optional but preferred): Cloud certifications (e.g., AWS Certified Data Analytics, Google Professional Data Engineer) or relevant data engineering certifications.

Preferred Qualifications

  • Experience with real-time data processing and streaming architectures.
  • Familiarity with machine learning pipelines and MLOps practices.
  • Knowledge of data visualization tools (e.g., Tableau, Power BI) and their integration with data pipelines.
  • Experience in industries with high data complexity, such as finance, healthcare, or e-commerce.

Work Environment

  • Location: Hybrid/Remote/On-site (depending on company policy).
  • Team: Collaborative, cross-functional team environment with data scientists, analysts, and business stakeholders.
  • Hours: Full-time, with occasional on-call responsibilities for critical data systems.

Read more
HaystackAnalytics
Careers Hr
Posted by Careers Hr
Navi Mumbai
1 - 4 yrs
₹6L - ₹12L / yr
skill iconRust
skill iconPython
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
skill iconData Science
+2 more

Position – Python Developer

Location – Navi Mumbai


Who are we

Based out of IIT Bombay, HaystackAnalytics is a HealthTech company creating clinical genomics products, which enable diagnostic labs and hospitals to offer accurate and personalized diagnostics. Supported by India's most respected science agencies (DST, BIRAC, DBT), we created and launched a portfolio of products to offer genomics in infectious diseases. Our genomics-based diagnostic solution for Tuberculosis was recognized as one of the top innovations supported by BIRAC in the past 10 years, and was launched by the Prime Minister of India in the BIRAC Showcase event in Delhi, 2022.


Objectives of this Role:

  • Design and implement efficient, scalable backend services using Python.
  • Work closely with healthcare domain experts to create innovative and accurate diagnostics solutions.
  • Build APIs, services, and scripts to support data processing pipelines and front-end applications.
  • Automate recurring tasks and ensure robust integration with cloud services.
  • Maintain high standards of software quality and performance using clean coding principles and testing practices.
  • Collaborate within the team to upskill and unblock each other for faster and better outcomes.





Primary Skills – Python Development

  • Proficient in Python 3 and its ecosystem
  • Frameworks: Flask / Django / FastAPI
  • RESTful API development
  • Understanding of OOPs and SOLID design principles
  • Asynchronous programming (asyncio, aiohttp)
  • Experience with task queues (Celery, RQ)
  • Rust programming experience for systems-level or performance-critical components
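The asynchronous-programming skill listed above (asyncio) is easiest to see in a fan-out example; the coroutine names and delays below are invented, with `asyncio.sleep` standing in for real network or database calls:

```python
import asyncio

# Fan out several I/O-bound "fetches" concurrently with asyncio.gather:
# all coroutines wait at the same time, so total time is ~max(delay),
# not the sum of delays.

async def fetch(name, delay):
    await asyncio.sleep(delay)
    return f"{name}:done"

async def main():
    results = await asyncio.gather(
        fetch("a", 0.01),
        fetch("b", 0.02),
        fetch("c", 0.01),
    )
    return results  # order matches the argument order, not completion order

print(asyncio.run(main()))  # ['a:done', 'b:done', 'c:done']
```

This is the pattern behind async views in FastAPI and aiohttp clients: any handler that mostly waits on I/O can serve other work while it waits.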

Testing & Automation

  • Unit Testing: PyTest / unittest
  • Automation tools: Ansible / Terraform (good to have)
  • CI/CD pipelines

DevOps & Cloud

  • Docker, Kubernetes (basic knowledge expected)
  • Cloud platforms: AWS / Azure / GCP
  • GIT and GitOps workflows
  • Familiarity with containerized deployment & serverless architecture

Bonus Skills

  • Data handling libraries: Pandas / NumPy
  • Experience with scripting: Bash / PowerShell
  • Functional programming concepts
  • Familiarity with front-end integration (REST API usage, JSON handling)

 Other Skills

  • Innovation and thought leadership
  • Interest in learning new tools, languages, workflows
  • Strong communication and collaboration skills
  • Basic understanding of UI/UX principles


To know more about us: https://haystackanalytics.in




Read more
Client based at Bangalore location.

Agency job
Remote only
4 - 20 yrs
₹12L - ₹30L / yr
clinical trial data
EMR
electronics medical record
claims
registry data
+5 more

Position Overview:

We are seeking a highly motivated and skilled Real-World Evidence (RWE) Analyst to join our growing team. The successful candidate will be instrumental in generating crucial insights from real-world healthcare data to inform decision-making, improve patient outcomes, and advance medical understanding. This role offers an exciting opportunity to work with diverse healthcare datasets and contribute to impactful research that drives real-world change.

Key Responsibilities:

For Both RWE Analyst (Junior) & Senior RWE Analyst:

  • Data Expertise: Work extensively with real-world healthcare data, including Electronic Medical Records (EMR), claims data, and/or patient registries. (Experience with clinical trial data does not fulfill this requirement.)
  • Methodology: Apply appropriate statistical and epidemiological methodologies to analyze complex healthcare datasets.
  • Communication: Clearly communicate findings through presentations, reports, and data visualizations to both technical and non-technical audiences.
  • Collaboration: Collaborate effectively with cross-functional teams, including clinicians, epidemiologists, statisticians, and data scientists.
  • Quality Assurance: Ensure the accuracy, reliability, and validity of all analyses and reports.
  • Ethical Conduct: Adhere to all relevant data privacy regulations and ethical guidelines in real-world data research.

Specific Responsibilities for RWE Analyst (Junior):

  • Perform statistical analysis on real-world healthcare datasets under guidance.
  • Contribute to the development of analysis plans, often by implementing predefined methodologies or refining existing approaches.
  • Prepare and clean data for analysis, identifying and addressing data quality issues.
  • Assist in the interpretation of study results and the drafting of reports or presentations.
  • Support the preparation of journal publication materials based on RWE studies.

Specific Responsibilities for Senior RWE Analyst:

  • Analysis Design & Leadership: Independently design and develop comprehensive analysis plans from inception for RWE studies, identifying appropriate methodologies, data sources, and analytical approaches. This role requires a "thinker" who can conceptualize and drive the analytical strategy, not just execute pre-defined requests.
  • Project Management: Lead and manage RWE projects from conception to completion, ensuring timely delivery and high-quality outputs.
  • Mentorship: Mentor and guide junior RWE analysts, fostering their development in real-world data analysis and research.
  • Methodological Innovation: Proactively identify and evaluate new methodologies and technologies to enhance RWE capabilities.
  • Strategic Input: Provide strategic input on study design, data acquisition, and evidence generation strategies.

Qualifications:

For Both RWE Analyst (Junior) & Senior RWE Analyst:

  • Bachelor's or Master's degree in Epidemiology, Biostatistics, Public Health, Health Economics, Data Science, or a related quantitative field. (PhD preferred for Senior RWE Analyst).
  • Demonstrable hands-on experience working with real-world healthcare data, specifically EMR, claims, and/or registry data. Clinical trial data experience will not be considered as meeting this requirement.
  • Proficiency in at least one statistical programming language (e.g., R, Python, SAS, SQL).
  • Strong understanding of epidemiological study designs and statistical methods relevant to RWE.
  • Excellent analytical, problem-solving, and critical thinking skills.
  • Strong written and verbal communication skills.

Specific Qualifications for RWE Analyst (Junior):

  • 4+ years of experience in real-world data analysis in a healthcare or pharmaceutical setting.
  • Involvement with journal publications (e.g., co-authorship or contribution to manuscript preparation) is highly desirable.

Specific Qualifications for Senior RWE Analyst:

  • 5+ years of progressive experience in real-world data analysis, with a significant portion dedicated to independent study design and leadership.
  • A strong track record of journal publications (e.g., as lead author or a significant contributor to multiple peer-reviewed publications) is essential.
  • Proven ability to translate complex analytical findings into actionable insights for diverse stakeholders.
  • Experience with advanced analytical techniques (e.g., machine learning, causal inference) is a plus.

Preferred Skills (for both roles, but more emphasized for Senior):

  • Experience with large healthcare databases (e.g., IQVIA, Optum, IBM MarketScan, SEER, NDHM).
  • Knowledge of common data models (e.g., OMOP CDM).
  • Familiarity with regulatory guidelines and best practices for RWE generation.


VyTCDC
Gobinath Sundaram
Posted by Gobinath Sundaram
Hyderabad, Bengaluru (Bangalore), Pune
6 - 11 yrs
₹8L - ₹26L / yr
Data Science
Python
Large Language Models (LLM)
Natural Language Processing (NLP)

POSITION / TITLE: Data Science Lead

Location: Offshore – Hyderabad/Bangalore/Pune

Who are we looking for?

Individuals with 8+ years of experience implementing and managing data science projects. Excellent working knowledge of traditional machine learning and LLM techniques. 

The candidate must demonstrate the ability to navigate and advise on complex ML ecosystems from a model-building and evaluation perspective. Experience in the NLP and chatbot domains is preferred.

We acknowledge that the job market is blurring the line between data roles: while software skills are necessary, the emphasis of this position is on data science skills, not on data, ML, or software engineering.

Responsibilities:

· Lead data science and machine learning projects, contributing to model development, optimization and evaluation. 

· Perform data cleaning, feature engineering, and exploratory data analysis.  

· Translate business requirements into technical solutions, document and communicate project progress, manage non-technical stakeholders.

· Collaborate with other DS and engineers to deliver projects.

Technical Skills – Must have:

· Experience in and understanding of the natural language processing (NLP) and large language model (LLM) landscape.

· Proficiency with Python for data analysis and for supervised and unsupervised ML tasks.

· Ability to translate complex machine learning problem statements into specific deliverables and requirements.

· Should have worked with major cloud platforms such as AWS, Azure or GCP.

· Working knowledge of SQL and NoSQL databases.

· Ability to create data and ML pipelines for more efficient and repeatable data science projects using MLOps principles.

· Keeps abreast of new tools, algorithms, and techniques in machine learning and works to implement them in the organization.

· Strong understanding of evaluation and monitoring metrics for machine learning projects.
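The evaluation-metrics bullet above can be made concrete with a small, dependency-free sketch of precision, recall, and F1 for a binary classifier (in practice a library such as scikit-learn's metrics module would be used):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    # Count true positives, false positives, and false negatives
    # for the chosen positive class.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f1 = precision_recall_f1([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```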

Technical Skills – Good to have:

· Track record of getting ML models into production

· Experience building chatbots.

· Experience with closed and open source LLMs.

· Experience with frameworks and technologies like scikit-learn, BERT, LangChain, and AutoGen.

· Certifications or courses in data science.

Education:

· Master's/Bachelor's/PhD degree in Computer Science, Engineering, Data Science, or a related field.

Process Skills:

· Understanding of Agile and Scrum methodologies.

· Ability to follow SDLC processes and contribute to technical documentation.  

Behavioral Skills :

· Self-motivated and capable of working independently with minimal management supervision.

· Well-developed design, analytical, and problem-solving skills.

· Excellent communication and interpersonal skills.  

· Excellent team player, able to work with virtual teams in several time zones.

Client based at Bangalore location.

Agency job
Remote only
4 - 12 yrs
₹12L - ₹40L / yr
Data scientist
Data Science
Prompt engineering
Python
Artificial Intelligence (AI)

Role: Data Scientist

Location: Bangalore (Remote)

Experience: 4 - 15 years


Skills Required - Radiology, visual images, text, classical models, multi-modal LLMs, primarily Generative AI, Prompt Engineering, Large Language Models, Speech & Text domain AI, Python coding, AI skills, Real-World Evidence, Healthcare domain

 

JOB DESCRIPTION

We are seeking an experienced Data Scientist with a proven track record in Machine Learning, Deep Learning, and a demonstrated focus on Large Language Models (LLMs) to join our cutting-edge Data Science team. You will play a pivotal role in developing and deploying innovative AI solutions that deliver real-world impact for patients and healthcare providers.

Responsibilities

• LLM Development and Fine-tuning: fine-tune, customize, and adapt large language models (e.g., GPT, Llama 2, Mistral) for specific business applications and NLP tasks such as text classification, named entity recognition, sentiment analysis, summarization, and question answering. Experience with other transformer-based NLP models such as BERT will be an added advantage.

• Data Engineering: collaborate with data engineers to develop efficient data pipelines, ensuring the quality and integrity of large-scale text datasets used for LLM training and fine-tuning.

• Experimentation and Evaluation: develop rigorous experimentation frameworks to evaluate model performance, identify areas for improvement, and inform model selection. Experience in LLM testing frameworks such as TruLens will be an added advantage.

• Production Deployment: work closely with MLOps and Data Engineering teams to integrate models into scalable production systems.

• Predictive Model Design and Implementation: leverage machine learning/deep learning and LLM methods to design, build, and deploy predictive models in oncology (e.g., survival models).

• Cross-functional Collaboration: partner with product managers, domain experts, and stakeholders to understand business needs and drive the successful implementation of data science solutions

• Knowledge Sharing: mentor junior team members and stay up to date with the latest advancements in machine learning and LLMs
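As a sketch of the experimentation-and-evaluation responsibility above, here is a dependency-free exact-match scorer for comparing model outputs against reference answers; a real evaluation harness would add task-specific normalization and richer metrics, and frameworks such as TruLens cover this ground more fully:

```python
def exact_match_score(predictions, references):
    # Fraction of model outputs that match the reference answer
    # after basic normalization (case and surrounding whitespace).
    def normalize(text):
        return text.strip().lower()

    matches = sum(
        1 for pred, ref in zip(predictions, references)
        if normalize(pred) == normalize(ref)
    )
    return matches / len(references) if references else 0.0

score = exact_match_score(
    ["Paris", " berlin ", "Madrid"],
    ["paris", "Berlin", "Rome"],
)
```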

Qualifications Required

• Doctoral or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field.

• 5+ years of hands-on experience in designing, implementing, and deploying machine learning and deep learning models

• 12+ months of in-depth experience working with LLMs. Proficiency in Python and NLP-focused libraries (e.g., spaCy, NLTK, Transformers, TensorFlow/PyTorch).

• Experience working with cloud-based platforms (AWS, GCP, Azure)

Additional Skills

• Excellent problem-solving and analytical abilities

• Strong communication skills, both written and verbal

• Ability to thrive in a collaborative and fast-paced environment

