
50+ Data Science Jobs in India

Apply to 50+ Data Science Jobs on CutShort.io. Find your next job, effortlessly. Browse Data Science Jobs and apply today!

Proximity Works
Posted by Eman Khan
Remote only
5 - 10 yrs
₹30L - ₹60L / yr
Skills: Python, Data Science, pandas, Scikit-Learn, TensorFlow (+9 more)

We’re seeking a highly skilled, execution-focused Senior Data Scientist with a minimum of 5 years of experience. This role demands hands-on expertise in building, deploying, and optimizing machine learning models at scale, while working with big data technologies and modern cloud platforms. You will be responsible for driving data-driven solutions from experimentation to production, leveraging advanced tools and frameworks across Python, SQL, Spark, and AWS. The role requires strong technical depth, problem-solving ability, and ownership in delivering business impact through data science.


Responsibilities

  • Design, build, and deploy scalable machine learning models into production systems.
  • Develop advanced analytics and predictive models using Python, SQL, and popular ML/DL frameworks (Pandas, Scikit-learn, TensorFlow, PyTorch).
  • Leverage Databricks, Apache Spark, and Hadoop for large-scale data processing and model training.
  • Implement workflows and pipelines using Airflow and AWS EMR for automation and orchestration.
  • Collaborate with engineering teams to integrate models into cloud-based applications on AWS.
  • Optimize query performance, storage usage, and data pipelines for efficiency.
  • Conduct end-to-end experiments, including data preprocessing, feature engineering, model training, validation, and deployment.
  • Drive initiatives independently with high ownership and accountability.
  • Stay up to date with industry best practices in machine learning, big data, and cloud-native deployments.
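The experimentation responsibilities above form a loop of preprocessing, feature handling, training, and validation. As a minimal, standard-library-only illustration (a toy nearest-centroid "model" and invented data stand in for the Spark/TensorFlow stack the role actually uses), that loop might look like:

```python
import random
import statistics

# Toy dataset: (feature_1, feature_2, label); None marks a missing value.
rows = [(1.0, 2.0, 0), (1.2, None, 0), (0.9, 1.8, 0),
        (3.0, 4.1, 1), (2.8, 3.9, 1), (None, 4.0, 1)]

def impute(rows):
    """Preprocessing: fill missing features with the column mean."""
    cols = list(zip(*rows))
    means = [statistics.mean(v for v in col if v is not None) for col in cols[:-1]]
    return [tuple(means[i] if v is None else v for i, v in enumerate(r[:-1])) + (r[-1],)
            for r in rows]

def split(rows, frac=0.67, seed=0):
    """Shuffle deterministically, then carve off a validation set."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    k = int(len(rows) * frac)
    return rows[:k], rows[k:]

def train(rows):
    """'Model': one centroid per class."""
    by_label = {}
    for *x, y in rows:
        by_label.setdefault(y, []).append(x)
    return {y: [statistics.mean(c) for c in zip(*xs)] for y, xs in by_label.items()}

def predict(model, x):
    """Classify by nearest centroid (squared Euclidean distance)."""
    return min(model, key=lambda y: sum((a - b) ** 2 for a, b in zip(x, model[y])))

train_rows, val_rows = split(impute(rows))
model = train(train_rows)
accuracy = statistics.mean(predict(model, r[:-1]) == r[-1] for r in val_rows)
print(f"validation accuracy: {accuracy:.2f}")
```

The same skeleton (impute, split, fit, score) carries over directly when the centroid model is swapped for a real estimator.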



Requirements:

  • Minimum 5 years of experience in Data Science or Applied Machine Learning.
  • Strong proficiency in Python, SQL, and ML libraries (Pandas, Scikit-learn, TensorFlow, PyTorch).
  • Proven expertise in deploying ML models into production systems.
  • Experience with big data platforms (Hadoop, Spark) and distributed data processing.
  • Hands-on experience with Databricks, Airflow, and AWS EMR.
  • Strong knowledge of AWS cloud services (S3, Lambda, SageMaker, EC2, etc.).
  • Solid understanding of query optimization, storage systems, and data pipelines.
  • Excellent problem-solving skills, with the ability to design scalable solutions.
  • Strong communication and collaboration skills to work in cross-functional teams.



Benefits:

  • Best-in-class salary: We hire only the best, and we pay accordingly.
  • Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field.
  • Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day.


About Us:

Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media, and Entertainment companies in the world. We’re headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, across client companies with a combined net worth of $45.7 billion.


Today, we are a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting-edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company’s success will be huge. You’ll have the chance to work with experienced leaders who have built and led multiple tech, product, and design teams.

Data Havn

Agency job
via Infinium Associate by Toshi Srivastava
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
4 - 6 yrs
₹40L - ₹45L / yr
Skills: R Programming, Google Cloud Platform (GCP), Data Science, Python, Data Visualization (+3 more)

DataHavn IT Solutions specializes in big data and cloud computing, artificial intelligence and machine learning, application development, and consulting services. We aim to be a frontrunner in everything to do with data, and we have the expertise to transform customer businesses by making the right use of data.

 

About the Role:

As a Data Scientist specializing in Google Cloud, you will play a pivotal role in driving data-driven decision-making and innovation within our organization. You will leverage the power of Google Cloud's robust data analytics and machine learning tools to extract valuable insights from large datasets, develop predictive models, and optimize business processes.

Key Responsibilities:

  • Data Ingestion and Preparation:
      • Design and implement efficient data pipelines for ingesting, cleaning, and transforming data from various sources (e.g., databases, APIs, cloud storage) into Google Cloud Platform (GCP) data warehouses (BigQuery) or data lakes, using processing services such as Dataflow.
      • Perform data quality assessments, handle missing values, and address inconsistencies to ensure data integrity.
  • Exploratory Data Analysis (EDA):
      • Conduct in-depth EDA to uncover patterns, trends, and anomalies within the data.
      • Utilize visualization tools (e.g., Tableau, Looker) to communicate findings effectively.
  • Feature Engineering:
      • Create relevant features from raw data to enhance model performance and interpretability.
      • Explore techniques like feature selection, normalization, and dimensionality reduction.
  • Model Development and Training:
      • Develop and train predictive models using machine learning algorithms (e.g., linear regression, logistic regression, decision trees, random forests, neural networks) on GCP platforms like Vertex AI.
      • Evaluate model performance using appropriate metrics and iterate on the modeling process.
  • Model Deployment and Monitoring:
      • Deploy trained models into production environments using GCP's ML tools and infrastructure.
      • Monitor model performance over time, identify drift, and retrain models as needed.
  • Collaboration and Communication:
      • Work closely with data engineers, analysts, and business stakeholders to understand their requirements and translate them into data-driven solutions.
      • Communicate findings and insights in a clear and concise manner, using visualizations and storytelling techniques.
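The feature-engineering step above mentions selection and normalization. As a hedged, standard-library-only sketch (real pipelines would use scikit-learn or BigQuery ML; the matrix below is invented), min-max scaling followed by variance-based selection looks like:

```python
import statistics

# Toy feature matrix: rows are samples, columns are features.
X = [[10.0, 0.5, 3.0],
     [20.0, 0.5, 1.0],
     [30.0, 0.5, 2.0]]

def minmax_scale(X):
    """Rescale each column to [0, 1]; constant columns map to 0.0."""
    cols = list(zip(*X))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(row, lo, hi)] for row in X]

def select_by_variance(X, min_var=1e-9):
    """Drop near-constant columns -- a crude feature-selection step."""
    keep = [i for i, c in enumerate(zip(*X)) if statistics.pvariance(c) > min_var]
    return [[row[i] for i in keep] for row in X], keep

X_scaled = minmax_scale(X)
X_sel, kept = select_by_variance(X_scaled)
print(kept)  # column 1 is constant, so only columns 0 and 2 survive
```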

Required Skills and Qualifications:

  • Strong proficiency in Python or R programming languages.
  • Experience with Google Cloud Platform (GCP) services such as BigQuery, Dataflow, Cloud Dataproc, and Vertex AI.
  • Familiarity with machine learning algorithms and techniques.
  • Knowledge of data visualization tools (e.g., Tableau, Looker).
  • Excellent problem-solving and analytical skills.
  • Ability to work independently and as part of a team.
  • Strong communication and interpersonal skills.

Preferred Qualifications:

  • Experience with cloud-native data technologies (e.g., Apache Spark, Kubernetes).
  • Knowledge of distributed systems and scalable data architectures.
  • Experience with natural language processing (NLP) or computer vision applications.
  • Certifications in Google Cloud Platform or relevant machine learning frameworks.


Data Havn

Agency job
via Infinium Associate by Toshi Srivastava
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2.5 - 4.5 yrs
₹10L - ₹20L / yr
Skills: Python, SQL, Google Cloud Platform (GCP), SQL Server, ETL (+9 more)

About the Role:


We are seeking a talented Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.

Responsibilities:

  • Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes.
  • Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
  • Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
  • Team Management: Ability to lead and manage a team.
  • Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
  • Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
  • Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
  • Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.
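The extract-transform-load flow in the first bullet can be sketched with the standard library alone. SQLite stands in for the warehouse here, and the CSV source, table name, and quality rule are illustrative assumptions, not details from the posting:

```python
import csv
import io
import sqlite3

# Extract: a real pipeline would read from files, APIs, or databases;
# a small in-memory CSV stands in for the source system.
raw = io.StringIO("order_id,amount,country\n1,10.50,IN\n2,,US\n3,7.25,IN\n")

def transform(reader):
    """Transform: parse types and drop rows failing a basic quality rule."""
    for row in reader:
        if not row["amount"]:
            continue  # data-quality rule: skip records with a missing amount
        yield int(row["order_id"]), float(row["amount"]), row["country"]

# Load: write clean rows into a warehouse table (SQLite as a stand-in).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, country TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 transform(csv.DictReader(raw)))

total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 17.75 -- row 2 was dropped by the quality rule
```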

 

Skills:

  • Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
  • Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
  • Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
  • Understanding of data modeling and data architecture concepts.
  • Experience with ETL/ELT tools and frameworks.
  • Excellent problem-solving and analytical skills.
  • Ability to work independently and as part of a team.

Preferred Qualifications:

  • Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
  • Knowledge of machine learning and artificial intelligence concepts.
  • Experience with data visualization tools (e.g., Tableau, Power BI).
  • Certification in cloud platforms or data engineering.
Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
7 - 9 yrs
₹10L - ₹28L / yr
Skills: Artificial Intelligence (AI), Natural Language Processing (NLP), Python, Data Science, Generative AI (+10 more)

Job Details

Job Title: Lead II - Software Engineering (AI, NLP, Python, Data Science)

Industry: Technology

Domain - Information technology (IT)

Experience Required: 7-9 years

Employment Type: Full Time

Job Location: Bangalore

CTC Range: Best in Industry


Job Description:

Role Proficiency:

Act creatively to develop applications by selecting appropriate technical options, optimizing application development, maintenance, and performance through design patterns and reuse of proven solutions. Account for others' development activities and assist the Project Manager in day-to-day project execution.


Additional Comments:

Mandatory Skills: Data Science
Skills to Evaluate: AI, Gen AI, RAG, Data Science

Experience: 8 to 10 Years

Location: Bengaluru

Job Description

Job Title: AI Engineer
Mandatory Skills: Artificial Intelligence, Natural Language Processing, Python, Data Science
Position: AI Engineer – LLM & RAG Specialization
Company Name: Sony India Software Centre

About the role: We are seeking a highly skilled AI Engineer with 8-10 years of experience to join our innovation-driven team. This role focuses on the design, development, and deployment of advanced enterprise-scale Large Language Models (eLLM) and Retrieval Augmented Generation (RAG) solutions. You will work on end-to-end AI pipelines, from data processing to cloud deployment, delivering impactful solutions that enhance Sony’s products and services.

Key Responsibilities:

  • Design, implement, and optimize LLM-powered applications, ensuring high performance and scalability for enterprise use cases.
  • Develop and maintain RAG pipelines, including vector database integration (e.g., Pinecone, Weaviate, FAISS) and embedding model optimization.
  • Deploy, monitor, and maintain AI/ML models in production, ensuring reliability, security, and compliance.
  • Collaborate with product, research, and engineering teams to integrate AI solutions into existing applications and workflows.
  • Research and evaluate the latest LLM and AI advancements, recommending tools and architectures for continuous improvement.
  • Preprocess, clean, and engineer features from large datasets to improve model accuracy and efficiency.
  • Conduct code reviews and enforce AI/ML engineering best practices.
  • Document architecture, pipelines, and results; present findings to both technical and business stakeholders.

Qualifications:

  • 8-10 years of professional experience in AI/ML engineering, with at least 4+ years in LLM development and deployment.
  • Proven expertise in RAG architectures, vector databases, and embedding models.
  • Strong proficiency in Python; familiarity with Java, R, or other relevant languages is a plus.
  • Experience with AI/ML frameworks (PyTorch, TensorFlow, etc.) and relevant deployment tools.
  • Hands-on experience with cloud-based AI platforms such as AWS SageMaker, AWS Q Business, AWS Bedrock, or Azure Machine Learning.
  • Experience in designing, developing, and deploying Agentic AI systems, with a focus on creating autonomous agents that can reason, plan, and execute tasks to achieve specific goals.
  • Understanding of security concepts in AI systems, including vulnerabilities and mitigation strategies.
  • Solid knowledge of data processing, feature engineering, and working with large-scale datasets.
  • Experience in designing and implementing AI-native applications and agentic workflows using the Model Context Protocol (MCP) is nice to have.
  • Strong problem-solving skills, analytical thinking, and attention to detail.
  • Excellent communication skills, with the ability to explain complex AI concepts to diverse audiences.

Day-to-day responsibilities:

  • Design and deploy AI-driven solutions to address specific security challenges, such as threat detection, vulnerability prioritization, and security automation.
  • Optimize LLM-based models for various security use cases, including chatbot development for security awareness or automated incident response.
  • Implement and manage RAG pipelines for enhanced LLM performance.
  • Integrate AI models with existing security tools, including Endpoint Detection and Response (EDR), Threat and Vulnerability Management (TVM) platforms, and Data Science/Analytics platforms. This will involve working with APIs and understanding data flows.
  • Develop and implement metrics to evaluate the performance of AI models.
  • Monitor deployed models for accuracy and performance, and retrain as needed.
  • Adhere to security best practices and ensure that all AI solutions are developed and deployed securely. Consider data privacy and compliance requirements.
  • Work closely with other team members to understand security requirements and translate them into AI-driven solutions.
  • Communicate effectively with stakeholders, including senior management, to present project updates and findings.
  • Stay up to date with the latest advancements in AI/ML and security, and identify opportunities to leverage new technologies to improve our security posture.
  • Maintain thorough documentation of AI models, code, and processes.

What We Offer:

  • Opportunity to work on cutting-edge LLM and RAG projects with global impact.
  • A collaborative environment fostering innovation, research, and skill growth.
  • Competitive salary, comprehensive benefits, and flexible work arrangements.
  • The chance to shape AI-powered features in Sony’s next-generation products.
  • The ability to function in an environment where the team is virtual and geographically dispersed.
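The RAG pipeline work this listing describes centers on a retrieve-then-augment step. The sketch below is a deliberately simplified stand-in: bag-of-words vectors replace learned embeddings, a plain list replaces the vector databases the posting names (Pinecone, Weaviate, FAISS), and the documents and query are invented for illustration:

```python
import math
from collections import Counter

def embed(text):
    """Stand-in 'embedding': a bag-of-words count vector."""
    return Counter(w.strip(".,?!") for w in text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

docs = [
    "Reset your password from the account settings page.",
    "Invoices are emailed on the first day of each month.",
    "Contact support to close your account permanently.",
]
index = [(d, embed(d)) for d in docs]  # the "vector store"

def retrieve(query, k=1):
    """Retrieval step: return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda dv: cosine(q, dv[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

context = retrieve("how do I reset my password?")
# Augmentation step: the retrieved context is prepended to the LLM prompt.
prompt = f"Answer using this context: {context[0]}"
print(context[0])
```

A production pipeline swaps `embed` for an embedding model, `index` for a vector database with approximate nearest-neighbor search, and feeds `prompt` to the LLM.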

Education Qualification: Graduate


Skills: AI, NLP, Python, Data science


Must-Haves

Skills

AI, NLP, Python, Data science

Notice Period: Immediate – 30 Days

 

Kanerika Software
Posted by Mounami J
Hyderabad, Indore, Ahmedabad
6 - 18 yrs
₹18L - ₹60L / yr
Skills: .NET, Data Science, MongoDB, Angular (2+)

Job Location: Hyderabad, India.

Roles and Responsibilities:

  • The Sr. .NET Data Engineer will be responsible for designing and developing scalable backend systems using .NET Core, Web API, and Azure-based data engineering tools like Databricks, MS Fabric, or Snowflake.
  • They will build and maintain data pipelines, optimize SQL/NoSQL databases, and ensure high-performance systems through design patterns and microservices architecture.
  • Strong communication skills and the ability to collaborate with US counterparts in an Agile environment are essential. Experience with Azure DevOps, Angular, and MongoDB is a plus.

Technical skills:

  • Strong hands-on experience with C#, SQL Server, OOP concepts, and microservices architecture.
  • At least one year of hands-on experience with .NET Core, ASP.NET Core, Web API, SQL, NoSQL, Entity Framework 6 or above, Azure, database performance tuning, applying design patterns, and Agile.
  • .NET back-end development with data engineering expertise.
  • Must have experience with Azure Data Engineering, Azure Databricks, MS Fabric as data platform/ Snowflake or similar tools.
  • Skill for writing reusable libraries.
  • Excellent communication skills, both oral and written.
  • Excellent troubleshooting skills and the ability to communicate clearly with US counterparts.

What we need?

  • Educational Qualification: B.Tech, B.E, MCA, M.Tech.
  • Experience: Minimum 6+ Years.
  • Work Mode: Must be willing to work from the office (onsite only).

Nice to Have:

  • Knowledge of Angular, MongoDB, NPM, and Azure DevOps build/release configuration.
  • Self-starter with solid analytical and problem-solving skills.
  • This is an experienced-level position; we train the qualified candidate in the required applications.
  • Willingness to work extra hours to meet deliverables.


KGISL MICROCOLLEGE
Posted by skillryt hr
Remote only
5 - 8 yrs
₹10L - ₹15L / yr
Skills: Training and Development, Artificial Intelligence (AI), Data Science (DS), Trainer

Job Title: Freelance AI & Data Science Trainer | 5+ Years Experience | Tamil Nadu

Location: Coimbatore / Tamil Nadu (Remote or Hybrid)

Engagement: Freelance / Contract-only

Experience: Minimum 5+ years (Industry + Training)

About the Role:

We are looking for an experienced Freelance AI & Data Science Trainer to deliver project-based, industry-relevant training sessions. The trainer should have deep expertise in Machine Learning, Deep Learning, and Python for Data Science, with the ability to guide learners through real-world use cases.

Requirements:

  • Minimum 5 years of experience in AI / Data Science (training or real-world projects).
  • Strong hands-on skills in Python, Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch.
  • Expertise in data analysis, ML algorithms, and deployment workflows.
  • Excellent communication and mentoring skills.
  • Freelancers only (no full-time employment).
  • Must be based in Tamil Nadu (preferably Coimbatore).

Compensation:

  • Per session / per batch payment (competitive, based on experience).
VyTCDC
Posted by Gobinath Sundaram
Bengaluru (Bangalore), Pune, Hyderabad
6 - 12 yrs
₹5L - ₹28L / yr
Skills: Data Science, Python, Large Language Models (LLM)

Job Description:

 

Role: Data Scientist

 

Responsibilities:

  • Lead data science and machine learning projects, contributing to model development, optimization, and evaluation.
  • Perform data cleaning, feature engineering, and exploratory data analysis.
  • Translate business requirements into technical solutions; document and communicate project progress; manage non-technical stakeholders.
  • Collaborate with other data scientists and engineers to deliver projects.

Technical Skills – Must have:

  • Experience in and understanding of the natural language processing (NLP) and large language model (LLM) landscape.
  • Proficiency with Python for data analysis and supervised & unsupervised ML tasks.
  • Ability to translate complex machine learning problem statements into specific deliverables and requirements.
  • Experience with major cloud platforms such as AWS, Azure, or GCP.
  • Working knowledge of SQL and NoSQL databases.
  • Ability to create data and ML pipelines for more efficient and repeatable data science projects using MLOps principles.
  • Keeps abreast of new tools, algorithms, and techniques in machine learning and works to implement them in the organization.
  • Strong understanding of evaluation and monitoring metrics for machine learning projects.
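The evaluation-metrics requirement is concrete enough to illustrate. A standard-library-only computation of precision, recall, and F1 for binary predictions, with invented labels (in practice scikit-learn's metrics module covers this):

```python
def precision_recall_f1(y_true, y_pred):
    """Confusion-matrix-based metrics for a binary classifier."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

y_true = [1, 1, 0, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1]
p, r, f1 = precision_recall_f1(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```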

Gyansys Infotech
Bengaluru (Bangalore)
4 - 8 yrs
₹14L - ₹15L / yr
Skills: Machine Learning (ML), Deep Learning, TensorFlow, Keras, PyTorch (+5 more)

Role: Sr. Data Scientist

Exp: 4-8 Years

CTC: up to 28 LPA


Technical Skills:

  • Strong programming skills in Python, with hands-on experience in deep learning frameworks like TensorFlow, PyTorch, or Keras.
  • Familiarity with Databricks notebooks, MLflow, and Delta Lake for scalable machine learning workflows.
  • Experience with MLOps best practices, including model versioning, CI/CD pipelines, and automated deployment.
  • Proficiency in data preprocessing, augmentation, and handling large-scale image/video datasets.
  • Solid understanding of computer vision algorithms, including CNNs, transfer learning, and transformer-based vision models (e.g., ViT).
  • Exposure to natural language processing (NLP) techniques is a plus.


Cloud & Infrastructure:

  • Strong expertise in the Azure cloud ecosystem.
  • Experience working in UNIX/Linux environments and using command-line tools for automation and scripting.


If interested, kindly share your updated resume at 82008 31681.

GyanSys Inc.
Bengaluru (Bangalore)
4 - 8 yrs
₹14L - ₹15L / yr
Skills: Machine Learning (ML), Data Science, Python, PyTorch, TensorFlow (+5 more)

Role: Sr. Data Scientist

Exp: 4-8 Years

CTC: up to 25 LPA



Technical Skills:

  • Strong programming skills in Python, with hands-on experience in deep learning frameworks like TensorFlow, PyTorch, or Keras.
  • Familiarity with Databricks notebooks, MLflow, and Delta Lake for scalable machine learning workflows.
  • Experience with MLOps best practices, including model versioning, CI/CD pipelines, and automated deployment.
  • Proficiency in data preprocessing, augmentation, and handling large-scale image/video datasets.
  • Solid understanding of computer vision algorithms, including CNNs, transfer learning, and transformer-based vision models (e.g., ViT).
  • Exposure to natural language processing (NLP) techniques is a plus.


Educational Qualifications:

  • B.E./B.Tech/M.Tech/MCA in Computer Science, Electronics & Communication, Electrical Engineering, or a related field.
  • A master’s degree in Computer Science, Artificial Intelligence, or a specialization in Deep Learning or Computer Vision is highly preferred.



If interested, share your resume at 82008 31681.

GyanSys Inc.
Bengaluru (Bangalore)
4 - 8 yrs
₹14L - ₹15L / yr
Skills: Data Science, CI/CD, Natural Language Processing (NLP), Machine Learning (ML), TensorFlow (+5 more)

Role: Sr. Data Scientist

Exp: 4-8 Years

CTC: up to 25 LPA



Technical Skills:

  • Strong programming skills in Python, with hands-on experience in deep learning frameworks like TensorFlow, PyTorch, or Keras.
  • Familiarity with Databricks notebooks, MLflow, and Delta Lake for scalable machine learning workflows.
  • Experience with MLOps best practices, including model versioning, CI/CD pipelines, and automated deployment.
  • Proficiency in data preprocessing, augmentation, and handling large-scale image/video datasets.
  • Solid understanding of computer vision algorithms, including CNNs, transfer learning, and transformer-based vision models (e.g., ViT).
  • Exposure to natural language processing (NLP) techniques is a plus.


Educational Qualifications:

  • B.E./B.Tech/M.Tech/MCA in Computer Science, Electronics & Communication, Electrical Engineering, or a related field.
  • A master’s degree in Computer Science, Artificial Intelligence, or a specialization in Deep Learning or Computer Vision is highly preferred.


Mumbai
2.5 - 4 yrs
₹5L - ₹10L / yr
Skills: Artificial Intelligence (AI), Machine Learning (ML), Data Science, Python, TensorFlow (+14 more)

Job Title: AI / Machine Learning Engineer

Company: Apprication Pvt Ltd

Location: Goregaon East

Employment Type: Full-time

Experience: 2.5-4 Years


About the Role

We’re seeking a highly motivated AI / Machine Learning Engineer to join our growing engineering team. You will design, build, and deploy AI-powered solutions for web and application platforms, bringing cutting-edge machine learning research into real-world production systems.




This role blends applied machine learning, backend engineering, and cloud deployment, with opportunities to work on NLP, computer vision, generative AI, and intelligent automation across diverse industries.




Key Responsibilities

  • Design, train, and deploy machine learning models for NLP, computer vision, recommendation systems, and other AI-driven use cases.
  • Integrate ML models into production-ready web and mobile applications, ensuring scalability and reliability.
  • Collaborate with data scientists to optimize algorithms, pipelines, and inference performance.
  • Build APIs and microservices for model serving, monitoring, and scaling.
  • Leverage cloud platforms (AWS, Azure, GCP) for ML workflows, containerization (Docker/Kubernetes), and CI/CD pipelines.
  • Implement AI-powered features such as chatbots, personalization engines, predictive analytics, or automation systems.
  • Develop and maintain ETL pipelines, data preprocessing workflows, and feature engineering processes.
  • Ensure solutions meet security, compliance, and performance standards.
  • Stay updated with the latest research and trends in deep learning, generative AI, and LLMs.

Skills & Qualifications

  • Bachelor’s or Master’s in Computer Science, Machine Learning, Data Science, or related field.
  • Proven experience of 4 years as an AI/ML Engineer, Data Scientist, or AI Application Developer.
  • Strong programming skills in Python (TensorFlow, PyTorch, Scikit-learn); familiarity with LangChain, Hugging Face, OpenAI API is a plus.
  • Experience in model deployment, serving, and optimization (FastAPI, Flask, Django, or Node.js).
  • Proficiency with databases (SQL and NoSQL: MySQL, PostgreSQL, MongoDB).
  • Hands-on experience with cloud ML services (SageMaker, Vertex AI, Azure ML) and DevOps tools (Docker, Kubernetes, CI/CD).
  • Knowledge of MLOps practices: model versioning, monitoring, retraining, experiment tracking.
  • Familiarity with frontend frameworks (React.js, Angular, Vue.js) for building AI-driven interfaces (nice to have).
  • Strong understanding of data structures, algorithms, APIs, and distributed systems.
  • Excellent problem-solving, analytical, and communication skills.
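The MLOps practices named above (model versioning, experiment tracking) can be illustrated with a toy registry; the class, fields, and hashing scheme below are invented for the sketch, and real teams would reach for MLflow or a managed equivalent:

```python
import hashlib
import json
import time

class Registry:
    """Toy experiment tracker: each run logs parameters, a metric, and a
    content hash of the parameters that serves as a version id."""

    def __init__(self):
        self.runs = []

    def log_run(self, params, metric):
        version = hashlib.sha256(
            json.dumps(params, sort_keys=True).encode()).hexdigest()[:8]
        self.runs.append({"version": version, "params": params,
                          "metric": metric, "ts": time.time()})
        return version

    def best(self):
        """Pick the run to promote to production (highest metric)."""
        return max(self.runs, key=lambda r: r["metric"])

reg = Registry()
reg.log_run({"lr": 0.1, "depth": 3}, metric=0.81)
reg.log_run({"lr": 0.01, "depth": 5}, metric=0.87)
print(reg.best()["params"])  # {'lr': 0.01, 'depth': 5}
```

Hashing the sorted parameter JSON makes the version id deterministic, so re-running the same configuration maps to the same version.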
Mumbai
4 - 8 yrs
₹3L - ₹7L / yr
Skills: Artificial Intelligence (AI), Machine Learning (ML), Data Science, Python, TensorFlow (+15 more)

Job Title: AI / Machine Learning Engineer

Company: Apprication Pvt Ltd

Location: Goregaon East

Employment Type: Full-time

Experience: 4 Years


About the Role

We’re seeking a highly motivated AI / Machine Learning Engineer to join our growing engineering team. You will design, build, and deploy AI-powered solutions for web and application platforms, bringing cutting-edge machine learning research into real-world production systems.




This role blends applied machine learning, backend engineering, and cloud deployment, with opportunities to work on NLP, computer vision, generative AI, and intelligent automation across diverse industries.




Key Responsibilities

  • Design, train, and deploy machine learning models for NLP, computer vision, recommendation systems, and other AI-driven use cases.
  • Integrate ML models into production-ready web and mobile applications, ensuring scalability and reliability.
  • Collaborate with data scientists to optimize algorithms, pipelines, and inference performance.
  • Build APIs and microservices for model serving, monitoring, and scaling.
  • Leverage cloud platforms (AWS, Azure, GCP) for ML workflows, containerization (Docker/Kubernetes), and CI/CD pipelines.
  • Implement AI-powered features such as chatbots, personalization engines, predictive analytics, or automation systems.
  • Develop and maintain ETL pipelines, data preprocessing workflows, and feature engineering processes.
  • Ensure solutions meet security, compliance, and performance standards.
  • Stay updated with the latest research and trends in deep learning, generative AI, and LLMs.

Skills & Qualifications

  • Bachelor’s or Master’s in Computer Science, Machine Learning, Data Science, or related field.
  • Proven experience of 4 years as an AI/ML Engineer, Data Scientist, or AI Application Developer.
  • Strong programming skills in Python (TensorFlow, PyTorch, Scikit-learn); familiarity with LangChain, Hugging Face, OpenAI API is a plus.
  • Experience in model deployment, serving, and optimization (FastAPI, Flask, Django, or Node.js).
  • Proficiency with databases (SQL and NoSQL: MySQL, PostgreSQL, MongoDB).
  • Hands-on experience with cloud ML services (SageMaker, Vertex AI, Azure ML) and DevOps tools (Docker, Kubernetes, CI/CD).
  • Knowledge of MLOps practices: model versioning, monitoring, retraining, experiment tracking.
  • Familiarity with frontend frameworks (React.js, Angular, Vue.js) for building AI-driven interfaces (nice to have).
  • Strong understanding of data structures, algorithms, APIs, and distributed systems.
  • Excellent problem-solving, analytical, and communication skills.
Moative
Posted by Eman Khan
Chennai
7 - 10 yrs
₹25L - ₹60L / yr
Skills: Data Science, Machine Learning (ML), Deep Learning, Neural Networks, Natural Language Processing (NLP) (+2 more)

About Moative

Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots, and develops agentic AI solutions for companies across industries including energy and utilities.

Through Moative Labs, we aspire to build AI-led products and launch AI startups in vertical markets.


Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League alumni, ex-Googlers, and successful entrepreneurs.


Business Context

Moative is looking for a Data Science Project Manager to lead a long-term engagement for a Houston-based utilities company. As part of this engagement, we will develop advanced AI/ML models for load forecasting, energy pricing, trading strategies, and related areas.

We have a high-performing team of data scientists and ML engineers based in Chennai, India, along with an on-site project manager in Houston, TX.


Work You’ll Do

As a Data Science Project Manager, you’ll wear two hats. On one hand, you’ll act as a project manager — engaging with clients, understanding business priorities, and discussing solution or algorithm approaches. You will coordinate with the on-site project manager to manage timelines and ensure quality delivery. You’ll also handle client communication, setting expectations, gathering feedback, and keeping the engagement in good health.

On the other hand, you’ll act as a senior data scientist — overseeing junior data scientists and analysts, guiding the offshore team on data science, engineering, and business challenges, and providing expertise in statistical and mathematical concepts, algorithms, and model development.

The ideal candidate has a strong background in statistics, machine learning, and programming, along with business acumen and project management skills.


Responsibilities

  • Client Engagement: Act as the primary point of contact for clients on data science requirements. Support the on-site PM in managing client needs and build strong stakeholder relationships.
  • Project Coordination: Lead the offshore team, ensure alignment on project goals, and work with clients to define scope and deliverables.
  • Team Management: Supervise the offshore AI/ML team, ensure milestones are met, and provide domain/technical guidance.
  • Data Science Leadership: Mentor teams, create frameworks for scalable solutions, and drive adoption of best practices in AI/ML lifecycle.
  • Quality Assurance: Work with the offshore PM to implement QA processes that ensure accuracy and reliability.
  • Risk Management: Identify risks and develop mitigation strategies.
  • Stakeholder Communication: Provide regular updates on progress, challenges, and achievements.


Who You Are

You are a Project Manager passionate about delivering high-quality, data-driven solutions through robust project management practices. You have experience managing data-heavy projects in an onsite-offshore model with significant client engagement. You also bring some hands-on experience in data science and analytics, preferably in energy/utilities or financial risk/trading. You thrive in ambiguity, take initiative, and can confidently defend your decisions.


Requirements & Skills

  • 8+ years of experience applying data science methods to real-world data, ideally in Energy & Utilities or financial risk/commodities trading.
  • 3+ years of experience leading data science teams delivering AI/ML solutions.
  • Deep familiarity with a range of methods and algorithms: time-series analysis, regression, experimental design, optimization, etc.
  • Strong understanding of ML algorithms, including deep learning, neural networks, NLP, and more.
  • Proficient in cloud platforms (AWS, Azure, GCP), ML frameworks (TensorFlow, PyTorch), and MLOps platforms (MLflow, etc.).
  • Broad understanding of data structures, data engineering, and architectures.
  • Strong interpersonal skills: result-oriented, proactive, and capable of handling multiple projects.
  • Ability to collaborate effectively, take accountability, and stay composed under stress.
  • Excellent verbal and written communication skills for both technical and non-technical stakeholders.
  • Proven ability to identify and resolve issues quickly and efficiently.


Working at Moative

Moative is a young company, but we believe in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.

Guiding Principles

  • Think in decades. Act in hours. Decisions for the long-term, execution in hours/days.
  • Own the canvas. Fix or improve anything not done right, regardless of who did it.
  • Use data or don’t. Avoid political “cover-my-back” use of data; balance intuition with data-driven approaches.
  • Avoid work about work. Keep processes lean; meetings should be rare and purposeful.
  • High revenue per person. Default to automation, multi-skilling, and high-quality output instead of unnecessary hiring.


Additional Details

The position is based out of Chennai and involves significant in-person collaboration. Applicants should demonstrate being in the 90th percentile or above, whether through top institutions, awards/accolades, or consistent outstanding performance.

If this role excites you, we encourage you to apply — even if you don’t check every box.

Read more
Oneture Technologies

at Oneture Technologies

1 recruiter
Eman Khan
Posted by Eman Khan
Pune, Mumbai, Bengaluru (Bangalore)
4 - 8 yrs
Up to ₹28L / yr (varies)
skill iconData Science
Demand forecasting
Predictive modelling
Forecasting
Time series
+6 more

About the Role

We are looking for a highly skilled Machine Learning Lead with proven expertise in demand forecasting to join our team. The ideal candidate will have 4-8 years of experience building and deploying ML models, strong knowledge of AWS ML services and MLOps practices, and the ability to lead a team while working directly with clients. This is a client-facing role that requires strong communication skills, technical depth, and leadership ability.


Key Responsibilities

  • Lead end-to-end design, development, and deployment of demand forecasting models.
  • Collaborate with clients to gather requirements, define KPIs, and translate business needs into ML solutions.
  • Architect and implement ML workflows using AWS ML ecosystem (SageMaker, Bedrock, Lambda, S3, Step Functions, etc.).
  • Establish and enforce MLOps best practices for scalable, reproducible, and automated model deployment and monitoring.
  • Mentor and guide a team of ML engineers and data scientists, ensuring technical excellence and timely delivery.
  • Partner with cross-functional teams (engineering, data, business) to integrate forecasting insights into client systems.
  • Present results, methodologies, and recommendations to both technical and non-technical stakeholders.
  • Stay updated with the latest advancements in forecasting algorithms, time-series modeling, and AWS ML offerings.


Required Skills & Experience

  • 4-8 years of experience in machine learning with a strong focus on time-series forecasting and demand prediction.
  • Hands-on experience with AWS ML stack (Amazon SageMaker, Step Functions, Lambda, S3, Athena, CloudWatch, etc.).
  • Strong understanding of MLOps pipelines (CI/CD for ML, model monitoring, retraining workflows).
  • Proficiency in Python, SQL, and ML libraries (TensorFlow, PyTorch, Scikit-learn, Prophet, GluonTS, etc.).
  • Experience working directly with clients and stakeholders to understand business requirements and deliver ML solutions.
  • Strong leadership and team management skills, with the ability to mentor and guide junior team members.
  • Excellent communication and presentation skills for both technical and business audiences.


Preferred Qualifications

  • Experience with retail, FMCG, or supply chain demand forecasting use cases.
  • Exposure to generative AI and LLMs for augmenting forecasting solutions.
  • AWS Certification (e.g., AWS Certified Machine Learning – Specialty).


What We Offer

  • Opportunity to lead impactful demand forecasting projects with global clients.
  • Exposure to cutting-edge ML, AI, and AWS technologies.
  • Collaborative, fast-paced, and growth-oriented environment.
  • Competitive compensation and benefits.
Read more
Internshala

at Internshala

5 recruiters
Gayatri Mudgil
Posted by Gayatri Mudgil
Gurugram
4 - 7 yrs
₹10L - ₹15L / yr
Natural Language Processing (NLP)
SQL
MS-Excel
skill iconMachine Learning (ML)
PowerBI
+3 more

💯What will you do?

  • Create and conduct engaging and informative Data Science classes that incorporate real-world examples and hands-on activities to ensure student engagement and retention.
  • Evaluate student projects to ensure they meet industry standards and provide personalised, constructive feedback to students to help them improve their skills and understanding.
  • Conduct viva sessions to assess student understanding and comprehension of the course materials. You will evaluate each student's ability to apply the concepts they have learned in real-world scenarios and provide feedback on their performance.
  • Conduct regular assessments to evaluate student progress, provide feedback to students, and identify areas for improvement in the curriculum.
  • Stay up-to-date with industry developments, best practices, and trends in Data Science, and incorporate this knowledge into course materials and instruction.
  • Work with the placements team to provide guidance and support to students as they navigate their job search, including resume and cover letter reviews, mock interviews, and career coaching.
  • Train the TAs to take the doubt sessions and for project evaluations


💯Who are we looking for?

We are looking for someone who has:

  • A minimum of 1-2 years of industry work experience in data science or a related field. Teaching experience is a plus.
  • In-depth knowledge of various aspects of data science, such as Python, MySQL, Power BI, Excel, machine learning with statistics, NLP, and deep learning.
  • Knowledge of AI tools like ChatGPT (latest versions as well), debugcode.ai, etc.
  • Passion for teaching and a desire to impart practical knowledge to students.
  • Excellent communication and interpersonal skills, with the ability to engage and motivate students of all levels.
  • Experience with curriculum development, lesson planning, and instructional design is a plus.
  • Familiarity with learning management systems (LMS) and digital teaching tools will be an added advantage.
  • Ability to work independently and as part of a team in a fast-paced, dynamic environment.


💯What do we offer in return?

  • Awesome colleagues & a great work environment - Internshala is known for its culture (see for yourself) and has twice been recognized as a Great Place To Work in the last 3 years
  • A massive learning opportunity to be an early member of a new initiative and experience building it from scratch
  • Competitive remuneration


💰 Compensation - Competitive remuneration based on your experience and skills

📅 Start date - Immediately

Read more
MindCrew Technologies

at MindCrew Technologies

3 recruiters
Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Pune
10 - 14 yrs
₹10L - ₹15L / yr
Snowflake
ETL
SQL
Snow flake schema
Data modeling
+3 more

Exp: 10+ Years

CTC: 1.7 LPM

Location: Pune

SnowFlake Expertise Profile


Should hold 10+ years of experience, with strong skills and a core understanding of cloud data warehouse principles, plus extensive experience in designing, building, optimizing, and maintaining robust, scalable data solutions on the Snowflake platform.

Possesses a strong background in data modelling, ETL/ELT, SQL development, performance tuning, scaling, monitoring, and security handling.


Responsibilities:

* Collaborate with the Data and ETL teams to review code, understand the current architecture, and help improve it based on Snowflake offerings and experience.

* Review and implement best practices to design, develop, maintain, scale, and efficiently monitor data pipelines and data models on the Snowflake platform for ETL or BI.

* Optimize complex SQL queries for data extraction, transformation, and loading within Snowflake.

* Ensure data quality, integrity, and security within the Snowflake environment.

* Participate in code reviews and contribute to the team's development standards.

Education:

* Bachelor’s degree in Computer Science, Data Science, Information Technology, or an equivalent field.

* Relevant Snowflake certifications are a plus (e.g., Snowflake certified Pro / Architecture / Advanced).

Read more
Remote only
0 - 1 yrs
₹5000 - ₹5500 / mo
skill iconPython
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
skill iconData Science

Job description

Job Title: AI-Driven Data Science Automation Intern – Machine Learning Research Specialist

Location: Remote (Global)

Compensation: $50 USD per month

Company: Meta2 Labs

www.meta2labs.com

About Meta2 Labs:

Meta2 Labs is a next-gen innovation studio building products, platforms, and experiences at the convergence of AI, Web3, and immersive technologies. We are a lean, mission-driven collective of creators, engineers, designers, and futurists working to shape the internet of tomorrow. We believe the next wave of value will come from decentralized, intelligent, and user-owned digital ecosystems—and we’re building toward that vision.

As we scale our roadmap and ecosystem, we're looking for a driven, aligned, and entrepreneurial AI-Driven Data Science Automation Intern – Machine Learning Research Specialist to join us on this journey.

The Opportunity:

We’re seeking a part-time AI-Driven Data Science Automation Intern – Machine Learning Research Specialist to join Meta2 Labs at a critical early stage. This is a high-impact role designed for someone who shares our vision and wants to actively shape the future of tech. You’ll be an equal voice at the table and help drive the direction of our ventures, partnerships, and product strategies.

Responsibilities:

  • Collaborate on the vision, strategy, and execution across Meta2 Labs' portfolio and initiatives.
  • Drive innovation in areas such as AI applications, Web3 infrastructure, and experiential product design.
  • Contribute to go-to-market strategies, business development, and partnership opportunities.
  • Help shape company culture, structure, and team expansion.
  • Be a thought partner and problem-solver in all key strategic discussions.
  • Lead or support verticals based on your domain expertise (e.g., product, technology, growth, design, etc.).
  • Act as a representative and evangelist for Meta2 Labs in public or partner-facing contexts.

Ideal Profile:

  • Passion for emerging technologies (AI, Web3, XR, etc.).
  • Comfortable operating in ambiguity and working lean.
  • Strong strategic thinking, communication, and collaboration skills.
  • Open to wearing multiple hats and learning as you build.
  • Driven by purpose and eager to gain experience in a cutting-edge tech environment.

Commitment:

  • Flexible, part-time involvement.
  • Remote-first and async-friendly culture.

Why Join Meta2 Labs:

  • Join a purpose-led studio at the frontier of tech innovation.
  • Help build impactful ventures with real-world value and long-term potential.
  • Shape your own role, focus, and future within a decentralized, founder-friendly structure.
  • Be part of a collaborative, intellectually curious, and builder-centric culture.

Job Types: Part-time, Internship

Pay: $50 USD per month

Work Location: Remote

Job Types: Full-time, Part-time, Internship

Contract length: 3 months

Pay: Up to ₹5,000.00 per month

Benefits:

  • Flexible schedule
  • Health insurance
  • Work from home



Read more
QAgile Services

at QAgile Services

1 recruiter
Radhika Chotai
Posted by Radhika Chotai
Bengaluru (Bangalore)
4 - 6 yrs
₹12L - ₹25L / yr
skill iconData Science
skill iconPython
skill iconMachine Learning (ML)
PowerBI
SQL
+5 more

Proven experience as a Data Scientist or in a similar role, with at least 4 years of relevant experience and 6-8 years of total experience.


  • Technical expertise regarding data models, database design and development, data mining, and segmentation techniques
  • Strong knowledge of and experience with reporting packages (e.g., Business Objects), databases, and programming within ETL frameworks
  • Experience with data movement and management in the cloud, using a combination of Azure or AWS features
  • Hands-on experience with data visualization tools – Power BI preferred
  • Solid understanding of machine learning
  • Knowledge of data management and visualization techniques
  • A knack for statistical analysis and predictive modeling
  • Good knowledge of Python and MATLAB
  • Experience with SQL and NoSQL databases, including the ability to write complex queries and procedures

Read more
NeoGenCode Technologies Pvt Ltd
Ritika Verma
Posted by Ritika Verma
Gurugram
4 - 6 yrs
₹5L - ₹14L / yr
Large Language Models (LLM)
skill iconData Science
Natural Language Processing (NLP)
Recurrent neural network (RNN)


We’re searching for an experienced Data Scientist with a strong background in NLP and large language models to join our innovative team! If you thrive on solving complex language problems and are hands-on with spaCy, NER, RNN, LSTM, Transformers, and LLMs (like GPT), we want to connect.

What You’ll Do:

  • Build & deploy advanced NLP solutions: entity recognition, text classification, and more.
  • Fine-tune and train state-of-the-art deep learning models (RNN, LSTM, Transformer, GPT).
  • Apply libraries like spaCy for NER and text processing.
  • Collaborate across teams to integrate AI-driven features.
  • Preprocess, annotate, and manage data workflows.
  • Analyze model performance and drive continuous improvement.
  • Stay current with AI/NLP breakthroughs and advocate innovation.

What You Bring:

  • 4-5+ years of industry experience in data science/NLP.
  • Strong proficiency in Python, spaCy, NLTK, PyTorch or TensorFlow.
  • Hands-on with NER, custom pipelines, and prompt engineering.
  • Deep understanding and experience with RNN, LSTM, Transformer, and LLMs/GPT.
  • Collaborative and independent problem solver.
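To make the NER work above concrete, here is a toy rule-based entity tagger in plain Python. It is a hedged stand-in sketch, not spaCy's actual API: a real pipeline would use spaCy's EntityRuler patterns or a trained statistical model, but the output shape (span, label, offsets) is the same idea. The patterns and sample text are invented for illustration.

```python
import re

# A minimal rule-based tagger: each label is a regex, and extraction
# returns (span_text, label, start, end) tuples, mirroring the kind of
# entity spans a spaCy NER pipeline produces.

PATTERNS = {
    "DATE": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
    "MONEY": re.compile(r"₹\d+(?:,\d+)*(?:\.\d+)?"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def extract_entities(text):
    """Return (span_text, label, start, end) tuples, ordered by position."""
    entities = []
    for label, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            entities.append((m.group(), label, m.start(), m.end()))
    return sorted(entities, key=lambda e: e[2])

doc = "Invoice dated 2024-03-15 for ₹1,25,000 sent to billing@acme.in"
for span, label, start, end in extract_entities(doc):
    print(f"{label:6} {span!r} [{start}:{end}]")
```

Rules like these are often used to bootstrap annotations that a statistical or Transformer-based NER model is then trained on.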

Nice to Have:

  • Experience deploying NLP models (Docker, cloud).
  • MLOps, vector databases, RAG, semantic search.
  • Annotation tools and team management.

Why Join Us?

  • Work with cutting-edge technology and real-world impact.
  • Flexible hours, remote options, and a supportive, inclusive culture.
  • Competitive compensation and benefits.

Ready to push the boundaries of AI with us? Apply now or DM for more info!

Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Gurugram
4 - 6 yrs
₹10L - ₹18L / yr
skill iconData Science
Natural Language Processing (NLP)
Large Language Models (LLM)
spaCy
Named-entity recognition
+4 more

Job Title : Data Scientist – NLP & LLM

Experience Required : 4 to 5+ Years

Location : Gurugram

Notice Period : Immediate Joiner Preferred

Employment Type : Full-Time


Job Summary :

We are seeking a highly skilled Data Scientist with strong expertise in Natural Language Processing (NLP) and modern deep learning techniques. The ideal candidate will have hands-on experience working with NER, RNN, LSTM, Transformers, GPT models, and Large Language Models (LLMs), including frameworks such as spaCy.


Mandatory Skills : NLP, spaCy, NER (Named-entity Recognition), RNN (Recurrent Neural Network), LSTM (Long Short-Term Memory), Transformers, GPT, LLMs, Python


Key Responsibilities :

  • Design and implement NLP models for tasks like text classification, entity recognition, summarization, and question answering.
  • Develop and optimize models using deep learning architectures such as RNN, LSTM, and Transformer-based models.
  • Fine-tune or build models using pre-trained LLMs such as GPT, BERT, etc.
  • Work with tools and libraries including spaCy, Hugging Face Transformers, and other relevant frameworks.
  • Perform data preprocessing, feature extraction, and training pipeline development.
  • Evaluate model performance and iterate with scalable solutions.
  • Collaborate with engineering and product teams to integrate models into production.

Required Skills :

  • 4 to 5+ years of hands-on experience in Data Science and NLP.
  • Strong understanding of NER, RNN, LSTM, Transformers, GPT, and LLM architectures.
  • Experience with spaCy, TensorFlow, PyTorch, and Hugging Face Transformers.
  • Proficient in Python and its ML ecosystem (NumPy, pandas, scikit-learn, etc.).
  • Familiarity with prompt engineering and fine-tuning LLMs is a plus.
  • Excellent problem-solving and communication skills.
Read more
Zolvit (formerly Vakilsearch)

at Zolvit (formerly Vakilsearch)

1 video
2 recruiters
Lakshmi J
Posted by Lakshmi J
Bengaluru (Bangalore), Chennai
1 - 4 yrs
₹10L - ₹15L / yr
skill iconMachine Learning (ML)
skill iconData Science
Generative AI
Artificial Intelligence (AI)
Natural Language Processing (NLP)
+1 more

About the Role

We are seeking an innovative Data Scientist specializing in Natural Language Processing (NLP) to join our technology team in Bangalore. The ideal candidate will harness the power of language models and document extraction techniques to transform legal information into accessible, actionable insights for our clients.

Responsibilities

  • Develop and implement NLP solutions to automate legal document analysis and extraction
  • Create and optimize prompt engineering strategies for large language models
  • Design search functionality leveraging semantic understanding of legal documents
  • Build document extraction pipelines to process unstructured legal text data
  • Develop data visualizations using PowerBI and Tableau to communicate insights
  • Collaborate with product and legal teams to enhance our tech-enabled services
  • Continuously improve model performance and user experience

Requirements

  • Bachelor's degree in relevant field
  • 1-5 years of professional experience in data science, with focus on NLP applications
  • Demonstrated experience working with LLM APIs (e.g., OpenAI, Anthropic)
  • Proficiency in prompt engineering and optimization techniques
  • Experience with document extraction and information retrieval systems
  • Strong skills in data visualization tools, particularly PowerBI and Tableau
  • Excellent programming skills in Python and familiarity with NLP libraries
  • Strong understanding of legal terminology and document structures (preferred)
  • Excellent communication skills in English

What We Offer

  • Competitive salary and benefits package
  • Opportunity to work at India's largest legal tech company
  • Professional growth in the fast-evolving legal technology sector
  • Collaborative work environment with industry experts
  • Modern office located in Bangalore
  • Flexible work arrangements


Qualified candidates are encouraged to apply with a resume highlighting relevant experience with NLP, prompt engineering, and data visualization tools.

Location: Bangalore, India



Read more
TNQ Tech Pvt Ltd
Ramprasad Balasubramanian (TNQ Tech)
Posted by Ramprasad Balasubramanian (TNQ Tech)
Chennai
4 - 8 yrs
₹20L - ₹30L / yr
skill iconData Science

Company Description

TNQTech is a publishing technology and services company. Our AI-enabled technology and products deliver content services to some of the largest commercial publishers, prestigious learned societies, associations, and university presses. These services reach millions of authors through our clientele. We are dedicated to advancing publishing technology and providing innovative solutions to our users.


Role Description

This is a full-time on-site role for a Senior Data Scientist - Lead Role located in Chennai. The Senior Data Scientist will lead the data science team, conduct statistical analyses, develop data models, and interpret complex data to provide actionable insights. Additionally, responsibilities include overseeing data analytics projects, creating data visualizations, and ensuring the accuracy and quality of data analysis. Collaboration with cross-functional teams to understand data needs and drive data-driven decision-making is also key.


Qualifications

  • Proficiency in Data Science and Data Analysis
  • Strong background in Statistics and Data Analytics
  • Experience with Data Visualization tools and techniques
  • Excellent problem-solving and analytical skills
  • Ability to lead and mentor a team of data scientists
  • Strong communication and collaboration skills
  • Master's or Ph.D. in Data Science, Statistics, Computer Science, or a related field
Read more
AI Startup company

AI Startup company

Agency job
via People Impact by Ranjita Shrivastava
Bengaluru (Bangalore)
8 - 20 yrs
₹15L - ₹30L / yr
doctoral
skill iconData Science
Mathematics
Teaching
  1. Curriculum Development: Collaborate with Academic Advisory Committee and Marketing team members to design and develop comprehensive curriculum for data science programs at undergraduate and graduate levels. Ensure alignment with industry trends, emerging technologies, and best practices in data science education.
  2. Faculty Recruitment and Development: Lead the recruitment, selection, and development of faculty members with expertise in data science. Provide mentorship, support, and professional development opportunities to faculty to enhance teaching effectiveness and academic performance.
  3. Quality Assurance: Establish and maintain robust mechanisms for quality assurance and academic oversight. Implement assessment strategies, evaluation criteria, and continuous improvement processes to ensure the delivery of high-quality education and student outcomes.
  4. Student Engagement: Foster a culture of student engagement, innovation, and success. Develop initiatives to support student learning, retention, and career readiness in the field of data science. Provide academic counselling and support services to students as needed.
  5. Industry Collaboration: Collaborate with industry partners, employers, and professional organizations to enhance experiential learning opportunities, internships, and job placement prospects for students. Organize industry events, guest lectures, and networking opportunities to facilitate knowledge exchange and industry engagement.
  6. Research and Innovation: Encourage research and innovation in data science education. Facilitate faculty research projects, interdisciplinary collaborations, and scholarly activities to advance knowledge and contribute to the academic community.

  7. Budget Management: Develop and manage the academic budget in collaboration with the finance department. Ensure efficient allocation of resources to support academic programs, faculty development, and student services.

Read more
KGISL MICROCOLLEGE
Agency job
via EWU by Pavasshrie Muruganandham
Thrissur
2 - 5 yrs
₹2L - ₹6L / yr
skill iconData Analytics
skill iconData Science
trainer
PowerBI
Tableau
+3 more

We are seeking a dynamic and experienced Data Analytics and Data Science Trainer to deliver high-quality training sessions, mentor learners, and design engaging course content. The ideal candidate will have a strong foundation in statistics, programming, and data visualization tools, and should be passionate about teaching and guiding aspiring professionals.

Read more
QuaXigma IT solutions Private Limited
Tirupati
3 - 5 yrs
₹6L - ₹10L / yr
skill iconPython
skill iconMachine Learning (ML)
SQL
EDA
skill iconData Analytics
+3 more

Data Scientist

Job Id: QX003

About Us:

QX Impact was launched with a mission to make AI accessible and affordable, and to deliver AI products and solutions at scale for enterprises by bringing the power of Data, AI, and Engineering to drive digital transformation. We believe that without insights, businesses will continue to struggle to understand their customers and may even lose them; without insights, they cannot deliver differentiated products and services; and without insights, they cannot reach the level of operational excellence that is crucial to remaining competitive, meeting rising customer expectations, expanding into new markets, and digitalizing.

Position Overview:

We are seeking a collaborative and analytical Data Scientist who can bridge the gap between business needs and data science capabilities. In this role, you will lead and support projects that apply machine learning, AI, and statistical modeling to generate actionable insights and drive business value.

Key Responsibilities:

  • Collaborate with stakeholders to define and translate business challenges into data science solutions.
  • Conduct in-depth data analysis on structured and unstructured datasets.
  • Build, validate, and deploy machine learning models to solve real-world problems.
  • Develop clear visualizations and presentations to communicate insights.
  • Drive end-to-end project delivery, from exploration to production.
  • Contribute to team knowledge sharing and mentorship activities.

Must-Have Skills:

  • 3+ years of progressive experience in data science, applied analytics, or a related quantitative role, demonstrating a proven track record of delivering impactful data-driven solutions.
  • Exceptional programming proficiency in Python, including extensive experience with core libraries such as Pandas, NumPy, Scikit-learn, NLTK and XGBoost. 
  • Expert-level SQL skills for complex data extraction, transformation, and analysis from various relational databases.
  • Deep understanding and practical application of statistical modeling and machine learning techniques, including but not limited to regression, classification, clustering, time series analysis, and dimensionality reduction.
  • Proven expertise in end-to-end machine learning model development lifecycle, including robust feature engineering, rigorous model validation and evaluation (e.g., A/B testing), and model deployment strategies.
  • Demonstrated ability to translate complex business problems into actionable analytical frameworks and data science solutions, driving measurable business outcomes.
  • Proficiency in advanced data analysis techniques, including Exploratory Data Analysis (EDA), customer segmentation (e.g., RFM analysis), and cohort analysis, to uncover actionable insights.
  • Experience in designing and implementing data models, including logical and physical data modeling, and developing source-to-target mappings for robust data pipelines.
  • Exceptional communication skills, with the ability to clearly articulate complex technical findings, methodologies, and recommendations to diverse business stakeholders (both technical and non-technical audiences).
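The RFM (Recency, Frequency, Monetary) analysis mentioned above can be sketched in a few lines of plain Python. This is a hedged illustration, not QX Impact's method: real engagements would compute this with SQL window functions or a pandas groupby, and the thresholds and segment names below are invented for the example.

```python
from datetime import date

# Aggregate per-customer RFM metrics from a transaction log, then assign
# a coarse segment. `splits` = (max recency in days, min frequency,
# min monetary) are illustrative cut-offs, not industry standards.

def rfm_scores(transactions, today, splits=(30, 3, 1000)):
    """transactions: list of (customer_id, date, amount).
    Returns {customer: (recency_days, frequency, monetary, segment)}."""
    per_customer = {}
    for cust, when, amount in transactions:
        rec, freq, mon = per_customer.get(cust, (None, 0, 0.0))
        days = (today - when).days
        rec = days if rec is None else min(rec, days)  # days since last purchase
        per_customer[cust] = (rec, freq + 1, mon + amount)

    max_recency, min_freq, min_monetary = splits
    result = {}
    for cust, (rec, freq, mon) in per_customer.items():
        if rec <= max_recency and freq >= min_freq and mon >= min_monetary:
            segment = "champion"
        elif rec <= max_recency:
            segment = "active"
        else:
            segment = "at_risk"
        result[cust] = (rec, freq, mon, segment)
    return result

txns = [
    ("c1", date(2024, 6, 1), 400.0),
    ("c1", date(2024, 6, 20), 350.0),
    ("c1", date(2024, 6, 28), 300.0),
    ("c2", date(2024, 2, 10), 90.0),
]
print(rfm_scores(txns, today=date(2024, 7, 1)))
```

Segments like these typically feed retention campaigns or become features in churn models.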

Good-to-Have Skills:

  • Experience with cloud platforms (Azure, AWS, GCP) and specific services like Azure ML, Synapse, Azure Kubernetes and Databricks.
  • Familiarity with big data processing tools like Apache Spark or Hadoop.
  • Exposure to MLOps tools and practices (e.g., MLflow, Docker, Kubeflow) for model lifecycle management.
  • Knowledge of deep learning libraries (TensorFlow, PyTorch) or experience with Generative AI (GenAI) and Large Language Models (LLMs).
  • Proficiency with business intelligence and data visualization tools such as Tableau, Power BI, or Plotly.
  • Experience working within Agile project delivery methodologies.

Competencies:

  • Tech Savvy - Anticipating and adopting innovations in business-building digital and technology applications.
  • Self-Development - Actively seeking new ways to grow and be challenged using both formal and informal development channels.
  • Action Oriented - Taking on new opportunities and tough challenges with a sense of urgency, high energy, and enthusiasm.
  • Customer Focus - Building strong customer relationships and delivering customer-centric solutions.
  • Optimizes Work Processes - Knowing the most effective and efficient processes to get things done, with a focus on continuous improvement.

Why Join Us?

  • Be part of a collaborative and agile team driving cutting-edge AI and data engineering solutions.
  • Work on impactful projects that make a difference across industries.
  • Opportunities for professional growth and continuous learning.
  • Competitive salary and benefits package.

 


Read more
KJBN labs

at KJBN labs

2 candid answers
sakthi ganesh
Posted by sakthi ganesh
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹30L / yr
Hadoop
Apache Kafka
Spark
skill iconPython
skill iconJava
+8 more

Senior Data Engineer Job Description

Overview

The Senior Data Engineer will design, develop, and maintain scalable data pipelines and infrastructure to support data-driven decision-making and advanced analytics. This role requires deep expertise in data engineering, strong problem-solving skills, and the ability to collaborate with cross-functional teams to deliver robust data solutions.

Key Responsibilities


  • Data Pipeline Development: Design, build, and optimize scalable, secure, and reliable data pipelines to ingest, process, and transform large volumes of structured and unstructured data.
  • Data Architecture: Architect and maintain data storage solutions, including data lakes, data warehouses, and databases, ensuring performance, scalability, and cost-efficiency.
  • Data Integration: Integrate data from diverse sources, including APIs, third-party systems, and streaming platforms, ensuring data quality and consistency.
  • Performance Optimization: Monitor and optimize data systems for performance, scalability, and cost, implementing best practices for partitioning, indexing, and caching.
  • Collaboration: Work closely with data scientists, analysts, and software engineers to understand data needs and deliver solutions that enable advanced analytics, machine learning, and reporting.
  • Data Governance: Implement data governance policies, ensuring compliance with data security, privacy regulations (e.g., GDPR, CCPA), and internal standards.
  • Automation: Develop automated processes for data ingestion, transformation, and validation to improve efficiency and reduce manual intervention.
  • Mentorship: Guide and mentor junior data engineers, fostering a culture of technical excellence and continuous learning.
  • Troubleshooting: Diagnose and resolve complex data-related issues, ensuring high availability and reliability of data systems.

Required Qualifications

  • Education: Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
  • Experience: 5+ years of experience in data engineering or a related role, with a proven track record of building scalable data pipelines and infrastructure.
  • Technical Skills:
    • Proficiency in programming languages such as Python, Java, or Scala.
    • Expertise in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
    • Strong experience with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., Redshift, BigQuery, Snowflake).
    • Hands-on experience with ETL/ELT tools (e.g., Apache Airflow, Talend, Informatica) and data integration frameworks.
    • Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) and distributed systems.
    • Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus.

Soft Skills:

  • Excellent problem-solving and analytical skills.
  • Strong communication and collaboration abilities.
  • Ability to work in a fast-paced, dynamic environment and manage multiple priorities.

Certifications (optional but preferred): Cloud certifications (e.g., AWS Certified Data Analytics, Google Professional Data Engineer) or relevant data engineering certifications.

Preferred Qualifications

  • Experience with real-time data processing and streaming architectures.
  • Familiarity with machine learning pipelines and MLOps practices.
  • Knowledge of data visualization tools (e.g., Tableau, Power BI) and their integration with data pipelines.
  • Experience in industries with high data complexity, such as finance, healthcare, or e-commerce.

Work Environment

  • Location: Hybrid/Remote/On-site (depending on company policy).
  • Team: Collaborative, cross-functional team environment with data scientists, analysts, and business stakeholders.
  • Hours: Full-time, with occasional on-call responsibilities for critical data systems.

Read more
HaystackAnalytics
Careers Hr
Posted by Careers Hr
Navi Mumbai
1 - 4 yrs
₹6L - ₹12L / yr
skill iconRust
skill iconPython
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
skill iconData Science
+2 more

Position – Python Developer

Location – Navi Mumbai


Who are we

Based out of IIT Bombay, HaystackAnalytics is a HealthTech company creating clinical genomics products, which enable diagnostic labs and hospitals to offer accurate and personalized diagnostics. Supported by India's most respected science agencies (DST, BIRAC, DBT), we created and launched a portfolio of products to offer genomics in infectious diseases. Our genomics-based diagnostic solution for Tuberculosis was recognized as one of the top innovations supported by BIRAC in the past 10 years, and was launched by the Prime Minister of India in the BIRAC Showcase event in Delhi, 2022.


Objectives of this Role:

  • Design and implement efficient, scalable backend services using Python.
  • Work closely with healthcare domain experts to create innovative and accurate diagnostics solutions.
  • Build APIs, services, and scripts to support data processing pipelines and front-end applications.
  • Automate recurring tasks and ensure robust integration with cloud services.
  • Maintain high standards of software quality and performance using clean coding principles and testing practices.
  • Collaborate within the team to upskill and unblock each other for faster and better outcomes.





Primary Skills – Python Development

  • Proficient in Python 3 and its ecosystem
  • Frameworks: Flask / Django / FastAPI
  • RESTful API development
  • Understanding of OOPs and SOLID design principles
  • Asynchronous programming (asyncio, aiohttp)
  • Experience with task queues (Celery, RQ)
  • Rust programming experience for systems-level or performance-critical components

Testing & Automation

  • Unit Testing: PyTest / unittest
  • Automation tools: Ansible / Terraform (good to have)
  • CI/CD pipelines
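
As a flavor of the unit-testing style listed above, here is a minimal PyTest-compatible sketch. The `normalize_sample_id` helper is purely illustrative (not an actual product function); PyTest would collect any `test_*` function and run its plain assertions:

```python
# Minimal PyTest-style example: a tiny helper plus a test with bare asserts.
# normalize_sample_id is a hypothetical function, invented for illustration.

def normalize_sample_id(raw: str) -> str:
    """Uppercase a lab sample ID and strip surrounding whitespace."""
    if not raw or not raw.strip():
        raise ValueError("empty sample id")
    return raw.strip().upper()

def test_normalize_sample_id():
    # PyTest collects any function named test_*; plain asserts are enough.
    assert normalize_sample_id("  tb-0042 ") == "TB-0042"

if __name__ == "__main__":
    test_normalize_sample_id()
    print("ok")
```

Running `pytest` on a file like this discovers and executes the test automatically; the same asserts also run under a plain `python` invocation.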

DevOps & Cloud

  • Docker, Kubernetes (basic knowledge expected)
  • Cloud platforms: AWS / Azure / GCP
  • GIT and GitOps workflows
  • Familiarity with containerized deployment & serverless architecture

Bonus Skills

  • Data handling libraries: Pandas / NumPy
  • Experience with scripting: Bash / PowerShell
  • Functional programming concepts
  • Familiarity with front-end integration (REST API usage, JSON handling)

 Other Skills

  • Innovation and thought leadership
  • Interest in learning new tools, languages, workflows
  • Strong communication and collaboration skills
  • Basic understanding of UI/UX principles


To know more about us: https://haystackanalytics.in




Read more
Client based at Bangalore location.

Client based at Bangalore location.

Agency job
Remote only
4 - 20 yrs
₹12L - ₹30L / yr
clinical trial data
EMR
electronics medical record
claims
registry data
+5 more

Position Overview:

We are seeking a highly motivated and skilled Real-World Evidence (RWE) Analyst to join our growing team. The successful candidate will be instrumental in generating crucial insights from real-world healthcare data to inform decision-making, improve patient outcomes, and advance medical understanding. This role offers an exciting opportunity to work with diverse healthcare datasets and contribute to impactful research that drives real-world change.

Key Responsibilities:

For Both RWE Analyst (Junior) & Senior RWE Analyst:

  • Data Expertise: Work extensively with real-world healthcare data, including Electronic Medical Records (EMR), claims data, and/or patient registries. (Experience with clinical trial data does not fulfill this requirement.)
  • Methodology: Apply appropriate statistical and epidemiological methodologies to analyze complex healthcare datasets.
  • Communication: Clearly communicate findings through presentations, reports, and data visualizations to both technical and non-technical audiences.
  • Collaboration: Collaborate effectively with cross-functional teams, including clinicians, epidemiologists, statisticians, and data scientists.
  • Quality Assurance: Ensure the accuracy, reliability, and validity of all analyses and reports.
  • Ethical Conduct: Adhere to all relevant data privacy regulations and ethical guidelines in real-world data research.

Specific Responsibilities for RWE Analyst (Junior):

  • Perform statistical analysis on real-world healthcare datasets under guidance.
  • Contribute to the development of analysis plans, often by implementing predefined methodologies or refining existing approaches.
  • Prepare and clean data for analysis, identifying and addressing data quality issues.
  • Assist in the interpretation of study results and the drafting of reports or presentations.
  • Support the preparation of journal publication materials based on RWE studies.

Specific Responsibilities for Senior RWE Analyst:

  • Analysis Design & Leadership: Independently design and develop comprehensive analysis plans from inception for RWE studies, identifying appropriate methodologies, data sources, and analytical approaches. This role requires a "thinker" who can conceptualize and drive the analytical strategy, not just execute pre-defined requests.
  • Project Management: Lead and manage RWE projects from conception to completion, ensuring timely delivery and high-quality outputs.
  • Mentorship: Mentor and guide junior RWE analysts, fostering their development in real-world data analysis and research.
  • Methodological Innovation: Proactively identify and evaluate new methodologies and technologies to enhance RWE capabilities.
  • Strategic Input: Provide strategic input on study design, data acquisition, and evidence generation strategies.

Qualifications:

For Both RWE Analyst (Junior) & Senior RWE Analyst:

  • Bachelor's or Master's degree in Epidemiology, Biostatistics, Public Health, Health Economics, Data Science, or a related quantitative field. (PhD preferred for Senior RWE Analyst).
  • Demonstrable hands-on experience working with real-world healthcare data, specifically EMR, claims, and/or registry data. Clinical trial data experience will not be considered as meeting this requirement.
  • Proficiency in at least one statistical programming language (e.g., R, Python, SAS, SQL).
  • Strong understanding of epidemiological study designs and statistical methods relevant to RWE.
  • Excellent analytical, problem-solving, and critical thinking skills.
  • Strong written and verbal communication skills.

Specific Qualifications for RWE Analyst (Junior):

  • 4+ years of experience in real-world data analysis in a healthcare or pharmaceutical setting.
  • Involvement with journal publications is highly desirable. (e.g., co-authorship, contribution to manuscript preparation).

Specific Qualifications for Senior RWE Analyst:

  • 5+ years of progressive experience in real-world data analysis, with a significant portion dedicated to independent study design and leadership.
  • A strong track record of journal publications is essential. (e.g., lead author, significant contribution to multiple peer-reviewed publications).
  • Proven ability to translate complex analytical findings into actionable insights for diverse stakeholders.
  • Experience with advanced analytical techniques (e.g., machine learning, causal inference) is a plus.

Preferred Skills (for both roles, but more emphasized for Senior):

  • Experience with large healthcare databases (e.g., IQVIA, Optum, IBM MarketScan, SEER, NDHM).
  • Knowledge of common data models (e.g., OMOP CDM).
  • Familiarity with regulatory guidelines and best practices for RWE generation.


Read more
VyTCDC
Gobinath Sundaram
Posted by Gobinath Sundaram
Hyderabad, Bengaluru (Bangalore), Pune
6 - 11 yrs
₹8L - ₹26L / yr
skill iconData Science
skill iconPython
Large Language Models (LLM)
Natural Language Processing (NLP)

POSITION / TITLE: Data Science Lead

Location: Offshore – Hyderabad/Bangalore/Pune

Who are we looking for?

Individuals with 8+ years of experience implementing and managing data science projects. Excellent working knowledge of traditional machine learning and LLM techniques. 

The candidate must demonstrate the ability to navigate and advise on complex ML ecosystems from a model-building and evaluation perspective. Experience in the NLP and chatbot domains is preferred.

We acknowledge that the job market is blurring the line between data roles: while software skills are necessary, the emphasis of this position is on data science skills, not on data, ML, or software engineering.

Responsibilities:

· Lead data science and machine learning projects, contributing to model development, optimization and evaluation. 

· Perform data cleaning, feature engineering, and exploratory data analysis.  

· Translate business requirements into technical solutions, document and communicate project progress, manage non-technical stakeholders.

· Collaborate with other DS and engineers to deliver projects.

Technical Skills – Must have:

· Experience in and understanding of the natural language processing (NLP) and large language model (LLM) landscape.

· Proficiency with Python for data analysis, supervised & unsupervised learning ML tasks.

· Ability to translate complex machine learning problem statements into specific deliverables and requirements.

· Should have worked with major cloud platforms such as AWS, Azure or GCP.

· Working knowledge of SQL and no-SQL databases.

· Ability to create data and ML pipelines for more efficient and repeatable data science projects using MLOps principles.

· Keep abreast with new tools, algorithms and techniques in machine learning and works to implement them in the organization.

· Strong understanding of evaluation and monitoring metrics for machine learning projects.

Technical Skills – Good to have:

· Track record of getting ML models into production

· Experience building chatbots.

· Experience with closed and open source LLMs.

· Experience with frameworks and technologies like scikit-learn, BERT, langchain, autogen…

· Certifications or courses in data science.

Education:

· Master’s/Bachelors/PhD Degree in Computer Science, Engineering, Data Science, or a related field. 

Process Skills:

· Understanding of  Agile and Scrum  methodologies.  

· Ability to follow SDLC processes and contribute to technical documentation.  

Behavioral Skills :

· Self-motivated and capable of working independently with minimal management supervision.

· Well-developed design, analytical & problem-solving skills

· Excellent communication and interpersonal skills.  

· Excellent team player, able to work with virtual teams in several time zones.

Read more
Client based at Bangalore location.

Client based at Bangalore location.

Agency job
Remote only
4 - 12 yrs
₹12L - ₹40L / yr
Data scientist
skill iconData Science
Prompt engineering
skill iconPython
Artificial Intelligence (AI)
+4 more

Role: Data Scientist

Location: Bangalore (Remote)

Experience: 4 - 15 years


Skills Required - Radiology, visual images, text, classical model, LLM multi model, Primarily Generative AI, Prompt Engineering, Large Language Models, Speech & Text Domain AI, Python coding, AI Skills, Real world evidence, Healthcare domain

 

JOB DESCRIPTION

We are seeking an experienced Data Scientist with a proven track record in Machine Learning, Deep Learning, and a demonstrated focus on Large Language Models (LLMs) to join our cutting-edge Data Science team. You will play a pivotal role in developing and deploying innovative AI solutions that drive real-world impact to patients and healthcare providers.

Responsibilities

• LLM Development and Fine-tuning: fine-tune, customize, and adapt large language models (e.g., GPT, Llama2, Mistral, etc.) for specific business applications and NLP tasks such as text classification, named entity recognition, sentiment analysis, summarization, and question answering. Experience in other transformer-based NLP models such as BERT, etc. will be an added advantage.

• Data Engineering: collaborate with data engineers to develop efficient data pipelines, ensuring the quality and integrity of large-scale text datasets used for LLM training and fine-tuning

• Experimentation and Evaluation: develop rigorous experimentation frameworks to evaluate model performance, identify areas for improvement, and inform model selection. Experience in LLM testing frameworks such as TruLens will be an added advantage.

• Production Deployment: work closely with MLOps and Data Engineering teams to integrate models into scalable production systems.

• Predictive Model Design and Implementation: leverage machine learning/deep learning and LLM methods to design, build, and deploy predictive models in oncology (e.g., survival models)

• Cross-functional Collaboration: partner with product managers, domain experts, and stakeholders to understand business needs and drive the successful implementation of data science solutions

• Knowledge Sharing: mentor junior team members and stay up to date with the latest advancements in machine learning and LLMs

Qualifications Required

• Doctoral or master’s degree in computer science, Data Science, Artificial Intelligence, or related field

• 5+ years of hands-on experience in designing, implementing, and deploying machine learning and deep learning models

• 12+ months of in-depth experience working with LLMs. Proficiency in Python and NLP-focused libraries (e.g., spaCy, NLTK, Transformers, TensorFlow/PyTorch).

• Experience working with cloud-based platforms (AWS, GCP, Azure)

Additional Skills

• Excellent problem-solving and analytical abilities

• Strong communication skills, both written and verbal

• Ability to thrive in a collaborative and fast-paced environment


Read more
NeoGenCode Technologies Pvt Ltd
Gurugram
3 - 6 yrs
₹2L - ₹12L / yr
skill iconPython
skill iconDjango
skill iconPostgreSQL
Payment gateways
skill iconRedis
+16 more

Job Title : Python Django Developer

Experience : 3+ Years

Location : Gurgaon Sector - 48

Working Days : 6 Days WFO (Monday to Saturday)


Job Summary :

We are looking for a skilled Python Django Developer with strong foundational knowledge in backend development, data structures, and operating system concepts.

The ideal candidate should have experience in Django and PostgreSQL, along with excellent logical thinking and multithreading knowledge.


Main Technical Skills : Python, Django (or Flask), PostgreSQL/MySQL, SQL & NoSQL ORM, Microservice Architecture, Third-party API integrations (e.g., payment gateways, SMS/email APIs), REST API development, JSON/XML, strong knowledge of data structures, multithreading, and OS concepts.


Key Responsibilities :

  • Write efficient, reusable, testable, and scalable code using the Django framework
  • Develop backend components, server-side logic, and statistical models
  • Design and implement high-availability, low-latency applications with robust data protection and security
  • Contribute to the development of highly responsive web applications
  • Collaborate with cross-functional teams on system design and integration

Mandatory Skills :

  • Strong programming skills in Python and Django (or similar frameworks like Flask).
  • Proficiency with PostgreSQL / MySQL and experience in writing complex queries.
  • Strong understanding of SQL and NoSQL ORM.
  • Solid grasp of data structures, multithreading, and operating system concepts.
  • Experience with RESTful API development and implementation of API security.
  • Knowledge of JSON/XML and their use in data exchange.
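
A small sketch of the multithreading knowledge called out above: a producer/consumer pattern in which `queue.Queue` provides the thread-safe hand-off and a lock guards shared state. The doubling workload is purely illustrative:

```python
# Producer/consumer sketch: a thread-safe queue feeds a pool of workers.
import threading
import queue

def run_pipeline(items):
    q = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            item = q.get()
            if item is None:          # sentinel: no more work for this thread
                q.task_done()
                break
            with lock:                # guard the shared results list
                results.append(item * 2)
            q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for item in items:                # produce work
        q.put(item)
    for _ in threads:                 # one sentinel per worker
        q.put(None)
    q.join()                          # wait until every task_done is called
    for t in threads:
        t.join()
    return sorted(results)

if __name__ == "__main__":
    print(run_pipeline([1, 2, 3]))    # → [2, 4, 6]
```

The sentinel-per-worker pattern plus `q.join()` is a standard way to shut a worker pool down cleanly without busy-waiting.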

Good-to-Have Skills :

  • Experience with Redis, MQTT, and message queues like RabbitMQ or Kafka.
  • Understanding of microservice architecture and third-party API integrations (e.g., payment gateways, SMS/email APIs).
  • Familiarity with MongoDB and other NoSQL databases.
  • Exposure to data science libraries such as Pandas, NumPy, Scikit-learn.
  • Knowledge in building and integrating statistical learning models.
Read more
NeoGenCode Technologies Pvt Ltd
Gurugram
3 - 6 yrs
₹2L - ₹12L / yr
skill iconPython
skill iconDjango
skill iconPostgreSQL
MySQL
SQL
+17 more

Job Title : Python Django Developer

Experience : 3+ Years

Location : Gurgaon

Working Days : 6 Days (Monday to Saturday)


Job Summary :

We are looking for a skilled Python Django Developer with strong foundational knowledge in backend development, data structures, and operating system concepts.

The ideal candidate should have experience in Django and PostgreSQL, along with excellent logical thinking and multithreading knowledge.


Technical Skills : Python, Django (or Flask), PostgreSQL/MySQL, SQL & NoSQL ORM, REST API development, JSON/XML, strong knowledge of data structures, multithreading, and OS concepts.


Key Responsibilities :

  • Write efficient, reusable, testable, and scalable code using the Django framework.
  • Develop backend components, server-side logic, and statistical models.
  • Design and implement high-availability, low-latency applications with robust data protection and security.
  • Contribute to the development of highly responsive web applications.
  • Collaborate with cross-functional teams on system design and integration.

Mandatory Skills :

  • Strong programming skills in Python and Django (or similar frameworks like Flask).
  • Proficiency with PostgreSQL / MySQL and experience in writing complex queries.
  • Strong understanding of SQL and NoSQL ORM.
  • Solid grasp of data structures, multithreading, and operating system concepts.
  • Experience with RESTful API development and implementation of API security.
  • Knowledge of JSON/XML and their use in data exchange.

Good-to-Have Skills :

  • Experience with Redis, MQTT, and message queues like RabbitMQ or Kafka
  • Understanding of microservice architecture and third-party API integrations (e.g., payment gateways, SMS/email APIs)
  • Familiarity with MongoDB and other NoSQL databases
  • Exposure to data science libraries such as Pandas, NumPy, Scikit-learn
  • Knowledge in building and integrating statistical learning models.
Read more
KGISL EDU

at KGISL EDU

2 recruiters
Dhivya V
Posted by Dhivya V
Coimbatore, Tamil nadu, Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad
12 - 15 yrs
₹12L - ₹20L / yr
Bachelor of Computer Science
Management Information System (MIS)
Artificial Intelligence (AI)
skill iconData Science

Head of the Department

AI and Data Science

12 to 15 years of Experience

Salary negotiable for immediate Joiners

hrkite@kgkite.ac.in

Read more
TechMynd Consulting

at TechMynd Consulting

2 candid answers
Suraj N
Posted by Suraj N
Bengaluru (Bangalore), Gurugram, Mumbai
4 - 8 yrs
₹10L - ₹24L / yr
skill iconData Science
skill iconPostgreSQL
skill iconPython
Apache
skill iconAmazon Web Services (AWS)
+5 more

Senior Data Engineer


Location: Bangalore, Gurugram (Hybrid)


Experience: 4-8 Years


Type: Full Time | Permanent


Job Summary:


We are looking for a results-driven Senior Data Engineer to join our engineering team. The ideal candidate will have hands-on expertise in data pipeline development, cloud infrastructure, and BI support, with a strong command of modern data stacks. You’ll be responsible for building scalable ETL/ELT workflows, managing data lakes and marts, and enabling seamless data delivery to analytics and business intelligence teams.


This role requires deep technical know-how in PostgreSQL, Python scripting, Apache Airflow, AWS or other cloud environments, and a working knowledge of modern data and BI tools.


Key Responsibilities:


PostgreSQL & Data Modeling

· Design and optimize complex SQL queries, stored procedures, and indexes
· Perform performance tuning and query plan analysis
· Contribute to schema design and data normalization

Data Migration & Transformation

· Migrate data from multiple sources to cloud or ODS platforms
· Design schema mapping and implement transformation logic
· Ensure consistency, integrity, and accuracy in migrated data


Python Scripting for Data Engineering

· Build automation scripts for data ingestion, cleansing, and transformation
· Handle file formats (JSON, CSV, XML), REST APIs, cloud SDKs (e.g., Boto3)
· Maintain reusable script modules for operational pipelines
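
The ingestion-and-cleansing duties above can be sketched in miniature: read CSV records, drop rows with missing values, and emit normalized JSON. The field names and data are invented for illustration:

```python
# Tiny ingest/cleanse/transform sketch using only the standard library.
import csv
import io
import json

RAW = """user_id,amount
1, 10.5
2,
3,7.25
"""

def cleanse(raw_csv: str) -> str:
    """Parse CSV text, skip incomplete rows, and return a JSON array."""
    rows = []
    for rec in csv.DictReader(io.StringIO(raw_csv)):
        amount = (rec.get("amount") or "").strip()
        if not amount:                 # drop rows with a missing amount
            continue
        rows.append({"user_id": int(rec["user_id"]), "amount": float(amount)})
    return json.dumps(rows)

if __name__ == "__main__":
    print(cleanse(RAW))                # row 2 is dropped as incomplete
```

A real pipeline would read from object storage or an API rather than an inline string, but the validate-then-normalize shape is the same.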


Data Orchestration with Apache Airflow

· Develop and manage DAGs for batch/stream workflows
· Implement retries, task dependencies, notifications, and failure handling
· Integrate Airflow with cloud services, data lakes, and data warehouses
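
A minimal sketch of the DAG work described above, assuming Airflow 2.x: two dependent tasks with retries configured through `default_args`. The DAG id, schedule, and task bodies are illustrative only:

```python
# Minimal Airflow 2.x DAG sketch: two tasks, retries, and a dependency.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling source data")

def load():
    print("writing to the warehouse")

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={
        "retries": 2,                          # retry failed tasks twice
        "retry_delay": timedelta(minutes=5),
    },
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load                        # load runs only after extract
```

In practice this file lives in the Airflow DAGs folder; the scheduler picks it up, and the `>>` operator expresses the task dependency.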


Cloud Platforms (AWS / Azure / GCP)

· Manage data storage (S3, GCS, Blob), compute services, and data pipelines
· Set up permissions, IAM roles, encryption, and logging for security
· Monitor and optimize cost and performance of cloud-based data operations

Data Marts & Analytics Layer

· Design and manage data marts using dimensional models
· Build star/snowflake schemas to support BI and self-serve analytics
· Enable incremental load strategies and partitioning

Modern Data Stack Integration

· Work with tools like DBT, Fivetran, Redshift, Snowflake, BigQuery, or Kafka
· Support modular pipeline design and metadata-driven frameworks
· Ensure high availability and scalability of the stack

BI & Reporting Tools (Power BI / Superset / Supertech)

· Collaborate with BI teams to design datasets and optimize queries
· Support development of dashboards and reporting layers
· Manage access, data refreshes, and performance for BI tools




Required Skills & Qualifications:


· 4–6 years of hands-on experience in data engineering roles
· Strong SQL skills in PostgreSQL (tuning, complex joins, procedures)
· Advanced Python scripting skills for automation and ETL
· Proven experience with Apache Airflow (custom DAGs, error handling)
· Solid understanding of cloud architecture (especially AWS)
· Experience with data marts and dimensional data modeling
· Exposure to modern data stack tools (DBT, Kafka, Snowflake, etc.)
· Familiarity with BI tools like Power BI, Apache Superset, or Supertech BI
· Version control (Git) and CI/CD pipeline knowledge is a plus
· Excellent problem-solving and communication skills

Read more
Vector Labs Tech
Victoria Gomez
Posted by Victoria Gomez
Remote only
0 - 15 yrs
₹7L - ₹10L / yr
skill iconData Science

We are seeking a talented and passionate Data Scientist to join our dynamic team. The ideal candidate will leverage their expertise in statistical analysis, machine learning, and data visualization to extract actionable insights from complex datasets. You will play a key role in developing and deploying predictive models, contributing to strategic business decisions, and driving innovation through data.

Responsibilities:

  • Data Analysis and Exploration:
  • Collect, clean, and preprocess large datasets from various sources.
  • Conduct exploratory data analysis to identify patterns, trends, and anomalies.
  • Visualize data using appropriate tools and techniques to communicate findings effectively.
  • Machine Learning and Modeling:
  • Develop and implement machine learning models for predictive analytics, classification, regression, and clustering.
  • Evaluate model performance and optimize algorithms for accuracy and efficiency.
  • Stay up-to-date with the latest advancements in machine learning and artificial intelligence.
  • Statistical Analysis:
  • Perform statistical hypothesis testing and A/B testing to validate findings.
  • Apply statistical methods to analyze data and draw meaningful conclusions.
  • Develop statistical models to forecast future trends and outcomes.
  • Data Visualization and Reporting:
  • Create compelling visualizations and dashboards to present data insights to stakeholders.
  • Prepare clear and concise reports summarizing findings and recommendations.
  • Effectively communicate complex data concepts to non-technical audiences.
  • Collaboration and Communication:
  • Collaborate with cross-functional teams, including engineers, product managers, and business analysts.
  • Present findings and recommendations to stakeholders at all levels of the organization.
  • Contribute to the development of data-driven strategies and initiatives.
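
As one concrete instance of the A/B-testing responsibility above, here is a two-proportion z-test using only the standard library. The conversion counts are invented, and a production analysis would typically reach for SciPy or statsmodels instead:

```python
# Two-proportion z-test for an A/B experiment, standard library only.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for H0: both conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))    # two-sided normal tail
    return z, p_value

if __name__ == "__main__":
    # Variant A: 120/1000 conversions; variant B: 150/1000 (made-up figures).
    z, p = two_proportion_z(120, 1000, 150, 1000)
    print(round(z, 3), round(p, 4))
```

With these made-up counts the difference is borderline significant at the conventional 5% level, which is exactly the kind of result that should trigger a power/sample-size discussion rather than an immediate rollout.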


Read more
Vector Labs Tech
Victoria Gomez
Posted by Victoria Gomez
Remote only
0 - 30 yrs
₹9L - ₹25L / yr
skill iconData Analytics
skill iconData Science
Data management

Key Responsibilities:

  • Data Collection and Preparation:
  • Gathering data from various sources (databases, APIs, files).
  • Cleaning and preprocessing data (handling missing values, outliers, inconsistencies).
  • Transforming data into a suitable format for analysis.
  • Data Analysis:
  • Performing exploratory data analysis (EDA) to identify patterns and trends.
  • Applying statistical techniques and machine learning algorithms.
  • Creating data visualizations (charts, graphs) to communicate findings.
  • Model Development and Evaluation:
  • Assisting in the development and training of machine learning models.
  • Evaluating model performance using appropriate metrics.
  • Contributing to model tuning and optimization.


Read more
Vector Labs Tech
Victoria Gomez
Posted by Victoria Gomez
Remote only
0 - 50 yrs
₹10L - ₹20L / yr
skill iconData Science

Data Collection and Preprocessing:

  • Gather and clean data from various sources (e.g., databases, APIs, web scraping).
  • Perform data validation and ensure data quality.
  • Transform and prepare data for analysis and modeling.

Data Analysis and Modeling:

  • Conduct exploratory data analysis (EDA) to identify patterns and trends.
  • Develop and implement machine learning models (e.g., regression, classification, clustering).
  • Evaluate model performance and optimize for accuracy and efficiency.
  • Apply statistical techniques and algorithms to solve business problems.

Data Visualization and Reporting:

  • Create compelling visualizations to communicate insights to stakeholders.
  • Develop dashboards and reports to track key performance indicators (KPIs).
  • Present findings to technical and non-technical audiences.


Read more
HyperNovas Tech
Michael Reyes
Posted by Michael Reyes
Remote only
0 - 50 yrs
₹5L - ₹10L / yr
Data Structures
skill iconData Analytics
skill iconData Science

Data Scientists are analytical experts who use their skills in mathematics, statistics, and computer science to solve complex business problems. They collect, analyze, and interpret large datasets to extract meaningful insights and develop data-driven solutions. Their work helps organizations make informed decisions, improve processes, and gain a competitive edge.

Key Responsibilities:

  • Data Collection and Preprocessing:
  • Gathering data from various sources (databases, APIs, web scraping, etc.).
  • Cleaning and transforming data to ensure quality and consistency.
  • Handling missing values, outliers, and inconsistencies.
  • Integrating data from multiple sources.
  • Data Analysis and Exploration:
  • Conducting exploratory data analysis (EDA) to identify patterns, trends, and relationships.
  • Applying statistical methods and hypothesis testing to validate findings.
  • Visualizing data using charts, graphs, and other tools to communicate insights.
  • Model Development and Deployment:
  • Developing and implementing machine learning models (e.g., regression, classification, clustering, deep learning).
  • Evaluating model performance and fine-tuning parameters.
  • Deploying models into production environments.
  • Creating and maintaining machine learning pipelines.
  • Communication and Collaboration:
  • Presenting findings and recommendations to stakeholders in a clear and concise manner.
  • Collaborating with cross-functional teams (e.g., engineers, product managers, business analysts).
  • Documenting code, models, and processes.
  • Translating business requirements into technical implementations.
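
The model-development bullets above can be sketched end to end at toy scale: fit a nearest-centroid classifier on training data, then evaluate accuracy on held-out points. Data and labels are invented, and a real pipeline would use scikit-learn rather than this hand-rolled version:

```python
# Toy train/evaluate workflow: nearest-centroid classifier, stdlib only.
import math

def fit_centroids(X, y):
    """Compute the mean feature vector (centroid) of each class."""
    sums, counts = {}, {}
    for xi, yi in zip(X, y):
        acc = sums.setdefault(yi, [0.0] * len(xi))
        for j, v in enumerate(xi):
            acc[j] += v
        counts[yi] = counts.get(yi, 0) + 1
    return {c: [v / counts[c] for v in acc] for c, acc in sums.items()}

def predict(centroids, xi):
    """Assign xi to the class whose centroid is nearest (Euclidean)."""
    return min(centroids, key=lambda c: math.dist(xi, centroids[c]))

def accuracy(centroids, X, y):
    hits = sum(predict(centroids, xi) == yi for xi, yi in zip(X, y))
    return hits / len(y)

if __name__ == "__main__":
    X_train = [[0, 0], [1, 0], [9, 9], [10, 9]]
    y_train = ["low", "low", "high", "high"]
    model = fit_centroids(X_train, y_train)
    print(accuracy(model, [[0.5, 0.2], [9.5, 8.8]], ["low", "high"]))
```

The shape of the workflow — fit on one split, score on another — is the part that carries over to real model development; only the model itself gets more sophisticated.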


Read more
Quenzo Tech
Aaron Diaz
Posted by Aaron Diaz
Remote only
0 - 50 yrs
₹10L - ₹20L / yr
Data Visualization
skill iconData Analytics
skill iconData Science

We are seeking an experienced Data Scientist to join our data-driven team. As a Data Scientist, you will work with large datasets, apply advanced analytics techniques, and build machine learning models to provide actionable insights that drive business decisions. You will collaborate with various teams to translate complex data into clear recommendations and innovative solutions.

Key Responsibilities:

  • Analyze large datasets to identify trends, patterns, and insights that can inform business strategy.
  • Develop, implement, and maintain machine learning models and algorithms to solve complex problems.
  • Work closely with stakeholders to understand business objectives and translate them into data science tasks.
  • Preprocess, clean, and organize raw data from various sources for analysis.
  • Conduct statistical analysis and build predictive models to support data-driven decision-making.
  • Create data visualizations and reports to communicate findings clearly and effectively to both technical and non-technical teams.
  • Design experiments and A/B testing to evaluate business initiatives.
  • Ensure the scalability and performance of data pipelines and machine learning models.
  • Collaborate with engineering teams to integrate data science solutions into production systems.
  • Continuously stay updated with the latest developments in data science, machine learning, and analytics technologies.
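
The A/B-testing responsibility above usually reduces to comparing conversion rates between two variants. A minimal two-proportion z-test using only the standard library (the sample counts are made up for illustration):

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant B converts at 6.5% vs. 5.0% for A on 4,000 users each.
z, p = two_proportion_z_test(conv_a=200, n_a=4000, conv_b=260, n_b=4000)
print(p < 0.05)  # True: the lift is significant at the 5% level
```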


Read more
Unicornis AI
Sachin Anbhule
Posted by Sachin Anbhule
Navi Mumbai
5 - 7 yrs
₹9L - ₹15L / yr
skill iconPython
skill iconData Science
OpenAI
Retrieval Augmented Generation (RAG)
Large Language Models (LLM)

Note: We are looking for immediate joiners with 6+ years of experience.


Job Description

UnicornisAI is seeking a Senior Data Scientist with expertise in chatbot development using Retrieval-Augmented Generation (RAG) and OpenAI. This role is ideal for someone with a strong background in machine learning, natural language processing (NLP), and AI model deployment. If you are passionate about developing cutting-edge AI-driven solutions, we’d love to have you on our team.


Key Responsibilities

- Design and develop AI-powered chatbots using Retrieval-Augmented Generation (RAG), OpenAI models (GPT-4, etc.), and vector databases

- Build and fine-tune large language models (LLMs) to improve chatbot performance

- Implement document retrieval and knowledge management systems for chatbot responses

- Optimize NLP pipelines and model performance using state-of-the-art techniques

- Work with structured and unstructured data to enhance chatbot intelligence

- Deploy and maintain AI models in cloud environments such as AWS, Azure, or GCP

- Collaborate with engineering teams to integrate AI solutions into products

- Stay updated with the latest advancements in AI, NLP, and RAG-based architectures
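
The RAG loop the role describes — embed documents, retrieve the most relevant one, and place it in the model prompt — can be sketched as below. A toy bag-of-words embedding stands in for a real embedding model and vector database (in practice OpenAI embeddings plus FAISS or Pinecone); the documents and query are invented:

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy embedding: word counts. A real system would call an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "refunds are processed within five business days",
    "our office is open monday to friday",
    "chatbot escalation goes to a human agent",
]
query = "how long do refunds take"
# Retrieve the most similar document and stuff it into the LLM prompt.
best = max(docs, key=lambda d: cosine(embed(query), embed(d)))
prompt = f"Answer using this context:\n{best}\n\nQuestion: {query}"
print(best)
```

The prompt would then be sent to a chat-completion model; swapping the toy `embed` for a learned embedding and `docs` for a vector-database query is the production version of the same loop.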


Required Skills & Qualifications

- 6+ years of experience in data science, AI, or a related field

- Strong knowledge of RAG, OpenAI APIs (GPT-4, GPT-3.5, etc.), LLM fine-tuning, and embeddings

- Proficiency in Python, TensorFlow, PyTorch, and other ML frameworks

- Experience with vector databases such as FAISS, Pinecone, or Weaviate

- Expertise in NLP techniques such as Named Entity Recognition (NER), text summarization, and semantic search

- Hands-on experience in building and deploying AI models in production

- Knowledge of cloud platforms like AWS Sagemaker, Azure AI, or Google Vertex AI

- Strong problem-solving and analytical skills


Nice-to-Have Skills

- Experience with MLOps tools for model monitoring and retraining

- Understanding of prompt engineering and LLM chaining techniques

- Exposure to LangChain or similar frameworks for RAG-based chatbots


Location & Work Mode

- Open to remote or hybrid work, based on location


Interested candidates can email their resumes to Sachin at unicornisai.com

Read more
Highfly Sourcing

at Highfly Sourcing

Highfly Hr
Posted by Highfly Hr
Augsburg, Germany, Málaga (Spain), Singapore, Dubai, Kuwait, Qatar, New Zealand, RIYADH (Saudi Arabia), Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad
5 - 15 yrs
₹15L - ₹30L / yr
skill iconData Analytics
skill iconData Science
Data management
Product Management
Customer Relationship Management (CRM)
+5 more

As a B2B Marketer, you will handle marketing efforts for brand awareness, as well as lead generation, website development, content curation/creation, and sales collateral.

Roles and Responsibilities

  • Design content marketing strategies and set short-term goals.
  • Online and offline marketing: hands-on experience with digital media, including SEM, display, social, email, and affiliate channels.
  • Experience working with a sales team in both Sales Enablement and Account-Based Approach.
  • Knowledge of Demand Generation tactics and Lead Conversion principles.
  • Understanding of A/B and multivariate testing, user segmentation, and reporting processes.
  • Ability to create value propositions that communicate clearly to targeted audiences.
  • Extensive experience creating audience segments and developing marketing campaigns that deliver a targeted message and create affinity with brands.
  • Accomplished at leveraging full value from marketing automation processes and tools.

Requirements

  • Degree in Marketing or relevant field.
  • 3+ years of hands-on marketing experience in a B2B role.
  • Excellent understanding of B2B marketing.
  • Demonstrated ability to manage the budgeting process and use of analytical skills.
  • Proven track record in delivering marketing campaigns that drive sales growth.
  • Ability to remotely create and manage marketing strategies.
  • An understanding of agile marketing, to identify and focus on high-value projects and then continuously and incrementally improve the results over time.
  • Excellent communication skills.


Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Noida
4 - 8 yrs
₹2L - ₹10L / yr
skill iconMachine Learning (ML)
skill iconData Science
Azure OpenAI
skill iconPython
pandas
+11 more

Job Title : Sr. Data Scientist

Experience : 5+ Years

Location : Noida (Hybrid – 3 Days in Office)

Shift Timing : 2 PM to 11 PM

Availability : Immediate


Job Description :

We are seeking a Senior Data Scientist to develop and implement machine learning models, predictive analytics, and data-driven solutions.

The role involves data analysis, dashboard development (Looker Studio), NLP, Generative AI (LLMs, Prompt Engineering), and statistical modeling.

Strong expertise in Python (Pandas, NumPy), Cloud Data Science (AWS SageMaker, Azure OpenAI), Agile (Jira, Confluence), and stakeholder collaboration is essential.


Mandatory skills : Machine Learning, Cloud Data Science (AWS SageMaker, Azure OpenAI), Python (Pandas, NumPy), Data Visualization (Looker Studio), NLP & Generative AI (LLMs, Prompt Engineering), Statistical Modeling, Agile (Jira, Confluence), and strong stakeholder communication.

Read more
Wallero technologies

at Wallero technologies

Hari krishna
Posted by Hari krishna
Hyderabad
4 - 6 yrs
₹20L - ₹25L / yr
skill iconData Science
Computer Vision
skill iconPython
skill iconMachine Learning (ML)
Natural Language Processing (NLP)
+1 more
  • Data Scientist with 4+ years of experience
  • Good working experience in computer vision and ML engineering
  • Strong knowledge of statistical modeling, hypothesis testing, and regression analysis
  • Should be able to develop APIs
  • Proficiency in Python and SQL
  • Should have Azure knowledge
  • Basic knowledge of NLP
  • Analytical thinking and problem-solving abilities
  • Excellent communication, Strong collaboration skills
  • Should be able to work independently
  • Attention to detail and commitment to high-quality results
  • Adaptability to fast-paced, dynamic environments
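
The regression-analysis requirement above can be illustrated with closed-form ordinary least squares for a single predictor (toy data, standard library only):

```python
from statistics import mean

def ols(xs, ys):
    """Closed-form OLS for one predictor: slope and intercept."""
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 8.1, 9.9]
slope, intercept = ols(xs, ys)
print(round(slope, 2))  # 1.97: roughly y = 2x
```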


Read more
DCB Bank
Agency job
via Pluginlive by Harsha Saggi
Mumbai
4 - 10 yrs
₹10L - ₹20L / yr
skill iconMachine Learning (ML)
skill iconR Language
Banking
NBFC
ECL
+10 more

About the company


DCB Bank is a new generation private sector bank with 442 branches across India. It is a scheduled commercial bank regulated by the Reserve Bank of India. DCB Bank’s business segments are Retail banking, Micro SME, SME, mid-Corporate, Agriculture, Government, Public Sector, Indian Banks, Co-operative Banks and Non-Banking Finance Companies.


Job Description


Department: Risk Analytics


CTC: Max 18 Lacs


Grade: Sr Manager/AVP


Experience: Min 4 years of relevant experience

We are looking for a Data Scientist to join our growing team of Data Science experts and manage the processes and people responsible for accurate data collection, processing, modelling, analysis, implementation, and maintenance.


Responsibilities

  • Understand, monitor, and maintain existing financial scorecards (ML-based) and make changes to the models when required.
  • Perform statistical analysis in R and assist the IT team with deployment of ML models and analytical frameworks in Python.
  • Handle multiple tasks and prioritize work effectively.
  • Lead cross-functional projects using advanced data modelling and analysis techniques to discover insights that will guide strategic decisions and uncover optimization opportunities.
  • Develop clear, concise, and actionable solutions and recommendations for the client’s business needs; actively explore the client’s business and formulate ideas that help the client cut costs efficiently or reach growth/revenue/profitability targets faster.
  • Build, develop and maintain data models, reporting systems, data automation systems, dashboards and performance metrics support that support key business decisions.
  • Design and build technical processes to address business issues.
  • Oversee the design and delivery of reports and insights that analyse business functions and key operations and performance metrics.
  • Manage and optimize processes for data intake, validation, mining, and engineering as well as modelling, visualization, and communication deliverables.
  • Communicate results and business impacts of insight initiatives to the Management of the company.

Requirements

  • Industry knowledge
  • 4+ years of experience in the financial services industry, particularly retail credit, is a must.
  • Candidates should have worked either in the banking sector (banks/HFCs/NBFCs) or in consulting organizations serving these clients.
  • Experience in credit risk model building such as application scorecards, behaviour scorecards, and/ or collection scorecards.
  • Experience in portfolio monitoring, model monitoring, model calibration
  • Knowledge of ECL/ Basel preferred.
  • Educational qualification: Advanced degree in finance, mathematics, econometrics, or engineering.
  • Technical knowledge: Strong data handling skills in databases such as SQL and Hadoop. Familiarity with data visualization tools, such as SAS VI/Tableau/Power BI, is preferred.
  • Expertise in either R or Python; SAS knowledge will be a plus.
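
Scorecards of the kind described above conventionally map model log-odds onto a points scale. A sketch of the points-to-double-odds (PDO) scaling, with illustrative parameters (600 points at 50:1 good:bad odds, 20 points to double the odds), which are assumptions, not DCB Bank's actual calibration:

```python
from math import log

def score_from_odds(odds, base_score=600, base_odds=50, pdo=20):
    """Scale good:bad odds to scorecard points using the PDO convention."""
    factor = pdo / log(2)
    offset = base_score - factor * log(base_odds)
    return offset + factor * log(odds)

print(round(score_from_odds(50)))   # 600: base odds map to the base score
print(round(score_from_odds(100)))  # 620: doubling the odds adds one PDO
```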

Soft skills:

  • Ability to quickly adapt to the analytical tools and development approaches used within DCB Bank
  • Ability to multi-task; good communication and teamwork skills.
  • Ability to manage day-to-day written and verbal communication with relevant stakeholders.
  • Ability to think strategically and make changes to data when required.


Read more
Qultured Media Private Limited
Bengaluru (Bangalore)
1 - 5 yrs
₹10L - ₹12L / yr
skill iconData Science
Artificial Intelligence (AI)
Large Language Models (LLM)
Product Management
Product design
+2 more

AI Product Manager


About Us:


At Formi, we’re not just riding the wave of innovation—we’re the ones creating it. Our AI-powered solutions are revolutionizing how businesses operate, pushing boundaries, and opening up new realms of possibility. We’re building the future of intelligent operations, and whether you join us or not, we’re going to change the game. But with you on board? We’ll do it faster.


About the Role:


We’re looking for a Product Manager who’s not afraid to take risks, who questions everything, and who can transform bold, crazy ideas into products that reshape entire industries. You won’t just manage; you’ll build, you’ll innovate, and you’ll lead. The best ideas? They come from people who think differently. If that’s you, let’s get started.


Your Mission (If You’re Ready for It):


We’re not hiring you for the usual. This is not a role for anyone looking to play it safe. We need someone who’s ready to challenge the status quo and make radical changes. You’ll be building the future with us, so expect to be pushed—and to push back. We’re not here to settle for less.


Key Responsibilities:


  • Vision and Strategy: Forget business as usual. Your job is to craft strategies that disrupt markets and create competitive advantages that others can only dream of.
  • Cross-Functional Leadership: You’ll lead from the front, working closely with engineering, sales, and marketing to turn ideas into reality. From concept to launch, you’re the one guiding the ship.
  • Customer-Centric Innovation: Don’t just listen to customers—understand them at a deeper level. Then use that understanding to create products that make people say, “Why didn’t anyone think of this sooner?”
  • End-to-End Ownership: You own it all. From the first spark of an idea to delivering a product that changes the game.
  • Data-Driven Decision Making: We don’t guess here. Use data, trends, and insights to make decisions that propel our products forward. Let the numbers tell the story.
  • Stakeholder Communication: Communicate like a visionary. Show the people around you the path forward and how it all ties together.
  • Continuous Innovation: Complacency is the enemy. Keep pushing the envelope, iterating, and improving so we stay miles ahead of the competition.



Who Should Apply?


  • You’re not just a thinker, you’re a doer. You thrive in fast-paced, innovative environments where the status quo is meant to be broken.
  • You’ve got at least 2+ years of experience in product management, preferably in tech or SaaS, and you’ve led teams to deliver products that actually make an impact.
  • You’re data-obsessed. Your decisions are backed by analytics, and you know how to measure success.
  • You’re not just a communicator; you’re an inspirer. You can articulate a vision so clearly, people can’t help but rally behind it.


This Role is NOT for You If:


  • You’re just looking for another job.
  • You think “good enough” is, well, good enough.
  • You’d rather play it safe than take bold, calculated risks.


Why Join Us?


  • Impact: You’ll be part of a company that’s literally changing how businesses operate. Your work will have a direct impact on the future of creating a surplus economy.
  • Growth: We’re scaling fast, and with that comes insane opportunities for personal and professional development.
  • Culture: We move fast, we break things (intelligently), and we value big ideas. We’re building something massive, and you’ll be a key part of it.


Are you ready to disrupt entire industries and make a dent in the universe?

Apply now and let’s build the future together.



Read more
Euracle
Shikhar Agrawal
Posted by Shikhar Agrawal
Remote only
2 - 5 yrs
₹10L - ₹20L / yr
skill iconPython
Algorithms
skill iconData Analytics
skill iconData Science
Data Structures
+1 more

About the Role

We are actively seeking talented Senior Python Developers to join our ambitious team dedicated to pushing the frontiers of AI technology. This opportunity is tailored for professionals who thrive on developing innovative solutions and who aspire to be at the forefront of AI advancements. You will work with different companies in the US that are looking to develop both commercial and research AI solutions.


Required Skills:

  • Write effective Python code to tackle complex issues
  • Use business sense and analytical abilities to glean valuable insights from public databases 
  • Clearly express the reasoning and logic when writing code in Jupyter notebooks or other suitable mediums
  • Extensive experience working with Python 
  • Proficiency with the language's syntax and conventions
  • Previous experience tackling algorithmic problems
  • Nice to have some prior Software Quality Assurance and Test Planning experience
  • Excellent spoken and written English communication skills


The ideal candidate should be able to:

  • Clearly explain their strategies for problem-solving.
  • Design practical solutions in code.
  • Develop test cases to validate their solutions.
  • Debug and refine their solutions for improvement.
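
The workflow above — explain a strategy, implement it, then validate with test cases — might look like this for a classic algorithmic problem (two-sum via a hash map; a generic example, not an actual interview task):

```python
def two_sum(nums, target):
    """Return indices of two numbers summing to target.

    Strategy: one pass with a hash map of seen values -> index, O(n) time.
    """
    seen = {}
    for i, x in enumerate(nums):
        if target - x in seen:
            return seen[target - x], i
        seen[x] = i
    return None

# Test cases validating the solution, including a duplicate and a miss.
assert two_sum([2, 7, 11, 15], 9) == (0, 1)
assert two_sum([3, 3], 6) == (0, 1)
assert two_sum([1, 2], 7) is None
```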


Read more
Solar Secure
Saurabh Singh
Posted by Saurabh Singh
Remote only
0 - 1 yrs
₹8000 - ₹10000 / mo
skill iconData Science
skill iconPython
Jupyter Notebook

About the Company :

Nextgen Ai Technologies is at the forefront of innovation in artificial intelligence, specializing in developing cutting-edge AI solutions that transform industries. We are committed to pushing the boundaries of AI technology to solve complex challenges and drive business success.


Currently offering a two-month Data Science Internship.


Data science projects the interns will work on:

Project 01 : Image Caption Generator Project in Python

Project 02 : Credit Card Fraud Detection Project

Project 03 : Movie Recommendation System

Project 04 : Customer Segmentation

Project 05 : Brain Tumor Detection with Data Science


Eligibility


A PC or Laptop with decent internet speed.

Good understanding of English language.

Any graduate with a desire to become a web developer. Freshers are welcome.

Knowledge of HTML, CSS and JavaScript is a plus but NOT mandatory.

Freshers are welcome. You will also receive proper training, so don't hesitate to apply even if you don't have a coding background.


Please note that this is an internship, not a job.


We recruit permanent employees only from among our interns (if needed).


Duration : 02 Months 

MODE: Work From Home (Online)


Responsibilities


Manage reports and sales leads in Salesforce.com CRM.

Develop content, manage design, and administer user access to SharePoint sites for customers and employees.

Build data-driven reports, stored procedures, and query optimizations using SQL and PL/SQL.

Learn the essentials of C++ and Java to refine code and build the exterior layer of web pages.

Configure and load XML data for the BVT tests.

Set up a GitHub page.

Develop Spark scripts using the Scala shell as per requirements.

Develop and A/B test improvements to business survey questions on iOS.

Deploy statistical models to various company data streams using Linux shells.

Create monthly performance-based client billing reports using MySQL and NoSQL databases.

Utilize Hadoop and MapReduce to generate dynamic queries and extract data from HDFS.

Create source code in JavaScript and PHP to make web pages functional.

Excellent problem-solving skills and the ability to work independently or as part of a team.

Effective communication skills to convey complex technical concepts.
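
The Hadoop/MapReduce item above follows a simple pattern that can be illustrated in pure Python (a word-count sketch, not an actual Hadoop job):

```python
from collections import defaultdict
from itertools import chain

def mapper(line):
    # Emit (word, 1) pairs, as a Hadoop map task would.
    return [(w, 1) for w in line.lower().split()]

def reducer(pairs):
    # Sum counts per key, as a Hadoop reduce task would after the shuffle.
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

lines = ["big data big insights", "data pipelines"]
counts = reducer(chain.from_iterable(mapper(l) for l in lines))
print(counts["big"], counts["data"])  # 2 2
```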


Benefits


Internship Certificate

Letter of recommendation

Stipend Performance Based

Part time work from home (2-3 Hrs per day)

5 days a week, Fully Flexible Shift


Read more
RAPTORX.AI

at RAPTORX.AI

Pratyusha Vemuri
Posted by Pratyusha Vemuri
Hyderabad
3 - 6 yrs
₹5L - ₹15L / yr
Fraud
skill iconData Science
skill iconMachine Learning (ML)

Data engineers:


Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights. This also includes developing and maintaining scalable data pipelines and building out new API integrations to support continuing increases in data volume and complexity.

Constructing infrastructure for efficient ETL processes from various sources and storage systems.

Collaborating closely with Product Managers and Business Managers to design technical solutions aligned with business requirements.

Leading the implementation of algorithms and prototypes to transform raw data into useful information.

Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.

Creating innovative data validation methods and data analysis tools.

Ensuring compliance with data governance and security policies.

Interpreting data trends and patterns to establish operational alerts.

Developing analytical tools, utilities, and reporting mechanisms.

Conducting complex data analysis and presenting results effectively.

Preparing data for prescriptive and predictive modeling.

Continuously exploring opportunities to enhance data quality and reliability.

Applying strong programming and problem-solving skills to develop scalable solutions.

Writing unit/integration tests and contributing to documentation.


Must have ....


6 to 8 years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines.

High proficiency in Scala/Java/Python, API frameworks (e.g., Swagger), and Spark for applied large-scale data processing.

Expertise with big data technologies, including Spark, Data Lake, Delta Lake, and Hive, and with API development (e.g., Flask).

Solid understanding of batch and streaming data processing techniques.

Proficient knowledge of the Data Lifecycle Management process, including data collection, access, use, storage, transfer, and deletion.

Expert-level ability to write complex, optimized SQL queries across extensive data volumes.

Experience with RDBMS and OLAP databases like MySQL, Redshift.

Familiarity with Agile methodologies.

Obsession for service observability, instrumentation, monitoring, and alerting.

Knowledge or experience in architectural best practices for building data pipelines.
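
A minimal end-to-end sketch of the extract-validate-load pipelines described above, using only the standard library (CSV in, validation in the middle, SQLite out; the `payments` schema is invented for illustration):

```python
import csv
import io
import sqlite3

raw = "id,amount\n1,10.5\n2,\n3,7.25\n"  # row 2 fails validation

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Drop rows with a missing amount, then cast types.
    return [(int(r["id"]), float(r["amount"])) for r in rows if r["amount"]]

def load(rows):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE payments (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO payments VALUES (?, ?)", rows)
    return conn

conn = load(transform(extract(raw)))
total = conn.execute("SELECT SUM(amount) FROM payments").fetchone()[0]
print(total)  # 17.75
```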


Good to Have:


Passion for testing strategy, problem-solving, and continuous learning.

Willingness to acquire new skills and knowledge.

Possess a product/engineering mindset to drive impactful data solutions.

Experience working in distributed environments with teams scattered geographically.

Read more
Easiofy Solutions

at Easiofy Solutions

Noor Fatma
Posted by Noor Fatma
Remote only
2 - 3 yrs
₹8L - ₹10L / yr
skill iconData Science
Artificial Intelligence (AI)

We are working on AI for medical images. We need someone who can run pre-trained models and also train new ones.

Read more
Remote only
5 - 8 yrs
₹10L - ₹15L / yr
skill iconVue.js
skill iconAngularJS (1.x)
skill iconAngular (2+)
skill iconReact.js
skill iconJavascript
+6 more

In this position, you will play a pivotal role in collaborating with our CFO, CTO, and our dedicated technical team to craft and develop cutting-edge AI-based products.


Role and Responsibilities:


- Develop and maintain Python-based software applications.

- Design and work with databases using SQL.

- Use Django, Streamlit, and front-end frameworks like Node.js and Svelte for web development.

- Create interactive data visualizations with charting libraries.

- Collaborate on scalable architecture and experimental tech.

- Work with AI/ML frameworks and data analytics.

- Utilize Git, DevOps basics, and JIRA for project management.


Skills and Qualifications:

- Strong Python programming skills.

- Proficiency in OOP and SQL.

- Experience with Django, Streamlit, Node.js, and Svelte.

- Familiarity with charting libraries.

- Knowledge of AI/ML frameworks.

- Basic Git and DevOps understanding.

- Effective communication and teamwork.


Company details: We are a team of Enterprise Transformation Experts who deliver radically transforming products, solutions, and consultation services to businesses of any size. Our exceptional team of diverse and passionate individuals is united by a common mission to democratize the transformative power of AI.


Website: Blue Hex Software – AI | CRM | CXM & DATA ANALYTICS

Read more
Why apply via Cutshort?
Connect with actual hiring teams and get their fast response. No spam.
Find more jobs