
Data Scientist

Posted by Harshal Patni
4 - 8 yrs
₹5L - ₹15L / yr
Navi Mumbai
Skills
R
Artificial Neural Networks
UIMA
Python
Big Data
Hadoop
Machine Learning (ML)
Natural Language Processing (NLP)
Nextalytics is an offshore research, development, and consulting company based in India that focuses on high-quality, cost-effective software development and data science solutions. At Nextalytics, we have developed a culture that encourages employees to be creative, innovative, and playful. We reward intelligence, dedication, and out-of-the-box thinking; if you have these, Nextalytics will be the perfect launch pad for your dreams. Nextalytics is looking for smart, driven, and energetic new team members.

About Nextalytics Software Services Pvt Ltd

Founded: 2014
Type: Services
Size: 20-100
Stage: Profitable

About

Founded in 2014, Nextalytics Software Services Pvt Ltd is a profitable company based in Navi Mumbai. It has 20-100 employees and works in the domain of IT consultancy.

Connect with the team

Harshal Patni

Company social profiles

LinkedIn | Twitter | Facebook

Similar jobs

Synorus
Posted by Synorus Admin
Remote only
0 - 1 yrs
₹0.2L - ₹1L / yr
Google Colab
Retrieval Augmented Generation (RAG)
Large Language Models (LLM) tuning
Python
PyTorch
+3 more

About Synorus

Synorus is building a next-generation ecosystem of AI-first products. Our flagship legal-AI platform LexVault is redefining legal research, drafting, knowledge retrieval, and case intelligence using domain-tuned LLMs, private RAG pipelines, and secure reasoning systems.

If you are passionate about AI, legaltech, and training high-performance models — this internship will put you on the front line of innovation.


Role Overview

We are seeking passionate AI/LLM Engineering Interns who can:

  • Fine-tune LLMs for legal domain use-cases
  • Train and experiment with open-source foundation models
  • Work with large datasets efficiently
  • Build RAG pipelines and text-processing frameworks
  • Run model training workflows on Google Colab / Kaggle / Cloud GPUs

This is a hands-on engineering and research internship — you will work directly with senior founders & technical leadership.

Key Responsibilities

  • Fine-tune transformer-based models (Llama, Mistral, Gemma, etc.)
  • Build and preprocess legal datasets at scale
  • Develop efficient inference & training pipelines
  • Evaluate models for accuracy, hallucinations, and trustworthiness
  • Implement RAG architectures (vector DBs + embeddings)
  • Work with GPU environments (Colab/Kaggle/Cloud)
  • Contribute to model improvements, prompt engineering & safety tuning

Must-Have Skills

  • Strong knowledge of Python & PyTorch
  • Understanding of LLMs, Transformers, Tokenization
  • Hands-on experience with HuggingFace Transformers
  • Familiarity with LoRA/QLoRA, PEFT training
  • Data wrangling: Pandas, NumPy, tokenizers
  • Ability to handle multi-GB datasets efficiently
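Since LoRA/QLoRA appears in the list above: the core idea is to freeze the base weight matrix and train only a small low-rank update. A dependency-free sketch of that idea in plain Python (in practice you would use HuggingFace PEFT; the matrices W, A, B and the sizes here are illustrative):

```python
# Sketch of the LoRA idea: instead of updating a full d x d weight
# matrix W, train two small matrices A (r x d) and B (d x r) and use
# W_eff = W + (alpha / r) * (B @ A). Because r << d, there are far
# fewer trainable parameters. Plain-Python matmul keeps it dependency-free.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha=16, r=2):
    """Return W + (alpha / r) * (B @ A), the merged LoRA weight."""
    BA = matmul(B, A)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(wr, dr)]
            for wr, dr in zip(W, BA)]

d, r = 4, 2
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen base weight
A = [[0.1] * d for _ in range(r)]   # trainable, r x d
B = [[0.0] * r for _ in range(d)]   # trainable, d x r (zero-init, so no change at start)

merged = lora_effective_weight(W, A, B)
# With B initialised to zero the update is zero, so merged == W.
print(merged == W)  # True
```

With PEFT this roughly corresponds to wrapping a model via `LoraConfig(r=..., lora_alpha=...)`: only A and B receive gradients, which is what makes fine-tuning large models feasible on a single Colab or Kaggle GPU.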

Bonus Skills

(Not mandatory — but a strong plus)

  • Experience with RAG / vector DBs (Chroma, Qdrant, LanceDB)
  • Familiarity with vLLM, llama.cpp, GGUF
  • Worked on summarization, Q&A or document-AI projects
  • Knowledge of legal texts (Indian laws/case-law/statutes)
  • Open-source contributions or research work
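A RAG pipeline, as referenced in the lists above, is at heart: embed a corpus, embed the query, retrieve the most similar passages, and prepend them to the prompt. A toy dependency-free sketch that substitutes bag-of-words vectors for a real embedding model and a plain list for a vector DB such as Chroma or Qdrant (the corpus text is invented):

```python
# Toy RAG retrieval step: term-frequency "embeddings" plus cosine
# similarity stand in for a real embedding model and vector DB.
import math
from collections import Counter

def embed(text):
    """Crude embedding: term-frequency Counter over lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Return the k corpus passages most similar to the query."""
    q = embed(query)
    return sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)[:k]

corpus = [
    "Section 10 of the Contract Act deals with valid agreements",
    "The tokenizer splits text into subword units",
    "A valid contract requires free consent of the parties",
]
hits = retrieve("what makes a contract valid", corpus)
prompt = "Answer using only this context:\n" + "\n".join(hits)
print(hits[0])
```

A production pipeline swaps `embed` for a sentence-embedding model, the list for an indexed vector store, and feeds `prompt` to the LLM; the retrieve-then-generate structure is the same.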

What You Will Gain

  • Real-world training on LLM fine-tuning & legal AI
  • Exposure to production-grade AI pipelines
  • Direct mentorship from engineering leadership
  • Research + industry project portfolio
  • Letter of experience + potential full-time offer

Ideal Candidate

  • You experiment with models on weekends
  • You love pushing GPUs to their limits
  • You prefer research + implementation over theory alone
  • You want to build AI that matters — not just demos


Location: Remote

Stipend: 5K - 10K

VyTCDC
Posted by Gobinath Sundaram
Chennai, Bengaluru (Bangalore), Hyderabad, Mumbai, Pune, Noida
4 - 6 yrs
₹3L - ₹21L / yr
AWS Data Engineer
Amazon Web Services (AWS)
Python
PySpark
Databricks
+1 more

Key Responsibilities

  • Design and implement ETL/ELT pipelines using Databricks, PySpark, and AWS Glue
  • Develop and maintain scalable data architectures on AWS (S3, EMR, Lambda, Redshift, RDS)
  • Perform data wrangling, cleansing, and transformation using Python and SQL
  • Collaborate with data scientists to integrate Generative AI models into analytics workflows
  • Build dashboards and reports to visualize insights using tools like Power BI or Tableau
  • Ensure data quality, governance, and security across all data assets
  • Optimize performance of data pipelines and troubleshoot bottlenecks
  • Work closely with stakeholders to understand data requirements and deliver actionable insights
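The first bullets above follow the classic extract-transform-load shape. A minimal sketch of that shape in pure Python (standing in for PySpark; in the stack above, extract would read from S3 and load would write to Redshift, and the sample rows are invented):

```python
# Shape of an ETL pipeline: extract -> transform (cleanse) -> load.
# Each stage is a plain function so the structure is visible; a real
# job would express transform as PySpark DataFrame operations.

RAW = [
    {"order_id": "1", "amount": " 250.0 ", "country": "in"},
    {"order_id": "2", "amount": "bad", "country": "IN"},   # malformed row
    {"order_id": "3", "amount": "99.5", "country": "us"},
]

def extract(source):
    yield from source

def transform(rows):
    for row in rows:
        try:
            amount = float(row["amount"].strip())   # cleanse stray whitespace
        except ValueError:
            continue                                # drop malformed rows
        yield {"order_id": int(row["order_id"]),
               "amount": amount,
               "country": row["country"].upper()}   # normalise casing

def load(rows, sink):
    sink.extend(rows)
    return len(sink)

warehouse = []
loaded = load(transform(extract(RAW)), warehouse)
print(loaded)  # 2 rows survive cleansing
```

Using generators keeps the pipeline lazy, which mirrors how Spark only materialises data when an action runs.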

🧪 Required Skills

  • Cloud Platforms: AWS (S3, Lambda, Glue, EMR, Redshift)
  • Big Data: Databricks, Apache Spark, PySpark
  • Programming: Python, SQL
  • Data Engineering: ETL/ELT, Data Lakes, Data Warehousing
  • Analytics: Data Modeling, Visualization, BI Reporting
  • Gen AI Integration: OpenAI, Hugging Face, LangChain (preferred)
  • DevOps (Bonus): Git, Jenkins, Terraform, Docker

📚 Qualifications

  • Bachelor's or Master’s degree in Computer Science, Data Science, or related field
  • 3+ years of experience in data engineering or data analytics
  • Hands-on experience with Databricks, PySpark, and AWS
  • Familiarity with Generative AI tools and frameworks is a strong plus
  • Strong problem-solving and communication skills

🌟 Preferred Traits

  • Analytical mindset with attention to detail
  • Passion for data and emerging technologies
  • Ability to work independently and in cross-functional teams
  • Eagerness to learn and adapt in a fast-paced environment


Mphasis
Agency job
via Rigel Networks Pvt Ltd by Minakshi Soni
Bengaluru (Bangalore), Hyderabad
6 - 11 yrs
₹10L - ₹15L / yr
Software Testing (QA)
Test Automation (QA)
API Testing
UFT
skill iconJava
+11 more

Dear Candidate,

We are urgently hiring QA Automation Engineers and Test Leads at Hyderabad and Bangalore.

Exp: 6-10 yrs

Locations: Hyderabad, Bangalore


JD:

We are hiring Automation Testers with 6-10 years of automation testing experience using tools such as UFT and Selenium, along with Java, API testing, ETL testing, and others.

Must Haves:

  • Experience in the financial domain is a must
  • Extensive hands-on experience designing, implementing, and maintaining automation frameworks using Java, UFT, ETL, and Selenium tools and automation concepts
  • Experience with AWS concepts and framework design/testing
  • Experience in data analysis, data validation, data cleansing, data verification, and identifying data mismatches
  • Experience with Databricks, Python, Spark, Hive, Airflow, etc.
  • Experience in validating and analyzing Kubernetes log files
  • API testing experience
  • Backend testing skills with the ability to write SQL queries in Databricks and in Oracle databases
  • Experience working with globally distributed Agile project teams
  • Ability to work in a fast-paced, globally structured and team-based environment, as well as independently
  • Experience with test management tools like Jira
  • Good written and verbal communication skills

Good To have:

  • Business and finance knowledge desirable

Best Regards,

Minakshi Soni

Executive - Talent Acquisition (L2)

Worldwide Locations: USA | HK | IN 

Personal Care Product Manufacturing
Agency job
via Qrata by Rayal Rajan
Mumbai
3 - 8 yrs
₹12L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+9 more

DATA ENGINEER


Overview

They started with a singular belief - what is beautiful cannot and should not be defined in marketing meetings. It's defined by regular people like us, our sisters, our next-door neighbours, and the friends we make on the playground and in lecture halls. That's why we stand for people, proving everything we do.

From the inception of a product idea to testing the final formulations before launch, our consumers are a part of each and every process. They guide and inspire us by sharing their stories with us. They tell us not only about the product they need and the skincare issues they face but also the tales of their struggles, dreams and triumphs.

Skincare goes deeper than skin. It's a form of self-care for many. Wherever someone is on this journey, we want to cheer them on through the products we make, the content we create and the conversations we have. What we wish to build is more than a brand. We want to build a community that grows and glows together - cheering each other on, sharing knowledge, and ensuring people always have access to skincare that really works.

 

Job Description:

We are seeking a skilled and motivated Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, developing, and maintaining the data infrastructure and systems that enable efficient data collection, storage, processing, and analysis. You will collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to implement data pipelines and ensure the availability, reliability, and scalability of our data platform.


Responsibilities:

Design and implement scalable and robust data pipelines to collect, process, and store data from various sources.

Develop and maintain data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation.

Optimize and tune the performance of data systems to ensure efficient data processing and analysis.

Collaborate with data scientists and analysts to understand data requirements and implement solutions for data modeling and analysis.

Identify and resolve data quality issues, ensuring data accuracy, consistency, and completeness.

Implement and maintain data governance and security measures to protect sensitive data.

Monitor and troubleshoot data infrastructure, perform root cause analysis, and implement necessary fixes.

Stay up-to-date with emerging technologies and industry trends in data engineering and recommend their adoption when appropriate.
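Resolving data quality issues, as the responsibilities above require, is typically done with rule-based checks run before data is loaded. A minimal dependency-free sketch (the rule names, fields, and sample rows are illustrative):

```python
# Minimal data-quality gate: each rule maps a row to True (pass) or
# False (fail); the gate reports how often each rule failed, the kind
# of summary a pipeline would emit before loading data downstream.

RULES = {
    "id_present": lambda r: r.get("id") is not None,                     # completeness
    "amount_positive": lambda r: isinstance(r.get("amount"), (int, float))
                                 and r["amount"] >= 0,                   # accuracy
    "currency_iso": lambda r: r.get("currency") in {"INR", "USD", "EUR"},  # consistency
}

def quality_report(rows):
    failures = {name: 0 for name in RULES}
    for row in rows:
        for name, rule in RULES.items():
            if not rule(row):
                failures[name] += 1
    return failures

rows = [
    {"id": 1, "amount": 120.0, "currency": "INR"},
    {"id": None, "amount": -5, "currency": "INR"},
    {"id": 3, "amount": 40.0, "currency": "rupees"},
]
report = quality_report(rows)
print(report)  # {'id_present': 1, 'amount_positive': 1, 'currency_iso': 1}
```

In practice the same pattern scales up via frameworks like Great Expectations or Spark jobs, but the rule-per-dimension structure is the same.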


Qualifications:

Bachelor’s or higher degree in Computer Science, Information Systems, or a related field.

Proven experience as a Data Engineer or similar role, working with large-scale data processing and storage systems.

Strong programming skills in languages such as Python, Java, or Scala.

Experience with big data technologies and frameworks like Hadoop, Spark, or Kafka.

Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle).

Familiarity with cloud platforms like AWS, Azure, or GCP, and their data services (e.g., S3, Redshift, BigQuery).

Solid understanding of data modeling, data warehousing, and ETL principles.

Knowledge of data integration techniques and tools (e.g., Apache Nifi, Talend, or Informatica).

Strong problem-solving and analytical skills, with the ability to handle complex data challenges.

Excellent communication and collaboration skills to work effectively in a team environment.


Preferred Qualifications:

Advanced knowledge of distributed computing and parallel processing.

Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink).

Familiarity with machine learning concepts and frameworks (e.g., TensorFlow, PyTorch).

Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).

Experience with data visualization and reporting tools (e.g., Tableau, Power BI).

Certification in relevant technologies or data engineering disciplines.



Opcito Technologies
Posted by Aniket Bangale
Pune
5 - 8 yrs
₹10L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Ansible
+2 more
We are looking for a DevOps Engineer with 4+ years of experience. He/she should be self-motivated, a go-getter, an out-of-the-box thinker, and ready to work in a high-energy environment. He/she must demonstrate a high level of ownership, integrity, and leadership skills, and be flexible and adaptive with a strong desire to learn and excel.

Required Skills:

  • Strong experience working with tools and platforms like Helm charts, CircleCI, Jenkins, and/or Codefresh
  • Excellent knowledge of AWS offerings around Cloud and DevOps
  • Strong expertise in containerization platforms like Docker and container orchestration platforms like Kubernetes & Rancher
  • Familiarity with leading Infrastructure as Code tools such as Terraform, CloudFormation, etc.
  • Strong experience in Python, Shell Scripting, Ansible, and Terraform
  • Good command of monitoring tools like Datadog, Zabbix, ELK, Grafana, CloudWatch, Stackdriver, Prometheus, JFrog, Nagios, etc.
  • Experience with Linux/Unix systems administration.
Bullhorn Consultants
Posted by vidya venugopal
Bengaluru (Bangalore)
5 - 7 yrs
₹1L - ₹20L / yr
Ruby
Ruby on Rails (ROR)
Python
Automation
Linux/Unix

Job Requirements:       

  • Bachelor's degree (minimum) in Computer Science or Engineering
  • Minimum 5 years of experience working as a senior-level Software Engineer
  • Excellent programming and debugging skills in RoR and Python
  • Experience in web development and automation
  • Experience developing on Windows and Linux systems

Although not required, the following are a plus:

  • Experience working with build scripts, shell scripts, Makefiles
  • Experience with Jenkins and other CI/CD tools
  • Knowledge of RESTful web services and Docker

DataMetica
Posted by Nikita Aher
Pune, Hyderabad
7 - 12 yrs
₹12L - ₹33L / yr
Big Data
Hadoop
Spark
Apache Spark
Apache Hive
+3 more

Job description

Role: Lead Architecture (Spark, Scala, Big Data/Hadoop, Java)

Primary Location: India - Pune, Hyderabad

Experience: 7 - 12 Years

Management Level: 7

Joining Time: Immediate joiners are preferred


  • Attend requirements gathering workshops, estimation discussions, design meetings, and status review meetings
  • Experience in solution design and solution architecture for the data engineering model to build and implement Big Data projects on-premises and on cloud
  • Align architecture with business requirements and stabilize the developed solution
  • Ability to build prototypes to demonstrate the technical feasibility of your vision
  • Professional experience facilitating and leading solution design, architecture, and delivery planning activities for data-intensive and high-throughput platforms and applications
  • Ability to benchmark systems, analyze system bottlenecks, and propose solutions to eliminate them
  • Able to help programmers and project managers in the design, planning, and governance of implementing projects of any kind
  • Develop, construct, test, and maintain architectures, and run Sprints for development and rollout of functionalities
  • Data analysis and code development experience, ideally in Big Data: Spark, Hive, Hadoop, Java, Python, PySpark
  • Execute projects of various types, i.e. design, development, implementation, and migration of functional analytics models/business logic across architecture approaches
  • Work closely with Business Analysts to understand the core business problems and deliver efficient IT solutions for the product
  • Deploy sophisticated analytics programs using cloud applications


Perks and Benefits we Provide!


  • Working with Highly Technical and Passionate, mission-driven people
  • Subsidized Meals & Snacks
  • Flexible Schedule
  • Approachable leadership
  • Access to various learning tools and programs
  • Pet Friendly
  • Certification Reimbursement Policy
  • Check out more about us on our website below!

www.datametica.com

Forage AI
Posted by Khushboo Pahuja
Remote only
1 - 6 yrs
₹3L - ₹15L / yr
Automation
Kofax
Automation Anywhere
UiPath
Python
+1 more
Join Forage AI as a Data Automation Engineer! In this role, you will work with an amazingly passionate team of data scientists and engineers who are striving to achieve a pioneering vision.

Responsibilities:

Our web crawling team is very unique in the industry - while we have many "single-site" crawlers, our unique proposition and technical efforts are all geared towards building "generic" bots that can crawl and parse data from thousands of websites, all using the same code. This requires a whole different level of thinking, planning, and coding. Here's what you'll do -

  • Build, improve, and run our generic robots to extract data from both the web and documents, handling critical information among a wide variety of structures and formats without error.
  • Derive common patterns from semi-structured data, build code to handle them, and be able to deal with exceptions as well.
  • Be responsible for the live execution of our robots, managing turnaround times, exceptions, QA, and delivery, and building a bleeding-edge infrastructure to handle volume and scope.
Requirements:

  • Either hands-on experience with, or relevant certifications in, Robotic Process Automation tools (preferably Kofax Kapow, Automation Anywhere, UiPath, BluePrism), OR crawling purely in Python and relevant libraries.
  • 1-3 years of experience with building, running, and maintaining crawlers.
  • Successfully worked on projects that delivered 99+% accuracy despite a wide variety of formats.
  • Excellent SQL or MongoDB/ElasticSearch skills and familiarity with Regular Expressions, Data Mining, Cloud Infrastructure, etc.
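The "generic bots" approach described above usually reduces to site-agnostic extraction patterns plus explicit exception handling, so one bad page cannot sink a whole batch. A toy sketch in pure Python (the patterns, fields, and sample page are illustrative, not Forage AI's actual code):

```python
# Toy "generic" extractor: one set of regex patterns applied to any
# page's text, with per-field exception handling so a single failure
# is recorded rather than crashing the run.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d -]{8,}\d"),
    "year_founded": re.compile(r"founded in (\d{4})", re.IGNORECASE),
}

def extract_fields(text):
    record, errors = {}, []
    for field, pattern in PATTERNS.items():
        try:
            m = pattern.search(text)
            if m:
                # Prefer the capture group when the pattern defines one.
                record[field] = m.group(1) if pattern.groups else m.group(0)
            else:
                record[field] = None
        except Exception as exc:          # surface the error, keep the batch alive
            errors.append((field, exc))
    return record, errors

page = "Acme Corp, founded in 1999. Contact: info@acme.example, +91 98765 43210"
record, errors = extract_fields(page)
print(record["year_founded"])  # 1999
```

The same skeleton scales by growing `PATTERNS` per field and logging `errors` for QA review instead of writing one bespoke parser per site.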

Other Infrastructure Requirements

  • High-speed internet connectivity for video calls and efficient work.
  • Capable business-grade computer (e.g., modern processor, 8 GB+ of RAM, and no other obstacles to uninterrupted, efficient work).
  • Headphones with clear audio quality.
  • Stable power connection and backups in case of internet/power failure.
Utiliex
Posted by Aziz Adnan
Remote, Bengaluru (Bangalore), Mumbai, NCR (Delhi | Gurgaon | Noida), Ahmedabad, Chennai, Hyderabad, Nagpur
0 - 3 yrs
₹3L - ₹4L / yr
JavaScript
Python
PHP
Laravel
Ruby
+2 more

 

We are looking for a full stack developer to produce scalable software solutions.

 

As a full stack developer, you should be comfortable around both front-end and back-end coding languages, development frameworks and third-party libraries. You should also be a team player with a knack for visual design and utility.

 

If you’re also familiar with Agile methodologies, we’d like to meet you.

 

Responsibilities:

  • Developing front end website architecture.
  • Designing user interactions on web pages.
  • Developing back end website applications.
  • Creating servers and databases for functionality.
  • Ensuring cross-platform optimization for mobile phones.
  • Ensuring responsiveness of applications.
  • Working alongside graphic designers for web design features.
  • Seeing through a project from conception to finished product.
  • Designing and developing APIs.
  • Meeting both technical and consumer needs.
  • Staying abreast of developments in web applications and programming languages.

Requirements:

  • Degree in Computer Science or related field
  • Strong organizational and project management skills.
  • Proficiency with fundamental front-end languages such as HTML, CSS, and JavaScript.
  • Familiarity with JavaScript frameworks such as AngularJS, React, and Ember.
  • Proficiency with server-side languages such as Python, Ruby, Java, PHP, and .Net.
  • Familiarity with database technology such as MySQL, Oracle and MongoDB.
  • Excellent verbal communication skills.
  • Good problem solving skills.
  • Attention to detail.

 

Reporting directly to the Founder

 

The job requires a great deal of responsibility early on, but we're working on something exciting and there are lots of opportunities for growth and learning.

 

The job is full-time, remotely based, and with flexible hours.

GeakMinds Technologies Pvt Ltd
Posted by John Richardson
Chennai
1 - 5 yrs
₹1L - ₹6L / yr
Hadoop
Big Data
HDFS
Apache Sqoop
Apache Flume
+2 more
  • Looking for a Big Data Engineer with 3+ years of experience.
  • Hands-on experience with MapReduce-based platforms, like Pig, Spark, Shark.
  • Hands-on experience with data pipeline tools like Kafka, Storm, Spark Streaming.
  • Store and query data with Sqoop, Hive, MySQL, HBase, Cassandra, MongoDB, Drill, Phoenix, and Presto.
  • Hands-on experience in managing Big Data on a cluster with HDFS and MapReduce.
  • Handle streaming data in real time with Kafka, Flume, Spark Streaming, Flink, and Storm.
  • Experience with Azure cloud, Cognitive Services, Databricks is preferred.
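Handling streaming data in real time, as mentioned above, usually means windowed aggregation over an unbounded feed. A dependency-free sketch of a tumbling-window count, the core of what a Kafka + Spark Streaming or Flink job computes at scale (the event shape is invented):

```python
# Tumbling-window aggregation: bucket each event by its timestamp into
# fixed-size windows and count occurrences per key within each window.
from collections import defaultdict

WINDOW = 60  # window size in seconds

def window_counts(events):
    """events: iterable of (timestamp_seconds, key) pairs."""
    counts = defaultdict(int)
    for ts, key in events:
        bucket = ts - (ts % WINDOW)      # start time of the window
        counts[(bucket, key)] += 1
    return dict(counts)

events = [(3, "click"), (59, "click"), (61, "view"), (118, "click")]
print(window_counts(events))
# {(0, 'click'): 2, (60, 'view'): 1, (60, 'click'): 1}
```

A streaming engine adds the hard parts (unbounded input, late events, state checkpointing), but the per-window grouping logic is exactly this.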