

About Synorus
Synorus is building a next-generation ecosystem of AI-first products. Our flagship legal-AI platform LexVault is redefining legal research, drafting, knowledge retrieval, and case intelligence using domain-tuned LLMs, private RAG pipelines, and secure reasoning systems.
If you are passionate about AI, legaltech, and training high-performance models, this internship will put you on the front line of innovation.
Role Overview
We are seeking passionate AI/LLM Engineering Interns who can:
- Fine-tune LLMs for legal domain use-cases
- Train and experiment with open-source foundation models
- Work with large datasets efficiently
- Build RAG pipelines and text-processing frameworks
- Run model training workflows on Google Colab / Kaggle / Cloud GPUs
This is a hands-on engineering and research internship; you will work directly with the founders and senior technical leadership.
Key Responsibilities
- Fine-tune transformer-based models (Llama, Mistral, Gemma, etc.)
- Build and preprocess legal datasets at scale
- Develop efficient inference & training pipelines
- Evaluate models for accuracy, hallucinations, and trustworthiness
- Implement RAG architectures (vector DBs + embeddings); see the retrieval sketch after this list
- Work with GPU environments (Colab/Kaggle/Cloud)
- Contribute to model improvements, prompt engineering & safety tuning
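To make the RAG item concrete, here is a minimal retrieval sketch assuming Chroma and sentence-transformers (both named under bonus skills below); the collection name, sample chunks, and query are illustrative, not LexVault internals.

```python
# Minimal RAG retrieval sketch: embed chunks, store them in a vector DB,
# then ground an LLM prompt in the top-k nearest chunks.
# Chroma + sentence-transformers are assumptions; the data is illustrative.
import chromadb
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")
client = chromadb.Client()                     # in-memory instance for the sketch
col = client.create_collection("case_law")     # hypothetical collection

# Index: one embedding per chunk of source text.
chunks = [
    "Section 11 of the Indian Contract Act: who are competent to contract ...",
    "In Mohori Bibee v. Dharmodas Ghose, an agreement by a minor was held void ...",
]
col.add(ids=[str(i) for i in range(len(chunks))],
        documents=chunks,
        embeddings=embedder.encode(chunks).tolist())

# Retrieve: embed the query, pull nearest chunks, splice into the prompt.
query = "Can a minor enter into a valid contract?"
hits = col.query(query_embeddings=embedder.encode([query]).tolist(), n_results=2)
context = "\n".join(hits["documents"][0])
prompt = f"Answer using only this context:\n{context}\n\nQ: {query}\nA:"
print(prompt)  # this prompt would then go to the fine-tuned LLM
```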
Must-Have Skills
- Strong knowledge of Python & PyTorch
- Understanding of LLMs, Transformers, Tokenization
- Hands-on experience with HuggingFace Transformers
- Familiarity with LoRA/QLoRA, PEFT training (see the fine-tuning sketch after this list)
- Data wrangling: Pandas, NumPy, tokenizers
- Ability to handle multi-GB datasets efficiently
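For a flavour of the fine-tuning work, here is a minimal LoRA sketch on the HuggingFace Transformers + PEFT stack; the base model, dataset file, and hyperparameters are illustrative placeholders, not our production configuration.

```python
# Minimal LoRA fine-tuning sketch (HuggingFace Transformers + PEFT).
# Base model, dataset file, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"  # hypothetical base model choice
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters instead of updating all weights (the LoRA idea).
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all params

# A hypothetical instruction dataset of legal Q&A, one "text" field per record.
ds = load_dataset("json", data_files="legal_instructions.jsonl")["train"]
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
            remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=2,
                           gradient_accumulation_steps=8, num_train_epochs=1,
                           learning_rate=2e-4, fp16=True, logging_steps=20),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```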
Bonus Skills
(Not mandatory — but a strong plus)
- Experience with RAG / vector DBs (Chroma, Qdrant, LanceDB)
- Familiarity with vLLM, llama.cpp, GGUF
- Worked on summarization, Q&A or document-AI projects
- Knowledge of legal texts (Indian laws/case-law/statutes)
- Open-source contributions or research work
What You Will Gain
- Real-world training on LLM fine-tuning & legal AI
- Exposure to production-grade AI pipelines
- Direct mentorship from engineering leadership
- Research + industry project portfolio
- Letter of experience + potential full-time offer
Ideal Candidate
- You experiment with models on weekends
- You love pushing GPUs to their limits
- You prefer research + implementation over theory alone
- You want to build AI that matters — not just demos
Location: Remote
Stipend: 5K-10K
Key Responsibilities
- Design and implement ETL/ELT pipelines using Databricks, PySpark, and AWS Glue (see the PySpark sketch after this list)
- Develop and maintain scalable data architectures on AWS (S3, EMR, Lambda, Redshift, RDS)
- Perform data wrangling, cleansing, and transformation using Python and SQL
- Collaborate with data scientists to integrate Generative AI models into analytics workflows
- Build dashboards and reports to visualize insights using tools like Power BI or Tableau
- Ensure data quality, governance, and security across all data assets
- Optimize performance of data pipelines and troubleshoot bottlenecks
- Work closely with stakeholders to understand data requirements and deliver actionable insights
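For illustration, a minimal PySpark ELT step of the kind the first responsibility describes, assuming raw JSON events land in S3; bucket paths and column names are hypothetical.

```python
# Minimal ELT sketch in PySpark: read raw events from S3, deduplicate and
# clean them, then write curated, partitioned Parquet back out.
# Bucket names, paths, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_elt").getOrCreate()

raw = spark.read.json("s3://example-raw/orders/")  # hypothetical bucket

clean = (raw
         .dropDuplicates(["order_id"])                       # dedupe on key
         .filter(F.col("amount") > 0)                        # drop bad rows
         .withColumn("order_date", F.to_date("created_at"))  # derive partition
         .withColumn("amount_usd",
                     F.round(F.col("amount") / F.col("fx_rate"), 2)))

(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated/orders/"))  # hypothetical target
```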
🧪 Required Skills
- Cloud Platforms: AWS (S3, Lambda, Glue, EMR, Redshift)
- Big Data: Databricks, Apache Spark, PySpark
- Programming: Python, SQL
- Data Engineering: ETL/ELT, Data Lakes, Data Warehousing
- Analytics: Data Modeling, Visualization, BI Reporting
- Gen AI Integration: OpenAI, Hugging Face, LangChain (preferred)
- DevOps (Bonus): Git, Jenkins, Terraform, Docker
📚 Qualifications
- Bachelor's or Master’s degree in Computer Science, Data Science, or related field
- 3+ years of experience in data engineering or data analytics
- Hands-on experience with Databricks, PySpark, and AWS
- Familiarity with Generative AI tools and frameworks is a strong plus
- Strong problem-solving and communication skills
🌟 Preferred Traits
- Analytical mindset with attention to detail
- Passion for data and emerging technologies
- Ability to work independently and in cross-functional teams
- Eagerness to learn and adapt in a fast-paced environment
Dear Candidate,
We are urgently hiring QA Automation Engineers and Test Leads at Hyderabad and Bangalore.
Exp: 6-10 yrs
Locations: Hyderabad, Bangalore
JD:
We are hiring Automation Testers with 6-10 years of automation testing experience using tools such as UFT and Selenium (with Java), along with API testing, ETL testing, and others.
Must Haves:
· Experience in the financial domain is a must
· Extensive hands-on experience in designing, implementing, and maintaining automation frameworks using Java, UFT, Selenium, and ETL tools and automation concepts.
· Experience with AWS concepts and framework design/testing.
· Experience in data analysis, data validation, data cleansing, data verification, and identifying data mismatches (see the reconciliation sketch after this list).
· Experience with Databricks, Python, Spark, Hive, Airflow, etc.
· Experience in validating and analyzing Kubernetes log files.
· API testing experience
· Backend testing skills with ability to write SQL queries in Databricks and in Oracle databases
· Experience in working with globally distributed Agile project teams
· Ability to work in a fast-paced, globally structured and team-based environment, as well as independently
· Experience in test management tools like Jira
· Good written and verbal communication skills
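For illustration, a minimal data-reconciliation sketch of the validation work listed above. It uses in-memory SQLite as a stand-in engine so it runs anywhere; in practice the same checks would be issued against Databricks and Oracle connections.

```python
# Minimal source-vs-target reconciliation: compare row counts and sums to
# surface data mismatches. SQLite stands in for Databricks/Oracle so the
# sketch is self-contained; table and column names are illustrative.
import sqlite3

def fetch_one(conn, sql):
    """Run a single-value query and return that value."""
    return conn.execute(sql).fetchone()[0]

src, tgt = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
for db in (src, tgt):
    db.execute("CREATE TABLE trades (id INTEGER, notional REAL)")
src.executemany("INSERT INTO trades VALUES (?, ?)", [(1, 100.0), (2, 250.5)])
tgt.executemany("INSERT INTO trades VALUES (?, ?)", [(1, 100.0), (2, 250.5)])

checks = {
    "row_count":    "SELECT COUNT(*) FROM trades",
    "notional_sum": "SELECT ROUND(SUM(notional), 2) FROM trades",
}
for name, sql in checks.items():
    a, b = fetch_one(src, sql), fetch_one(tgt, sql)
    status = "OK" if a == b else f"MISMATCH ({a} vs {b})"
    print(f"{name}: {status}")
```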
Good To have:
- Business and finance knowledge desirable
Best Regards,
Minakshi Soni
Executive - Talent Acquisition (L2)
Worldwide Locations: USA | HK | IN
DATA ENGINEER
Overview
They started with a singular belief - what is beautiful cannot and should not be defined in marketing meetings. It's defined by regular people like us: our sisters, our next-door neighbours, and the friends we make on the playground and in lecture halls. That's why we stand for people-proving everything we do.

From the inception of a product idea to testing the final formulations before launch, our consumers are a part of each and every process. They guide and inspire us by sharing their stories with us. They tell us not only about the products they need and the skincare issues they face but also the tales of their struggles, dreams and triumphs.

Skincare goes deeper than skin. It's a form of self-care for many. Wherever someone is on this journey, we want to cheer them on through the products we make, the content we create and the conversations we have. What we wish to build is more than a brand. We want to build a community that grows and glows together - cheering each other on, sharing knowledge, and ensuring people always have access to skincare that really works.
Job Description:
We are seeking a skilled and motivated Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, developing, and maintaining the data infrastructure and systems that enable efficient data collection, storage, processing, and analysis. You will collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to implement data pipelines and ensure the availability, reliability, and scalability of our data platform.
Responsibilities:
Design and implement scalable and robust data pipelines to collect, process, and store data from various sources.
Develop and maintain data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation.
Optimize and tune the performance of data systems to ensure efficient data processing and analysis.
Collaborate with data scientists and analysts to understand data requirements and implement solutions for data modeling and analysis.
Identify and resolve data quality issues, ensuring data accuracy, consistency, and completeness (see the data-quality sketch after this list).
Implement and maintain data governance and security measures to protect sensitive data.
Monitor and troubleshoot data infrastructure, perform root cause analysis, and implement necessary fixes.
Stay up-to-date with emerging technologies and industry trends in data engineering and recommend their adoption when appropriate.
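A minimal sketch of the kind of data-quality gate the responsibilities mention, using pandas; the rules, column names, and sample rows are illustrative.

```python
# Minimal data-quality gate: check completeness, consistency, and accuracy
# rules before data moves downstream. Columns and rules are illustrative.
import pandas as pd

df = pd.DataFrame({"user_id": [1, 2, 2, None],
                   "email": ["a@x.com", "b@x.com", "b@x.com", "c@x.com"]})

failures = []
if df["user_id"].isna().any():
    failures.append("completeness: null user_id")      # missing keys
if df["user_id"].duplicated().any():
    failures.append("consistency: duplicate user_id")  # repeated keys
if not df["email"].str.contains("@").all():
    failures.append("accuracy: malformed email")       # bad values

print("FAILED: " + "; ".join(failures) if failures else "PASSED")
```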
Qualifications:
Bachelor’s or higher degree in Computer Science, Information Systems, or a related field.
Proven experience as a Data Engineer or similar role, working with large-scale data processing and storage systems.
Strong programming skills in languages such as Python, Java, or Scala.
Experience with big data technologies and frameworks like Hadoop, Spark, or Kafka.
Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle).
Familiarity with cloud platforms like AWS, Azure, or GCP, and their data services (e.g., S3, Redshift, BigQuery).
Solid understanding of data modeling, data warehousing, and ETL principles.
Knowledge of data integration techniques and tools (e.g., Apache Nifi, Talend, or Informatica).
Strong problem-solving and analytical skills, with the ability to handle complex data challenges.
Excellent communication and collaboration skills to work effectively in a team environment.
Preferred Qualifications:
Advanced knowledge of distributed computing and parallel processing.
Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink).
Familiarity with machine learning concepts and frameworks (e.g., TensorFlow, PyTorch).
Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).
Experience with data visualization and reporting tools (e.g., Tableau, Power BI).
Certification in relevant technologies or data engineering disciplines.
The candidate must demonstrate a high level of ownership, integrity, and leadership skills, and be flexible and adaptive with a strong desire to learn and excel.
Required Skills:
- Strong experience working with tools and platforms like Helm charts, CircleCI, Jenkins, and/or Codefresh
- Excellent knowledge of AWS offerings around Cloud and DevOps
- Strong expertise in containerization platforms like Docker and container orchestration platforms like Kubernetes & Rancher
- Should be familiar with leading Infrastructure as Code tools such as Terraform, CloudFormation, etc.
- Strong experience in Python, Shell Scripting, Ansible, and Terraform
- Good command over monitoring tools like Datadog, Zabbix, ELK, Grafana, CloudWatch, Stackdriver, Prometheus, JFrog, Nagios, etc.
- Experience with Linux/Unix systems administration.
Job Requirements:
· Bachelor’s degree (minimum) in Computer Science or Engineering.
· Minimum 5 years of experience working as a senior-level Software Engineer
· Excellent programming and debugging skills in RoR and Python
· Experience in web development and automation
· Experience developing on Windows and Linux systems
Although not required, the following are a plus:
· Experience working with Build scripts, Shell scripts, Makefiles
· Experience with Jenkins and other CI/CD tools
· Knowledge of RESTful web services and Docker
Job description
Role: Lead Architecture (Spark, Scala, Big Data/Hadoop, Java)
Primary Location: India - Pune, Hyderabad
Experience: 7-12 Years
Management Level: 7
Joining Time: Immediate joiners are preferred
- Attend requirements gathering workshops, estimation discussions, design meetings and status review meetings
- Experience in solution design and solution architecture for data engineering models, building and implementing Big Data projects on-premises and in the cloud
- Align architecture with business requirements and stabilize the developed solution
- Ability to build prototypes to demonstrate the technical feasibility of your vision
- Professional experience facilitating and leading solution design, architecture and delivery planning activities for data intensive and high throughput platforms and applications
- Ability to benchmark systems, analyse system bottlenecks, and propose solutions to eliminate them (see the plan-inspection sketch after this list)
- Able to help programmers and project managers in the design, planning and governance of implementing projects of any kind.
- Develop, construct, test and maintain architectures and run Sprints for development and rollout of functionalities
- Data analysis and code development experience, ideally in Big Data technologies: Spark, Hive, Hadoop, Java, Python, PySpark
- Execute projects of various types, i.e. design, development, implementation, and migration of functional analytics models/business logic across architecture approaches
- Work closely with Business Analysts to understand the core business problems and deliver efficient IT solutions of the product
- Deploy sophisticated analytics programs on any cloud platform.
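As one concrete flavour of bottleneck analysis, the sketch below builds a toy Spark aggregation and prints its physical plan; the Exchange step it reveals is the shuffle you would then try to minimise. The job and data sizes are illustrative.

```python
# Minimal plan-inspection sketch: build an aggregation and read its physical
# plan; the Exchange (shuffle) step is a common bottleneck to watch for.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("plan_check").getOrCreate()

df = spark.range(10_000_000).withColumn("bucket", F.col("id") % 100)
agg = df.groupBy("bucket").agg(F.count("*").alias("n"))

agg.explain()  # look for "Exchange hashpartitioning" in the printed plan
```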
Perks and Benefits we Provide!
- Working with Highly Technical and Passionate, mission-driven people
- Subsidized Meals & Snacks
- Flexible Schedule
- Approachable leadership
- Access to various learning tools and programs
- Pet Friendly
- Certification Reimbursement Policy
- Check out more about us on our website below!
www.datametica.com
We are an amazingly passionate team of data scientists and engineers who are striving to achieve a pioneering vision.
Responsibilities:
Our web crawling team is unique in the industry - while we have many “single-site” crawlers, our distinctive proposition and technical efforts are all geared towards building “generic” bots that can crawl and parse data from thousands of websites, all using the same code. This requires a whole different level of thinking, planning, and coding.
Here’s what you’ll do -
● Build, improve, and run our generic robots to extract data from both the web and
documents – handling critical information among a wide variety of structures and
formats without error.
● Derive common patterns from semi-structured data, build code to handle them, and be able to deal with exceptions as well (see the sketch after this list).
● Be responsible for the live execution of our robots, managing turnaround times,
exceptions, QA, and delivery, and building a bleeding-edge infrastructure to
handle volume and scope.
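As a small taste of the “derive common patterns” work, here is a sketch of one normaliser that handles several date formats with a single code path and routes anything unmatched to an exception path; the pattern set is illustrative, not a production inventory.

```python
# Minimal "generic pattern" sketch: one extractor covers several date formats
# across sites; unmatched text becomes an exception for manual QA.
import re
from datetime import datetime

DATE_PATTERNS = [
    (re.compile(r"\d{4}-\d{2}-\d{2}"), "%Y-%m-%d"),            # 2023-07-14
    (re.compile(r"\d{2}/\d{2}/\d{4}"), "%d/%m/%Y"),            # 14/07/2023
    (re.compile(r"[A-Z][a-z]+ \d{1,2}, \d{4}"), "%B %d, %Y"),  # July 14, 2023
]

def extract_date(text):
    """Return an ISO date from any supported format, else None."""
    for pattern, fmt in DATE_PATTERNS:
        m = pattern.search(text)
        if m:
            return datetime.strptime(m.group(0), fmt).date().isoformat()
    return None  # route to the exception/manual-QA queue

assert extract_date("Published on 2023-07-14 by staff") == "2023-07-14"
assert extract_date("Updated: March 3, 2024") == "2024-03-03"
assert extract_date("no date here") is None
```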
Requirements:
● Either hands-on experience with, or relevant certifications in, Robotic Process Automation tools (preferably Kofax Kapow, Automation Anywhere, UiPath, or Blue Prism), OR crawling purely in Python and relevant libraries.
● 1-3 years of experience with building, running, and maintaining crawlers.
● Successfully worked on projects that delivered 99+% accuracy despite a wide
variety of formats.
● Excellent SQL or MongoDB/ElasticSearch skills and familiarity with Regular
Expressions, Data Mining, Cloud Infrastructure, etc.
Other Infrastructure Requirements
● High-speed internet connectivity for video calls and efficient work.
● Capable business-grade computer (e.g., modern processor, 8 GB+ of RAM, and no other obstacles to uninterrupted, efficient work).
● Headphones with clear audio quality.
● Stable power connection and backups in case of internet/power failure.
We are looking for a full stack developer to produce scalable software solutions.
As a full stack developer, you should be comfortable with both front-end and back-end coding languages, development frameworks, and third-party libraries. You should also be a team player with a knack for visual design and utility.
If you’re also familiar with Agile methodologies, we’d like to meet you.
Responsibilities:
- Developing front end website architecture.
- Designing user interactions on web pages.
- Developing back end website applications.
- Creating servers and databases for functionality.
- Ensuring cross-platform optimization for mobile phones.
- Ensuring responsiveness of applications.
- Working alongside graphic designers for web design features.
- Seeing through a project from conception to finished product.
- Designing and developing APIs (see the minimal API sketch after this list).
- Meeting both technical and consumer needs.
- Staying abreast of developments in web applications and programming languages.
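To ground the API item in code, a minimal JSON API sketch assuming Flask, one plausible pick from the server-side languages listed below; the routes and the in-memory store are illustrative.

```python
# Minimal JSON API sketch (Flask assumed): one read and one create endpoint
# over an in-memory store. Routes and the data model are illustrative.
from flask import Flask, jsonify, request

app = Flask(__name__)
products = {1: {"id": 1, "name": "Sample", "price": 9.99}}  # toy store

@app.get("/api/products/<int:pid>")
def get_product(pid):
    product = products.get(pid)
    if product is None:
        return jsonify(error="not found"), 404
    return jsonify(product)

@app.post("/api/products")
def create_product():
    data = request.get_json()          # e.g. {"name": ..., "price": ...}
    pid = max(products) + 1
    products[pid] = {"id": pid, **data}
    return jsonify(products[pid]), 201  # 201 Created

if __name__ == "__main__":
    app.run(debug=True)  # development server only
```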
Requirements:
- Degree in Computer Science or related field
- Strong organizational and project management skills.
- Proficiency with fundamental front-end languages such as HTML, CSS, and JavaScript.
- Familiarity with JavaScript frameworks such as AngularJS, React, and Ember.
- Proficiency with server-side languages such as Python, Ruby, Java, PHP, and .NET.
- Familiarity with database technology such as MySQL, Oracle, and MongoDB.
- Excellent verbal communication skills.
- Good problem solving skills.
- Attention to detail.
Reporting directly to the Founder
The job requires a great deal of responsibility early on, but we're working on something exciting and there are plenty of opportunities for growth and learning.
The job is full-time, remotely based, and with flexible hours.











