
50+ Python Jobs in India

Apply to 50+ Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!

evoqins
Posted by Sethulakshmi Manoj
Kochi (Cochin)
2 - 4 yrs
₹3L - ₹7L / yr
Python
FastAPI
Amazon Web Services (AWS)
RESTful APIs
SQL
+2 more

Company Description

Evoqins is an end-to-end digital product development team focused on maximizing the scalability and reliability of global businesses. We specialize in a wide range of domains including fintech, banking, e-commerce, supply chain, enterprises, logistics, healthcare, and hospitality. With ISO 9001 certification and a 4.9-star Google rating, we are proud to have 120+ satisfied customers and an 87% customer retention rate. Our services include UX/UI design, mobile app development, web app development, custom software development, and team augmentation. 


Role Description

We are looking for a passionate Senior Backend Developer. You will be responsible for designing, developing, and maintaining scalable backend services and APIs using Python.

  • Role: Senior Backend Developer
  • Location: Kochi
  • Employment Type: Full Time

Key Responsibilities

  • Design, develop, and maintain scalable Python-based applications and APIs.
  • Build and optimize backend systems using FastAPI/Django/Flask.
  • Work with PostgreSQL/MySQL databases, ensuring efficiency and reliability.
  • Develop and maintain REST APIs (GraphQL experience is a plus).
  • Collaborate using Git-based version control.
  • Deploy and manage applications on AWS cloud infrastructure.
  • Ensure best practices in performance optimization, testing, and security.
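As an illustration of the database-efficiency work described above, here is a minimal sketch using Python's built-in sqlite3 as a stand-in for PostgreSQL/MySQL; the table, columns, and data are hypothetical, not from the posting:

```python
import sqlite3

# In-memory SQLite stand-in for the PostgreSQL/MySQL work described above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [("asha", 120.0), ("ravi", 80.5), ("asha", 42.0)],
)
# An index on the filter column keeps lookups efficient as the table grows.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

rows = conn.execute(
    "SELECT customer, SUM(total) FROM orders WHERE customer = ? GROUP BY customer",
    ("asha",),
).fetchall()
print(rows)  # [('asha', 162.0)]
```

The same parameterized-query and indexing habits carry over directly to PostgreSQL or MySQL in production.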

Required Skills & Experience

  • 2–5 years of hands-on Python development experience.
  • Experience in Fintech projects is an advantage.
  • Proven experience in FastAPI and REST API development.
  • Strong database skills with PostgreSQL (preferred) and MySQL.
  • Practical exposure to API integrations and third-party services.
  • Experience deploying and maintaining applications in production.
  • Familiarity with AWS cloud services.


NeoGenCode Technologies Pvt Ltd
Posted by Shivank Bhardwaj
Bengaluru (Bangalore)
3 - 5 yrs
₹14L - ₹22L / yr
Python
Artificial Intelligence (AI)
Prompt engineering
JavaScript
Open-source LLMs

We’re partnering with a fast-growing AI-first enterprise transforming how organizations handle documents, decisioning, and workflows — starting with BFSI and healthcare. Their platforms are redefining document intelligence, credit analysis, and underwriting automation using cutting-edge AI and human-in-the-loop systems.


As an AI Engineer, you’ll work with a high-caliber engineering team building next-gen AI systems that:

  • Power robust APIs and platforms used by underwriters, analysts, and financial institutions.
  • Build and integrate GenAI-powered agents.
  • Enable “human-in-the-loop” workflows for high-assurance decisions in real-world conditions.


Key Responsibilities

  • Build and optimize ML/DL models for document understanding, classification, and summarization.
  • Apply LLMs and RAG techniques for validation, search, and question-answering tasks.
  • Design and maintain data pipelines for structured and unstructured inputs (PDFs, OCR text, JSON, etc.).
  • Package and deploy models as REST APIs or microservices in production environments.
  • Collaborate with engineering teams to integrate models into existing products and workflows.
  • Monitor, retrain, and fine-tune models to ensure reliability and performance.
  • Stay updated on emerging AI frameworks, architectures, and open-source tools; propose system improvements.
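The RAG retrieval step above can be sketched in a few lines: pick the stored document whose embedding is closest (by cosine similarity) to the query embedding, then pass it to the LLM as context. The vectors below are hand-written stand-ins for real embeddings, and the document names are invented:

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical 3-d "embeddings"; a real pipeline uses a trained embedding model.
docs = {
    "loan agreement": [0.9, 0.1, 0.0],
    "medical claim":  [0.1, 0.8, 0.2],
    "credit report":  [0.8, 0.2, 0.1],
}
query = [0.9, 0.1, 0.0]

# Retrieve the closest document, then hand it to the LLM as grounding context.
best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)  # loan agreement
```

In production this top-k lookup is what a vector database (FAISS, Pinecone, Weaviate) performs at scale.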


Required Skills & Experience

  • 2–5 years of hands-on experience in AI/ML model development, fine-tuning, and deployment.
  • Strong Python proficiency (NumPy, Pandas, scikit-learn, PyTorch, TensorFlow).
  • Solid understanding of transformers, embeddings, and NLP pipelines.
  • Experience with LLMs (OpenAI, Claude, Gemini, etc.) and frameworks like LangChain.
  • Exposure to OCR, document parsing, and unstructured text analytics.
  • Familiarity with FastAPI/Flask, Docker, and cloud environments (AWS/GCP/Azure).
  • Working knowledge of CI/CD pipelines, model validation, and evaluation workflows.
  • Strong problem-solving skills, structured thinking, and production-quality coding practices.


Bonus Skills

  • Domain exposure to Fintech/BFSI or Healthcare (e.g., credit underwriting, claims automation, KYC).
  • Experience with vector databases (FAISS, Pinecone, Weaviate) and semantic search.
  • Knowledge of MLOps tools (MLflow, Airflow, Kubeflow).
  • Experience integrating GenAI into SaaS or enterprise products.


Education

  • B.Tech / M.Tech / MS in Computer Science, Data Science, or related field.
  • (Equivalent hands-on experience will also be considered.)


Why Join

  • Build AI systems from prototype to production for live enterprise use.
  • Work with a senior AI and product team that values ownership, innovation, and impact.
  • Exposure to LLMs, GenAI, and Document AI in large-scale enterprise environments.
  • Competitive compensation, career growth, and a flexible work culture.


Jaipur
6 - 12 yrs
₹5L - ₹17L / yr
Data Operations
Data Visualization
Data collection
Data validation
Data integration
+18 more

About the Role

We are seeking an experienced Data Operations Lead to oversee and manage the data operations team responsible for data analysis, query development, and data-driven initiatives. This role plays a key part in ensuring the effective management, organization, and delivery of high-quality data across projects while driving process efficiency, data accuracy, and business impact.


Key Responsibilities

  • Lead and mentor the Data Operations team focused on data collection, enrichment, validation, and delivery.
  • Define and monitor data quality metrics, identify discrepancies, and implement process improvements.
  • Collaborate with Engineering for data integration, automation, and scalability initiatives.
  • Partner with Product and Business teams to ensure data alignment with strategic objectives.
  • Manage vendor relationships for external data sources and enrichment platforms.
  • Promote automation using tools such as SQL, Python, and BI platforms.


Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Data Analytics, Information Systems, or related field.
  • 6–12 years of experience in data operations, management, or analytics, with at least 4 years in a leadership capacity.
  • Strong understanding of data governance, ETL processes, and quality control frameworks.
  • Proficiency with SQL, Excel/Google Sheets, and data visualization tools.
  • Exposure to automation and scripting (Python preferred).
  • Excellent communication, leadership, and project management skills.
  • Proven ability to manage teams and maintain high data quality under tight deadlines.


Preferred Skills

  • Experience in SaaS, B2B data, or lead intelligence environments.
  • Familiarity with GDPR, CCPA, and data privacy compliance.
  • Ability to work effectively in cross-functional and global teams.


About the Company

We are a leading revenue intelligence platform that combines advanced automation with a dedicated research team to achieve industry-leading data accuracy. Our platform offers millions of verified contact and company records, continuously re-verified to ensure reliability. With a commitment to quality, scalability, and exceptional customer experience, we empower organizations to make smarter, data-driven decisions.

We pride ourselves on a diverse, growth-oriented workplace that values continuous learning, collaboration, and excellence. Our team members enjoy competitive benefits, including paid leaves, bonuses, incentives, medical coverage, and training opportunities.

Borderless Access

Posted by Reshika Mendiratta
Bengaluru (Bangalore)
13 yrs+
Up to ₹35L / yr (varies)
Python
Java
NodeJS (Node.js)
Spring Boot
JavaScript
+13 more

About Borderless Access

Borderless Access is a company that believes in fostering a culture of innovation and collaboration to build and deliver digital-first products for market research methodologies. This enables our customers to stay ahead of their competition.

We are committed to becoming the global leader in providing innovative digital offerings for consumers backed by advanced analytics, AI, ML, and cutting-edge technological capabilities.

Our Borderless Product Innovation and Operations team is dedicated to creating a top-tier market research platform that will drive our organization's growth. To achieve this, we're embracing modern technologies and a cutting-edge tech stack for faster, higher-quality product development.

The Product Development team is the core of our strategy, fostering collaboration and efficiency. If you're passionate about innovation and eager to contribute to our rapidly evolving market research domain, we invite you to join our team.


Key Responsibilities

  • Lead, mentor, and grow a cross-functional team of engineers.
  • Foster a culture of collaboration, accountability, and continuous learning.
  • Oversee the design and development of robust platform architecture with a focus on scalability, security, and maintainability.
  • Establish and enforce engineering best practices including code reviews, unit testing, and CI/CD pipelines.
  • Promote clean, maintainable, and well-documented code across the team.
  • Lead architectural discussions and technical decision-making, with clear and concise documentation for software components and systems.
  • Collaborate with Product, Design, and other stakeholders to define and prioritize platform features.
  • Track and report on key performance indicators (KPIs) such as velocity, code quality, deployment frequency, and incident response times.
  • Ensure timely delivery of high-quality software aligned with business goals.
  • Work closely with DevOps to ensure platform reliability, scalability, and observability.
  • Conduct regular 1:1s, performance reviews, and career development planning.
  • Conduct code reviews and provide constructive feedback to ensure code quality and maintainability.
  • Participate in the entire software development lifecycle, from requirements gathering to deployment and maintenance.


Added Responsibilities

  • Defining and adhering to the development process.
  • Taking part in regular external audits and maintaining artifacts.
  • Identify opportunities for automation to reduce repetitive tasks.
  • Mentor and coach team members across teams.
  • Continuously optimize application performance and scalability.
  • Collaborate with the Marketing team to understand different user journeys.


Growth and Development

The following are some of the growth and development activities that you can look forward to at Borderless Access as an Engineering Manager:

  • Develop leadership skills – Enhance your leadership abilities through workshops or coaching from Senior Leadership and Executive Leadership.
  • Foster innovation – Become part of a culture of innovation and experimentation within the product development and operations team.
  • Drive business objectives – Become part of defining and taking actions to meet the business objectives.


About You

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 8+ years of experience in software development.
  • Experience with microservices architecture and container orchestration.
  • Excellent problem-solving and analytical skills.
  • Strong communication and collaboration skills.
  • Solid understanding of data structures, algorithms, and software design patterns.
  • Solid understanding of enterprise system architecture patterns.
  • Experience in managing a small to medium-sized team with varied experiences.
  • Strong proficiency in back-end development, including programming languages like Python, Java, or Node.js, and frameworks like Spring or Express.
  • Strong proficiency in front-end development, including HTML, CSS, JavaScript, and popular frameworks like React or Angular.
  • Experience with databases (e.g., MySQL, PostgreSQL, MongoDB).
  • Experience with cloud platforms (AWS, Azure, or GCP; Azure preferred).
  • Knowledge of containerization technologies (Docker and Kubernetes).


Agentic AI Platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Gurugram
4 - 7 yrs
₹25L - ₹50L / yr
Microservices
API
Cloud Computing
skill iconJava
skill iconPython
+18 more

ROLES AND RESPONSIBILITIES:

We are looking for a Software Engineering Manager to lead a high-performing team focused on building scalable, secure, and intelligent enterprise software. The ideal candidate is a strong technologist who enjoys coding, mentoring, and driving high-quality software delivery in a fast-paced startup environment.


KEY RESPONSIBILITIES:

  • Lead and mentor a team of software engineers across backend, frontend, and integration areas.
  • Drive architectural design, technical reviews, and ensure scalability and reliability.
  • Collaborate with Product, Design, and DevOps teams to deliver high-quality releases on time.
  • Establish best practices in agile development, testing automation, and CI/CD pipelines.
  • Build reusable frameworks for low-code app development and AI-driven workflows.
  • Hire, coach, and develop engineers to strengthen technical capabilities and team culture.


IDEAL CANDIDATE:

  • B.Tech/B.E. in Computer Science from a Tier-1 Engineering College.
  • 3+ years of professional experience as a software engineer, with at least 1 year mentoring or managing engineers.
  • Strong expertise in backend development (Java / Node.js / Go / Python) and familiarity with frontend frameworks (React / Angular / Vue).
  • Solid understanding of microservices, APIs, and cloud architectures (AWS/GCP/Azure).
  • Experience with Docker, Kubernetes, and CI/CD pipelines.
  • Excellent communication and problem-solving skills.



PREFERRED QUALIFICATIONS:

  • Experience building or scaling SaaS or platform-based products.
  • Exposure to GenAI/LLM, data pipelines, or workflow automation tools.
  • Prior experience in a startup or high-growth product environment.
Synorus
Posted by Synorus Admin
Remote only
0 - 2 yrs
₹0.2L - ₹1.2L / yr
Python
Large Language Models (LLM)
Retrieval Augmented Generation (RAG)
Colab
Large Language Models (LLM) tuning
+10 more

About Synorus

Synorus is building a next-generation ecosystem of AI-first products. Our flagship legal-AI platform LexVault is redefining legal research, drafting, knowledge retrieval, and case intelligence using domain-tuned LLMs, private RAG pipelines, and secure reasoning systems.

If you are passionate about AI, legaltech, and training high-performance models — this internship will put you on the front line of innovation.


Role Overview

We are seeking passionate AI/LLM Engineering Interns who can:

  • Fine-tune LLMs for legal domain use-cases
  • Train and experiment with open-source foundation models
  • Work with large datasets efficiently
  • Build RAG pipelines and text-processing frameworks
  • Run model training workflows on Google Colab / Kaggle / Cloud GPUs

This is a hands-on engineering and research internship — you will work directly with senior founders & technical leadership.

Key Responsibilities

  • Fine-tune transformer-based models (Llama, Mistral, Gemma, etc.)
  • Build and preprocess legal datasets at scale
  • Develop efficient inference & training pipelines
  • Evaluate models for accuracy, hallucinations, and trustworthiness
  • Implement RAG architectures (vector DBs + embeddings)
  • Work with GPU environments (Colab/Kaggle/Cloud)
  • Contribute to model improvements, prompt engineering & safety tuning
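The dataset-preprocessing work above typically starts by chunking documents into overlapping windows before embedding them. Here is a toy character-based sketch (production pipelines chunk by tokens, and the sample sentence is invented for illustration):

```python
def chunk_text(text, size=30, overlap=5):
    """Split text into overlapping character windows.

    Toy example: real pipelines chunk by tokens, not characters.
    """
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    step = size - overlap
    # Each window shares `overlap` characters with the previous one,
    # so context isn't lost at chunk boundaries.
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

sample = "Section 10 of the Contract Act deals with what agreements are contracts."
chunks = chunk_text(sample, size=30, overlap=5)
print(len(chunks), chunks[0])
```

The overlap keeps a clause that straddles a boundary retrievable from at least one chunk.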

Must-Have Skills

  • Strong knowledge of Python & PyTorch
  • Understanding of LLMs, Transformers, and tokenization
  • Hands-on experience with HuggingFace Transformers
  • Familiarity with LoRA/QLoRA, PEFT training
  • Data wrangling: Pandas, NumPy, tokenizers
  • Ability to handle multi-GB datasets efficiently

Bonus Skills

(Not mandatory — but a strong plus)

  • Experience with RAG / vector DBs (Chroma, Qdrant, LanceDB)
  • Familiarity with vLLM, llama.cpp, and GGUF
  • Worked on summarization, Q&A or document-AI projects
  • Knowledge of legal texts (Indian laws/case-law/statutes)
  • Open-source contributions or research work

What You Will Gain

  • Real-world training on LLM fine-tuning & legal AI
  • Exposure to production-grade AI pipelines
  • Direct mentorship from engineering leadership
  • Research + industry project portfolio
  • Letter of experience + potential full-time offer

Ideal Candidate

  • You experiment with models on weekends
  • You love pushing GPUs to their limits
  • You prefer research + implementation over theory alone
  • You want to build AI that matters — not just demos


Location - Remote

Stipend - 5K - 10K

appscrip

Posted by Nilam Surti
Bengaluru (Bangalore)
0 - 0 yrs
₹1.2L - ₹4L / yr
Python
MongoDB
FastAPI
Artificial Intelligence (AI)
Go Programming (Golang)

The requirements are as follows:

1) Familiarity with the Django REST API framework.

2) Experience with the FastAPI framework is a plus.

3) Strong grasp of basic Python programming concepts (we do ask a lot of questions on this in our interviews :) ).

4) Experience with databases like MongoDB, Postgres, Elasticsearch, or Redis is a plus.

5) Experience with any ML library is a plus.

6) Familiarity with Git, writing unit tests for all code, and CI/CD concepts is a plus.

7) Familiarity with basic code patterns like MVC.

8) Grasp of basic data structures.


You can contact me on nine three one six one two zero one three two

Intensity Global Technologies

Posted by Bisman Gill
Delhi
3 yrs+
Up to ₹10L / yr (varies)
Amazon Web Services (AWS)
Microsoft Windows Azure
Machine Learning (ML)
Python
PyTorch
+1 more

Job Summary:

We are seeking a skilled and forward-thinking Cloud AI Professional to join our technology team. The ideal candidate will have expertise in designing, deploying, and managing artificial intelligence and machine learning solutions in cloud environments (AWS, Azure, or Google Cloud). You will work at the intersection of cloud computing and AI, helping to build scalable, secure, and high-performance AI-driven applications and services.


Key Responsibilities:

  • Design, develop, and deploy AI/ML models in cloud environments (AWS, GCP, Azure).
  • Build and manage end-to-end ML pipelines using cloud-native tools (e.g., SageMaker, Vertex AI, Azure ML).
  • Collaborate with data scientists, engineers, and stakeholders to define AI use cases and deliver solutions.
  • Automate model training, testing, and deployment using MLOps practices.
  • Optimize performance and cost of AI/ML workloads in the cloud.
  • Ensure security, compliance, and scalability of deployed AI services.
  • Monitor model performance in production and retrain models as needed.
  • Stay current with new developments in AI/ML and cloud technologies.


Required Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
  • 3+ years of experience in AI/ML and cloud computing.
  • Hands-on experience with cloud platforms (AWS, GCP, or Azure).
  • Proficient in Python, TensorFlow, PyTorch, or similar frameworks.
  • Strong understanding of MLOps tools and CI/CD for machine learning.
  • Experience with containerization (Docker, Kubernetes).
  • Familiarity with cloud-native data services (e.g., BigQuery, S3, Cosmos DB).


Preferred Qualifications:

  • Certifications in Cloud (e.g., AWS Certified Machine Learning, Google Cloud Professional ML Engineer).
  • Experience with generative AI, LLMs, or real-time inferencing.
  • Knowledge of data governance and ethical AI practices.
  • Experience with REST APIs and microservices architecture.


Soft Skills:

  • Strong problem-solving and analytical skills.
  • Excellent communication and collaboration abilities.
  • Ability to work in a fast-paced, agile environment.
IAI solution
Posted by Anajli Kanojiya
Bengaluru (Bangalore)
5 - 8 yrs
₹30L - ₹35L / yr
Python
React.js
Docker
MongoDB

Location: Bengaluru, India

Experience: 5 to 8 Years

Employment Type: Full-time


About the Role

We’re looking for an experienced Full Stack Developer with strong expertise across modern frontend frameworks, scalable backend systems, and cloud-native DevOps environments.

The ideal candidate will play a key role in designing, developing, and deploying end-to-end solutions that power high-performance, data-driven applications.


Key Responsibilities

  • Design, develop, and maintain scalable frontend applications using React.js and Next.js.
  • Build robust backend services and APIs using FastAPI (Python), Node.js, or Java.
  • Implement database design, queries, and optimization using PostgreSQL, MongoDB, and Redis.
  • Develop, test, and deploy cloud-native solutions on Azure (preferred) or AWS.
  • Manage containerized environments using Docker and Kubernetes.
  • Automate deployments and workflows with Terraform, GitHub Actions, or Azure DevOps.
  • Ensure application security, performance, and reliability across the stack.
  • Collaborate closely with cross-functional teams (designers, product managers, data engineers) to deliver quality software.


Required Skills

Frontend: Next.js, React.js, TypeScript, HTML, CSS, Tailwind (preferred)

Backend: Python (FastAPI), Node.js, Java, REST APIs, GraphQL (optional)

Databases: PostgreSQL, MongoDB, Redis

Cloud & DevOps: Azure (preferred), AWS, Docker, Kubernetes, Terraform

CI/CD: GitHub Actions, Azure DevOps, Jenkins (nice to have)

Version Control: Git, GitHub/GitLab


Qualifications

  • Bachelor’s degree in Computer Science, Engineering, or related field.
  • 4+ years of hands-on experience in full-stack development.
  • Strong problem-solving skills and ability to architect scalable solutions.
  • Familiarity with Agile development and code review processes.
  • Excellent communication and collaboration abilities.


Nice to Have

  • Experience with microservices architecture.
  • Exposure to API security and authentication (OAuth2, JWT).
  • Experience in setting up observability tools (Grafana, Prometheus, etc.).


Compensation

Competitive salary based on experience and technical proficiency.

Metron Security Private Limited
Posted by Prathamesh Shinde
Bengaluru (Bangalore), Pune
2 - 5 yrs
₹4L - ₹10L / yr
Python

Job Description:


We are looking for a skilled Backend Developer with 2–5 years of experience in software development, specializing in Python and/or Golang. If you have strong programming skills, enjoy solving problems, and want to work on secure and scalable systems, we'd love to hear from you!


Location - Pune, Baner.

Interview Rounds - In Office


Key Responsibilities:

  • Design, build, and maintain efficient, reusable, and reliable backend services using Python and/or Golang
  • Develop and maintain clean and scalable code following best practices
  • Apply Object-Oriented Programming (OOP) concepts in real-world development
  • Collaborate with front-end developers, QA, and other team members to deliver high-quality features
  • Debug, optimize, and improve existing systems and codebase
  • Participate in code reviews and team discussions
  • Work in an Agile/Scrum development environment
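The OOP emphasis above can be illustrated with a short sketch of abstraction and polymorphism; the class and method names are invented for the example:

```python
from abc import ABC, abstractmethod

class Notifier(ABC):
    """Abstraction: callers depend on this interface, not a concrete channel."""
    @abstractmethod
    def send(self, message: str) -> str: ...

class EmailNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"email: {message}"

class SmsNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"sms: {message}"

def broadcast(notifiers, message):
    # Polymorphism: each notifier decides how to deliver the message.
    return [n.send(message) for n in notifiers]

results = broadcast([EmailNotifier(), SmsNotifier()], "build passed")
print(results)  # ['email: build passed', 'sms: build passed']
```

Adding a new channel means adding a subclass, not editing `broadcast` — the kind of open-for-extension design the role's OOP questions tend to probe.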


Required Skills:

  • Strong experience in Python or Golang (working knowledge of both is a plus)
  • Good understanding of OOP principles
  • Familiarity with RESTful APIs and back-end frameworks
  • Experience with databases (SQL or NoSQL)
  • Excellent problem-solving and debugging skills
  • Strong communication and teamwork abilities


Good to Have:

  • Prior experience in the security industry
  • Familiarity with cloud platforms like AWS, Azure, or GCP
  • Knowledge of Docker, Kubernetes, or CI/CD tools

Vola Finance

Posted by Reshika Mendiratta
Bengaluru (Bangalore)
3 yrs+
Up to ₹14L / yr (varies)
Python
SQL
Statistical Analysis
A/B Testing
MS-Excel
+4 more

Business Analyst

Domain: Product / Fintech / Credit Cards


Mandatory Technical Skill Set

  • Previous experience in a product-based company is mandatory
  • Experience in churn analysis and strategy building for subscription management
  • Experience building growth strategies for BNPL or credit cards
  • ML model development experience is a plus
  • Python
  • Statistical analysis and A/B testing
  • Excel
  • SQL
  • Visualization tools such as Redash / Grafana / Tableau / Power BI
  • Bitbucket, GitHub, and other versioning tools


Roles and Responsibilities

  • Work on product integrations, data collection, and data sanity checks
  • Improve product features for sustainable churn management
  • Cohort analysis and strategy building for credit card usage growth
  • Conduct A/B testing for better subscription conversion and offers
  • Monitor key business metrics
  • Track changes and perform impact analysis
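The A/B-testing expectation above can be sketched with the standard two-proportion z-test, using only the Python standard library. The conversion counts below are made up for illustration:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic and two-sided p-value for comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B improves subscription conversion 6% -> 8%.
z, p = two_proportion_z(conv_a=120, n_a=2000, conv_b=160, n_b=2000)
print(round(z, 2), round(p, 4))  # z ≈ 2.48, p ≈ 0.013
```

With p below 0.05 the uplift would typically be treated as significant, subject to the usual caveats about sample-size planning and peeking.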
Vola Finance

Posted by Reshika Mendiratta
Bengaluru (Bangalore)
4 yrs+
Up to ₹20L / yr (varies)
Python
FastAPI
RESTful APIs
GraphQL
Amazon Web Services (AWS)
+7 more

Python Backend Developer

We are seeking a skilled Python Backend Developer responsible for managing the interchange of data between the server and the users. Your primary focus will be on developing server-side logic to ensure high performance and responsiveness to requests from the front end. You will also be responsible for integrating front-end elements built by your coworkers into the application, as well as managing AWS resources.


Roles & Responsibilities

  • Develop and maintain scalable, secure, and robust backend services using Python
  • Design and implement RESTful APIs and/or GraphQL endpoints
  • Integrate user-facing elements developed by front-end developers with server-side logic
  • Write reusable, testable, and efficient code
  • Optimize components for maximum performance and scalability
  • Collaborate with front-end developers, DevOps engineers, and other team members
  • Troubleshoot and debug applications
  • Implement data storage solutions (e.g., PostgreSQL, MySQL, MongoDB)
  • Ensure security and data protection

Mandatory Technical Skill Set

  • Implementing optimal data storage (e.g., PostgreSQL, MySQL, MongoDB, S3)
  • Python backend development experience
  • Experience designing, implementing, and maintaining CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions
  • Experience implementing and managing containerization platforms such as Docker and orchestration tools like Kubernetes
  • Previous hands-on experience with AWS services: EC2, S3, ECS, EMR, VPC, Subnets, SQS, CloudWatch, CloudTrail, Lambda, SageMaker, RDS, SES, SNS, IAM, Backup, and AWS WAF
  • SQL
Remote only
2 - 4 yrs
₹4L - ₹8L / yr
Python
JSON
LLMs
OOPs
Java
+4 more

Role Overview

We are seeking a Junior Developer with 1–3 years' experience and strong foundations in Python, databases, and AI technologies. The ideal candidate will support the development of AI-powered solutions, focusing on LLM integration, prompt engineering, and database-driven workflows. This is a hands-on role with opportunities to learn and grow into advanced AI engineering responsibilities.

Key Responsibilities

  • Develop, test, and maintain Python-based applications and APIs.
  • Design and optimize prompts for Large Language Models (LLMs) to improve accuracy and performance.
  • Work with JSON-based data structures for request/response handling.
  • Integrate and manage PostgreSQL (pgSQL) databases, including writing queries and handling data pipelines.
  • Collaborate with the product and AI teams to implement new features.
  • Debug, troubleshoot, and optimize performance of applications and workflows.
  • Stay updated on advancements in LLMs, AI frameworks, and generative AI tools.
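The JSON request/response handling above might look like this minimal sketch; the field names (`prompt`, `ok`, `reply`) are invented for the example, and the LLM call is stubbed out:

```python
import json

def handle_request(raw: str) -> str:
    """Parse a JSON request, validate fields, and return a JSON response."""
    try:
        payload = json.loads(raw)
        prompt = payload["prompt"]
    except (json.JSONDecodeError, KeyError) as exc:
        # Malformed input becomes a structured error, never an unhandled crash.
        return json.dumps({"ok": False, "error": str(exc)})
    # A real service would call an LLM here; we echo a canned reply instead.
    return json.dumps({"ok": True, "reply": f"received {len(prompt)} chars"})

response = json.loads(handle_request('{"prompt": "summarise this permit"}'))
print(response)
```

Keeping the error path inside the same JSON envelope makes the API predictable for callers.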

Required Skills & Qualifications

  • Strong knowledge of Python (scripting, APIs, data handling).
  • Basic understanding of Large Language Models (LLMs) and prompt engineering techniques.
  • Experience with JSON data parsing and transformations.
  • Familiarity with PostgreSQL or other relational databases.
  • Ability to write clean, maintainable, and well-documented code.
  • Strong problem-solving skills and eagerness to learn.
  • Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).

Nice-to-Have (Preferred)

  • Exposure to AI/ML frameworks (e.g., LangChain, Hugging Face, OpenAI APIs).
  • Experience working in startups or fast-paced environments.
  • Familiarity with version control (Git/GitHub) and cloud platforms (AWS, GCP, or Azure).

What We Offer

  • Opportunity to work on cutting-edge AI applications in permitting & compliance.
  • Collaborative, growth-focused, and innovation-driven work culture.
  • Mentorship and learning opportunities in AI/LLM development.
  • Competitive compensation with performance-based growth.


Proximity Works

Posted by Eman Khan
Remote only
5 - 10 yrs
₹30L - ₹60L / yr
Python
Data Science
pandas
Scikit-Learn
TensorFlow
+9 more

We’re seeking a highly skilled, execution-focused Senior Data Scientist with a minimum of 5 years of experience. This role demands hands-on expertise in building, deploying, and optimizing machine learning models at scale, while working with big data technologies and modern cloud platforms. You will be responsible for driving data-driven solutions from experimentation to production, leveraging advanced tools and frameworks across Python, SQL, Spark, and AWS. The role requires strong technical depth, problem-solving ability, and ownership in delivering business impact through data science.


Responsibilities

  • Design, build, and deploy scalable machine learning models into production systems.
  • Develop advanced analytics and predictive models using Python, SQL, and popular ML/DL frameworks (Pandas, Scikit-learn, TensorFlow, PyTorch).
  • Leverage Databricks, Apache Spark, and Hadoop for large-scale data processing and model training.
  • Implement workflows and pipelines using Airflow and AWS EMR for automation and orchestration.
  • Collaborate with engineering teams to integrate models into cloud-based applications on AWS.
  • Optimize query performance, storage usage, and data pipelines for efficiency.
  • Conduct end-to-end experiments, including data preprocessing, feature engineering, model training, validation, and deployment.
  • Drive initiatives independently with high ownership and accountability.
  • Stay up to date with industry best practices in machine learning, big data, and cloud-native deployments.
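As a small illustration of the feature-engineering step above, here is a pure-Python z-score scaler; real pipelines would use Pandas/Scikit-learn (fitting the scaler on training data only), and the numbers are invented:

```python
from statistics import mean, stdev

def standardize(column):
    """Z-score a numeric feature: subtract the mean, divide by the std dev."""
    mu, sigma = mean(column), stdev(column)
    return [(x - mu) / sigma for x in column]

# Hypothetical feature column (e.g., customer spend).
spend = [10.0, 20.0, 30.0, 40.0]
scaled = standardize(spend)
print([round(v, 3) for v in scaled])  # mean ≈ 0, unit variance
```

Scaling like this keeps features with large raw ranges from dominating distance-based models and gradient updates.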



Requirements:

  • Minimum 5 years of experience in Data Science or Applied Machine Learning.
  • Strong proficiency in Python, SQL, and ML libraries (Pandas, Scikit-learn, TensorFlow, PyTorch).
  • Proven expertise in deploying ML models into production systems.
  • Experience with big data platforms (Hadoop, Spark) and distributed data processing.
  • Hands-on experience with Databricks, Airflow, and AWS EMR.
  • Strong knowledge of AWS cloud services (S3, Lambda, SageMaker, EC2, etc.).
  • Solid understanding of query optimization, storage systems, and data pipelines.
  • Excellent problem-solving skills, with the ability to design scalable solutions.
  • Strong communication and collaboration skills to work in cross-functional teams.



Benefits:

  • Best in class salary: We hire only the best, and we pay accordingly.
  • Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field.
  • Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day.


About Us:

Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media, and Entertainment companies in the world! We’re headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, for client companies with a combined net worth of $45.7 billion.


Today, we are a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting-edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company’s success will be huge. You’ll have the chance to work with experienced leaders who have built and led multiple tech, product, and design teams.

Reltio
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
4 - 7 yrs
Up to ₹50L / yr (varies)
Python
Machine Learning (ML)
Retrieval Augmented Generation (RAG)
Large Language Models (LLM) tuning
Artificial Intelligence (AI)

Job Title: Senior AI Engineer

Location: Bengaluru, India – (Hybrid)


About Reltio

At Reltio®, we believe data should fuel business success. Reltio’s AI-powered data unification and management capabilities—encompassing entity resolution, multi-domain master data management (MDM), and data products—transform siloed data from disparate sources into unified, trusted, and interoperable data.

Reltio Data Cloud™ delivers interoperable data where and when it's needed, empowering data and analytics leaders with unparalleled business responsiveness. Leading enterprise brands across multiple industries around the globe rely on our award-winning data unification and cloud-native MDM capabilities to improve efficiency, manage risk, and drive growth.

At Reltio, our values guide everything we do. With an unyielding commitment to prioritizing our “Customer First”, we strive to ensure their success. We embrace our differences and are “Better Together” as One Reltio. We “Simplify and Share” our knowledge to remove obstacles for each other. We “Own It”, holding ourselves accountable for our actions and outcomes. Every day, we innovate and evolve so that today is “Always Better Than Yesterday.”

If you share and embody these values, we invite you to join our team at Reltio and contribute to our mission of excellence.

Reltio has earned numerous awards and top rankings for our technology, our culture, and our people. Founded on a distributed workforce, Reltio offers flexible work arrangements to help our people manage their personal and professional lives. If you’re ready to work on unrivaled technology as part of a collaborative team on a mission to enable digital transformation with connected data, let’s talk!


Job Summary

As a Senior AI Engineer at Reltio, you will be a core part of the team responsible for building intelligent systems that enhance data quality, automate decision-making, and drive entity resolution at scale.

You will work with cross-functional teams to design and deploy advanced AI/ML solutions that are production-ready, scalable, and embedded into our flagship data platform.

This is a high-impact engineering role with exposure to cutting-edge problems in entity resolution, deduplication, identity stitching, record linking, and metadata enrichment.


Job Duties and Responsibilities

  • Design, implement, and optimize state-of-the-art AI/ML models for solving real-world data management challenges such as entity resolution, classification, similarity matching, and anomaly detection.
  • Work with structured, semi-structured, and unstructured data to extract signals and engineer intelligent features for large-scale ML pipelines.
  • Develop scalable ML workflows using Spark, MLlib, PyTorch, TensorFlow, or MLFlow, with seamless integration into production systems.
  • Translate business needs into technical design and collaborate with data scientists, product managers, and platform engineers to operationalize models.
  • Continuously monitor and improve model performance using feedback loops, A/B testing, drift detection, and retraining strategies.
  • Conduct deep dives into customer data challenges and apply innovative machine learning algorithms to address accuracy, speed, and bias.
  • Actively contribute to research and experimentation efforts, staying updated with the latest AI trends in graph learning, NLP, probabilistic modeling, etc.
  • Document designs and present outcomes to both technical and non-technical stakeholders, fostering transparency and knowledge sharing.
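As a small illustration of the similarity-matching side of entity resolution, the sketch below scores candidate record pairs with TF-IDF character n-grams and cosine similarity; the sample records and the 0.5 link threshold are invented for the example:

```python
# Toy illustration of similarity matching for entity resolution:
# candidate record pairs are scored with TF-IDF over character n-grams
# and linked when cosine similarity clears a threshold. The records
# and the 0.5 threshold are invented for this example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

records = [
    "Acme Corporation, 12 Main St, Springfield",
    "ACME Corp., 12 Main Street, Springfield",
    "Globex Inc, 99 Ocean Ave, Shelbyville",
]

# Character n-grams are robust to abbreviations and casing differences
tfidf = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3))
vectors = tfidf.fit_transform(records)
scores = cosine_similarity(vectors)

# Link record pairs whose similarity clears the threshold
matches = [
    (i, j)
    for i in range(len(records))
    for j in range(i + 1, len(records))
    if scores[i, j] > 0.5
]
print(matches)
```

Production entity resolution adds blocking (to avoid scoring all pairs), trained match models, and transitive clustering on top of this pairwise core.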


Skills You Must Have

  • Bachelor’s or Master’s degree in Computer Science, Machine Learning, Artificial Intelligence, or related field. PhD is a plus.
  • 4+ years of hands-on experience in developing and deploying machine learning models in production environments.
  • Proficiency in Python (NumPy, scikit-learn, pandas, PyTorch/TensorFlow) and experience with large-scale data processing tools (Spark, Kafka, Airflow).
  • Strong understanding of ML fundamentals, including classification, clustering, feature selection, hyperparameter tuning, and evaluation metrics.
  • Demonstrated experience working with entity resolution, identity graphs, or data deduplication.
  • Familiarity with containerized environments (Docker, Kubernetes) and cloud platforms (AWS, GCP, Azure).
  • Strong debugging, analytical, and communication skills with a focus on delivery and impact.
  • Attention to detail, ability to work independently, and a passion for staying updated with the latest advancements in the field of data science.


Skills Good to Have

  • Experience with knowledge graphs, graph-based ML, or embedding techniques.
  • Exposure to deep learning applications in data quality, record matching, or information retrieval.
  • Experience building explainable AI solutions in regulated domains.
  • Prior work in SaaS, B2B enterprise platforms, or data infrastructure companies.
Versatile Commerce LLP
Posted by Burugupally Shailaja
Hyderabad
3 - 6 yrs
₹4L - ₹6L / yr
Selenium
Java
Python
Jenkins
TestNG

We’re Hiring – Automation Test Engineer!

We at Versatile Commerce are looking for passionate Automation Testing Professionals to join our growing team!

📍 Location: Gachibowli, Hyderabad (Work from Office)

💼 Experience: 3 – 5 Years

Notice Period: Immediate Joiners Preferred

What we’re looking for:

✅ Strong experience in Selenium / Cypress / Playwright

✅ Proficient in Java / Python / JavaScript

✅ Hands-on with TestNG / JUnit / Maven / Jenkins

✅ Experience in API Automation (Postman / REST Assured)

✅ Good understanding of Agile Testing & Defect Management Tools (JIRA, Zephyr)

Appiness Interactive Pvt. Ltd.
Posted by S Suriya Kumar
Bengaluru (Bangalore)
3 - 6 yrs
₹4L - ₹30L / yr
Python
Retrieval Augmented Generation (RAG)
Vector database
NodeJS (Node.js)
PostgreSQL

Company Description

Appiness Interactive Pvt. Ltd. is a Bangalore-based product development and UX firm that specializes in digital services for startups through Fortune 500s. We work closely with our clients to create a comprehensive soul for their brand in the online world, engaged through multiple platforms of digital media. Our team is young, passionate, and aggressive, not afraid to think out of the box or tread the untrodden path in order to deliver the best results for our clients. We pride ourselves on Practical Creativity, where an idea is only as good as the returns it fetches for our clients.


Role Overview

We are hiring a Founding Backend Engineer to architect and build the core backend infrastructure for our enterprise AI chat platform. This role involves creating everything from secure chat APIs and data pipelines to document embeddings, vector search, and RAG (Retrieval-Augmented Generation) workflows. You will work directly with the CTO and play a pivotal role in shaping the platform’s architecture, performance, and scalability as we onboard enterprise customers. This is a high-ownership role where you’ll influence product direction, tech decisions, and long-term engineering culture.


Key Responsibilities

● Architect, develop, and scale backend systems and APIs powering AI chat and knowledge retrieval.
● Build data ingestion and processing pipelines for structured and unstructured enterprise data.
● Implement multi-tenant security, user access control (RBAC), encryption, and compliance-friendly design.
● Integrate and orchestrate LLMs (OpenAI, Anthropic, etc.) with vector databases (Pinecone, Qdrant, OpenSearch) to support advanced AI and RAG workflows.
● Ensure platform reliability, performance, and fault tolerance from day one.
● Own end-to-end CI/CD, observability, and deployment pipelines.
● Collaborate directly with leadership on product strategy, architecture, and scaling roadmap.


Required Skills

● Strong hands-on experience in Python (Django/FastAPI) or Node.js (TypeScript) — Python preferred.
● Deep understanding of PostgreSQL, Redis, Docker, and modern API design patterns.
● Experience with LLM integration, RAG pipelines, and vector search technologies.
● Strong exposure to cloud platforms (AWS or GCP), CI/CD, and microservice architecture.
● Solid foundation in security best practices — authentication, RBAC, encryption, data isolation.
● Ability to independently design and deliver high-performance distributed systems.
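The retrieval step of a RAG workflow like the one described above can be sketched with no external services; the bag-of-words "embedding" and sample documents below are toy stand-ins for a real embedding model and a vector database such as Pinecone or Qdrant:

```python
# Dependency-free sketch of RAG retrieval: documents are "embedded" as
# term-frequency vectors, the query is embedded the same way, and the
# best match is returned by cosine similarity. In production, a learned
# embedding model and a vector database replace these toy parts.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: term-frequency vector over lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

# Invented stand-ins for an enterprise knowledge base
documents = [
    "Employees accrue 20 days of paid leave per year.",
    "Expense reports are due by the fifth of each month.",
    "VPN access requires a hardware security token.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document most similar to the query."""
    return max(docs, key=lambda d: cosine(embed(query), embed(d)))

context = retrieve("how many paid leave days do I get", documents)
print(context)  # the leave-policy document ranks highest
```

The retrieved `context` is then injected into the LLM prompt; that injection step, plus chunking and access control on the document store, is where most of the real engineering work in this role lives.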

Deqode
Posted by Shraddha Katare
Pune
2 - 3 yrs
₹7L - ₹15L / yr
DevOps
Amazon Web Services (AWS)
Python
Bash
PowerShell

Role: DevOps Engineer

Experience: 2–3+ years

Location: Pune

Work Mode: Hybrid (3 days work from office)

Mandatory Skills:

  • Strong hands-on experience with CI/CD tools like Jenkins, GitHub Actions, or AWS CodePipeline
  • Proficiency in scripting languages (Bash, Python, PowerShell)
  • Hands-on experience with containerization (Docker) and container management
  • Proven experience managing infrastructure (On-premise or AWS/VMware)
  • Experience with version control systems (Git/Bitbucket/GitHub)
  • Familiarity with monitoring and logging tools for system performance tracking
  • Knowledge of security best practices and compliance standards
  • Bachelor's degree in Computer Science, Engineering, or related field
  • Willingness to support production issues during odd hours when required
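Scripting in this role often means small glue tools for monitoring and pipelines; as a hedged illustration, here is a Python sketch of a log-scanning check that a CI/CD or alerting job might run (the log format and the error threshold are invented):

```python
# Hypothetical monitoring glue script: scan service logs, count ERROR
# lines per service, and flag services above an alert threshold.
# The log format and the threshold of 2 are assumptions for the example.
from collections import Counter

LOG_LINES = [
    "2024-05-01T10:00:01 api INFO request served",
    "2024-05-01T10:00:02 api ERROR upstream timeout",
    "2024-05-01T10:00:03 worker ERROR job failed",
    "2024-05-01T10:00:04 api ERROR upstream timeout",
    "2024-05-01T10:00:05 worker INFO job done",
]

ALERT_THRESHOLD = 2  # max tolerated ERROR lines per service

def errors_by_service(lines: list[str]) -> Counter:
    """Count ERROR entries per service (second field of each line)."""
    counts: Counter = Counter()
    for line in lines:
        _, service, level, *_ = line.split()
        if level == "ERROR":
            counts[service] += 1
    return counts

counts = errors_by_service(LOG_LINES)
alerts = [svc for svc, n in counts.items() if n >= ALERT_THRESHOLD]
print(counts, alerts)
```

In a real setup the same logic would read from a log aggregator or CloudWatch rather than an in-memory list, and the alert list would feed a pager or chat webhook.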

Preferred Qualifications:

  • Certifications in AWS, Docker, or VMware
  • Experience with configuration management tools like Ansible
  • Exposure to Agile and DevOps methodologies
  • Hands-on experience with Virtual Machines and Container orchestration


ZestFindz Private Limited
Posted by ZestFindz Info Desk
Hyderabad
3 - 7 yrs
₹10L - ₹20L / yr
React.js
NodeJS (Node.js)
Express
JavaScript
TypeScript

We are looking for a highly skilled Senior Full Stack Developer / Tech Lead to drive end-to-end development of scalable, secure, and high-performance applications. The ideal candidate will have strong expertise in React.js, Node.js, PostgreSQL, MongoDB, Python, AI/ML, and Google Cloud platforms (GCP). You will play a key role in architecture design, mentoring developers, ensuring best coding practices, and integrating AI/ML solutions into our products.

This role requires a balance of hands-on coding, system design, cloud deployment, and leadership.


Key Responsibilities

  • Design, develop, and deploy scalable full-stack applications using React.js, Node.js, PostgreSQL, and MongoDB.
  • Build, consume, and optimize REST APIs and GraphQL services.
  • Develop AI/ML models with Python and integrate them into production systems.
  • Implement CI/CD pipelines, containerization (Docker, Kubernetes), and cloud deployments (GCP/AWS).
  • Manage security, authentication (JWT, OAuth2), and performance optimization.
  • Use Redis for caching, session management, and queue handling.
  • Lead and mentor junior developers, conduct code reviews, and enforce coding standards.
  • Collaborate with cross-functional teams (product, design, QA) for feature delivery.
  • Monitor and optimize system performance, scalability, and cost-efficiency.
  • Own technical decisions and contribute to long-term architecture strategy.
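The caching and session-handling role Redis plays in this stack can be illustrated with a tiny in-process TTL cache; in production the same set/get-with-expiry calls would go to Redis, and the sub-second TTL below exists only to keep the demo fast:

```python
# In-process sketch of the caching pattern Redis provides in this stack:
# values expire after a TTL, and expired keys are lazily evicted on read
# (similar in spirit to Redis key expiry). TTL and keys are invented.
import time

class TTLCache:
    """Tiny key-value cache where entries expire after ttl_seconds."""

    def __init__(self, ttl_seconds: float) -> None:
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, object]] = {}

    def set(self, key: str, value: object) -> None:
        self._store[key] = (time.monotonic() + self.ttl, value)

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazy eviction on read
            return None
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("session:42", {"user": "alice"})
print(cache.get("session:42"))  # cache hit
time.sleep(0.06)
print(cache.get("session:42"))  # expired -> None
```

The design choice worth noting: an in-process cache disappears with the process and is not shared across instances, which is exactly why a shared store like Redis is used for sessions and queues in multi-instance deployments.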
Agentic AI Platform
Agency job via Peak Hire Solutions by Dhara Thakkar
Gurugram
4 - 6 yrs
₹20L - ₹50L / yr
Python
NodeJS (Node.js)
Java
Software Development
Angular (2+)

Review Criteria

  • Strong Software Engineer, Engineering Manager Profiles
  • Must have minimum 4+ years of hands-on experience in software development
  • Must have 3+ years of hands-on experience in backend development using Java / Node.js / Go / Python (any 1).
  • Must have experience or familiarity with frontend frameworks such as React / Angular / Vue.
  • Must have at least 1+ year of experience leading or mentoring a team of software engineers.
  • Must have a solid understanding of microservices architecture, APIs, and cloud platforms (AWS / GCP / Azure).
  • Must have hands-on experience working with Docker, Kubernetes, and CI/CD pipelines.
  • Top-tier/renowned product-based company (Enterprise B2B SaaS preferred)


Preferred

  • Experience in building or scaling SaaS / platform-based products
  • Exposure to GenAI / LLMs, data pipelines, or workflow automation tools
  • Prior experience in a startup or high-growth product environment


Role & Responsibilities

We are looking for a Software Engineering Manager to lead a high-performing team focused on building scalable, secure, and intelligent enterprise software. The ideal candidate is a strong technologist who enjoys coding, mentoring, and driving high-quality software delivery in a fast-paced startup environment.


Key Responsibilities:

  • Lead and mentor a team of software engineers across backend, frontend, and integration areas.
  • Drive architectural design, technical reviews, and ensure scalability and reliability.
  • Collaborate with Product, Design, and DevOps teams to deliver high-quality releases on time.
  • Establish best practices in agile development, testing automation, and CI/CD pipelines.
  • Build reusable frameworks for low-code app development and AI-driven workflows.
  • Hire, coach, and develop engineers to strengthen technical capabilities and team culture.


Ideal Candidate

  • B.Tech/B.E. in Computer Science from a Tier-1 Engineering College.
  • 3+ years of professional experience as a software engineer, with at least 1 year mentoring or managing engineers.
  • Strong expertise in backend development (Java / Node.js / Go / Python) and familiarity with frontend frameworks (React / Angular / Vue).
  • Solid understanding of microservices, APIs, and cloud architectures (AWS/GCP/Azure).
  • Experience with Docker, Kubernetes, and CI/CD pipelines.
  • Excellent communication and problem-solving skills.


Preferred Qualifications:

  • Experience building or scaling SaaS or platform-based products.
  • Exposure to GenAI/LLM, data pipelines, or workflow automation tools.
  • Prior experience in a startup or high-growth product environment.



Agentic AI Platform
Agency job via Peak Hire Solutions by Dhara Thakkar
Gurugram
3 - 6 yrs
₹10L - ₹25L / yr
DevOps
Python
Google Cloud Platform (GCP)
Linux/Unix
CI/CD

Review Criteria

  • Strong DevOps /Cloud Engineer Profiles
  • Must have 3+ years of experience as a DevOps / Cloud Engineer
  • Must have strong expertise in cloud platforms – AWS / Azure / GCP (any one or more)
  • Must have strong hands-on experience in Linux administration and system management
  • Must have hands-on experience with containerization and orchestration tools such as Docker and Kubernetes
  • Must have experience in building and optimizing CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins
  • Must have hands-on experience with Infrastructure-as-Code tools such as Terraform, Ansible, or CloudFormation
  • Must be proficient in scripting languages such as Python or Bash for automation
  • Must have experience with monitoring and alerting tools like Prometheus, Grafana, ELK, or CloudWatch
  • Top tier Product-based company (B2B Enterprise SaaS preferred)


Preferred

  • Experience in multi-tenant SaaS infrastructure scaling.
  • Exposure to AI/ML pipeline deployments or iPaaS / reverse ETL connectors.


Role & Responsibilities

We are seeking a DevOps Engineer to design, build, and maintain scalable, secure, and resilient infrastructure for our SaaS platform and AI-driven products. The role will focus on cloud infrastructure, CI/CD pipelines, container orchestration, monitoring, and security automation, enabling rapid and reliable software delivery.


Key Responsibilities:

  • Design, implement, and manage cloud-native infrastructure (AWS/Azure/GCP).
  • Build and optimize CI/CD pipelines to support rapid release cycles.
  • Manage containerization & orchestration (Docker, Kubernetes).
  • Own infrastructure-as-code (Terraform, Ansible, CloudFormation).
  • Set up and maintain monitoring & alerting frameworks (Prometheus, Grafana, ELK, etc.).
  • Drive cloud security automation (IAM, SSL, secrets management).
  • Partner with engineering teams to embed DevOps into SDLC.
  • Troubleshoot production issues and drive incident response.
  • Support multi-tenant SaaS scaling strategies.


Ideal Candidate

  • 3–6 years' experience as DevOps/Cloud Engineer in SaaS or enterprise environments.
  • Strong expertise in AWS, Azure, or GCP.
  • Strong expertise in LINUX Administration.
  • Hands-on with Kubernetes, Docker, CI/CD tools (GitHub Actions, GitLab, Jenkins).
  • Proficient in Terraform/Ansible/CloudFormation.
  • Strong scripting skills (Python, Bash).
  • Experience with monitoring stacks (Prometheus, Grafana, ELK, CloudWatch).
  • Strong grasp of cloud security best practices.



Synorus
Posted by Synorus Admin
Remote only
0 - 1 yrs
₹0.2L - ₹1L / yr
Google colab
Retrieval Augmented Generation (RAG)
Large Language Models (LLM) tuning
skill iconPython
PyTorch
+3 more

About Synorus

Synorus is building a next-generation ecosystem of AI-first products. Our flagship legal-AI platform LexVault is redefining legal research, drafting, knowledge retrieval, and case intelligence using domain-tuned LLMs, private RAG pipelines, and secure reasoning systems.

If you are passionate about AI, legaltech, and training high-performance models — this internship will put you on the front line of innovation.


Role Overview

We are seeking passionate AI/LLM Engineering Interns who can:

  • Fine-tune LLMs for legal domain use-cases
  • Train and experiment with open-source foundation models
  • Work with large datasets efficiently
  • Build RAG pipelines and text-processing frameworks
  • Run model training workflows on Google Colab / Kaggle / Cloud GPUs

This is a hands-on engineering and research internship — you will work directly with senior founders & technical leadership.

Key Responsibilities

  • Fine-tune transformer-based models (Llama, Mistral, Gemma, etc.)
  • Build and preprocess legal datasets at scale
  • Develop efficient inference & training pipelines
  • Evaluate models for accuracy, hallucinations, and trustworthiness
  • Implement RAG architectures (vector DBs + embeddings)
  • Work with GPU environments (Colab/Kaggle/Cloud)
  • Contribute to model improvements, prompt engineering & safety tuning

Must-Have Skills

  • Strong knowledge of Python & PyTorch
  • Understanding of LLMs, Transformers, Tokenization
  • Hands-on experience with HuggingFace Transformers
  • Familiarity with LoRA/QLoRA, PEFT training
  • Data wrangling: Pandas, NumPy, tokenizers
  • Ability to handle multi-GB datasets efficiently
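"Handling multi-GB datasets efficiently" in practice usually means streaming rather than loading everything into memory at once; the pandas sketch below processes a CSV in fixed-size chunks (the file itself is a small synthetic stand-in):

```python
# Sketch of chunked dataset processing: a CSV too large for memory is
# streamed in fixed-size chunks and aggregated incrementally. The file
# here is a small synthetic stand-in written to a temp directory.
import os
import tempfile

import pandas as pd

path = os.path.join(tempfile.gettempdir(), "corpus_stats.csv")

# Build a toy "large" dataset on disk
pd.DataFrame({
    "doc_id": range(10_000),
    "n_tokens": [len(f"sample text {i}".split()) for i in range(10_000)],
}).to_csv(path, index=False)

# Stream it back in chunks instead of loading it all at once
total_rows = 0
total_tokens = 0
for chunk in pd.read_csv(path, chunksize=1_000):
    total_rows += len(chunk)
    total_tokens += chunk["n_tokens"].sum()

print(total_rows, total_tokens)
```

The same chunked pattern applies to tokenizing legal corpora for fine-tuning: peak memory stays bounded by the chunk size, not the dataset size.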

Bonus Skills

(Not mandatory — but a strong plus)

  • Experience with RAG / vector DBs (Chroma, Qdrant, LanceDB)
  • Familiarity with vLLM, llama.cpp, GGUF
  • Worked on summarization, Q&A or document-AI projects
  • Knowledge of legal texts (Indian laws/case-law/statutes)
  • Open-source contributions or research work

What You Will Gain

  • Real-world training on LLM fine-tuning & legal AI
  • Exposure to production-grade AI pipelines
  • Direct mentorship from engineering leadership
  • Research + industry project portfolio
  • Letter of experience + potential full-time offer

Ideal Candidate

  • You experiment with models on weekends
  • You love pushing GPUs to their limits
  • You prefer research + implementation over theory alone
  • You want to build AI that matters — not just demos


Location - Remote

Stipend - 5K - 10K

Data Havn
Agency job via Infinium Associate by Toshi Srivastava
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
4 - 6 yrs
₹30L - ₹35L / yr
Fullstack Developer
Mobile App Development
Google Cloud Platform (GCP)
React Native
Flutter

DataHavn IT Solutions is a company that specializes in big data and cloud computing, artificial intelligence and machine learning, application development, and consulting services. We aim to be a frontrunner in anything to do with data, and we have the expertise to transform customer businesses by making the right use of their data.

About the Role

We're seeking a talented and versatile Full Stack Developer with a strong foundation in mobile app development to join our dynamic team. You'll play a pivotal role in designing, developing, and maintaining high-quality software applications across various platforms.

Responsibilities

  • Full Stack Development: Design, develop, and implement both front-end and back-end components of web applications using modern technologies and frameworks.
  • Mobile App Development: Develop native mobile applications for iOS and Android platforms using Swift and Kotlin, respectively.
  • Cross-Platform Development: Explore and utilize cross-platform frameworks (e.g., React Native, Flutter) for efficient mobile app development.
  • API Development: Create and maintain RESTful APIs for integration with front-end and mobile applications.
  • Database Management: Work with databases (e.g., MySQL, PostgreSQL) to store and retrieve application data.
  • Code Quality: Adhere to coding standards, best practices, and ensure code quality through regular code reviews.
  • Collaboration: Collaborate effectively with designers, project managers, and other team members to deliver high-quality solutions.

Qualifications

  • Bachelor's degree in Computer Science, Software Engineering, or a related field.
  • Strong programming skills in relevant languages (e.g., JavaScript, Python, Java).
  • Experience with relevant frameworks and technologies (e.g., React, Angular, Node.js, Swift, Kotlin).
  • Understanding of software development methodologies (e.g., Agile, Waterfall).
  • Excellent problem-solving and analytical skills.
  • Ability to work independently and as part of a team.
  • Strong communication and interpersonal skills.

Preferred Skills (Optional)

  • Experience with cloud platforms (e.g., AWS, Azure, GCP).
  • Knowledge of DevOps practices and tools.
  • Experience with serverless architectures.
  • Contributions to open-source projects.

What We Offer

  • Competitive salary and benefits package.
  • Opportunities for professional growth and development.
  • A collaborative and supportive work environment.
  • A chance to work on cutting-edge projects.


Data Havn
Agency job via Infinium Associate by Toshi Srivastava
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
4 - 6 yrs
₹40L - ₹45L / yr
R Programming
Google Cloud Platform (GCP)
Data Science
Python
Data Visualization

DataHavn IT Solutions is a company that specializes in big data and cloud computing, artificial intelligence and machine learning, application development, and consulting services. We aim to be a frontrunner in anything to do with data, and we have the expertise to transform customer businesses by making the right use of their data.

About the Role:

As a Data Scientist specializing in Google Cloud, you will play a pivotal role in driving data-driven decision-making and innovation within our organization. You will leverage the power of Google Cloud's robust data analytics and machine learning tools to extract valuable insights from large datasets, develop predictive models, and optimize business processes.

Key Responsibilities:

  • Data Ingestion and Preparation:
      • Design and implement efficient data pipelines for ingesting, cleaning, and transforming data from various sources (e.g., databases, APIs, cloud storage) into Google Cloud Platform (GCP) data warehouses (BigQuery) or data lakes, using services such as Dataflow.
      • Perform data quality assessments, handle missing values, and address inconsistencies to ensure data integrity.
  • Exploratory Data Analysis (EDA):
      • Conduct in-depth EDA to uncover patterns, trends, and anomalies within the data.
      • Utilize visualization techniques (e.g., Tableau, Looker) to communicate findings effectively.
  • Feature Engineering:
      • Create relevant features from raw data to enhance model performance and interpretability.
      • Explore techniques like feature selection, normalization, and dimensionality reduction.
  • Model Development and Training:
      • Develop and train predictive models using machine learning algorithms (e.g., linear regression, logistic regression, decision trees, random forests, neural networks) on GCP platforms like Vertex AI.
      • Evaluate model performance using appropriate metrics and iterate on the modeling process.
  • Model Deployment and Monitoring:
      • Deploy trained models into production environments using GCP's ML tools and infrastructure.
      • Monitor model performance over time, identify drift, and retrain models as needed.
  • Collaboration and Communication:
      • Work closely with data engineers, analysts, and business stakeholders to understand their requirements and translate them into data-driven solutions.
      • Communicate findings and insights in a clear and concise manner, using visualizations and storytelling techniques.
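Two of the feature-engineering techniques named above, normalization and dimensionality reduction, can be shown in a short scikit-learn sketch; the synthetic, deliberately redundant data and the 95% variance target are illustrative choices:

```python
# Illustrative feature-engineering step: standardize features, then
# reduce dimensionality with PCA while keeping 95% of the variance.
# The data is synthetic, built with heavy redundancy on purpose so
# that PCA has something to compress.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# 500 samples, 10 raw features derived from only 3 latent factors
base = rng.normal(size=(500, 3))
X = np.hstack([base, base @ rng.normal(size=(3, 7))])

X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=0.95)  # keep components explaining 95% variance
X_reduced = pca.fit_transform(X_scaled)

print(X.shape, "->", X_reduced.shape)
```

Because the ten raw features are built from three latent factors, PCA collapses them down to a handful of components with almost no information loss, which is the usual payoff of this step before model training.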

Required Skills and Qualifications:

  • Strong proficiency in Python or R programming languages.
  • Experience with Google Cloud Platform (GCP) services such as BigQuery, Dataflow, Cloud Dataproc, and Vertex AI.
  • Familiarity with machine learning algorithms and techniques.
  • Knowledge of data visualization tools (e.g., Tableau, Looker).
  • Excellent problem-solving and analytical skills.
  • Ability to work independently and as part of a team.
  • Strong communication and interpersonal skills.

Preferred Qualifications:

  • Experience with cloud-native data technologies (e.g., Apache Spark, Kubernetes).
  • Knowledge of distributed systems and scalable data architectures.
  • Experience with natural language processing (NLP) or computer vision applications.
  • Certifications in Google Cloud Platform or relevant machine learning frameworks.


Data Havn
Agency job via Infinium Associate by Toshi Srivastava
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2.5 - 4.5 yrs
₹10L - ₹20L / yr
Python
SQL
Google Cloud Platform (GCP)
SQL server
ETL

About the Role:


We are seeking a talented Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.

Responsibilities:

  • Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes.
  • Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
  • Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
  • Team Management: Ability to lead and manage a team.
  • Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
  • Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
  • Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
  • Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.

 

Skills:

  • Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
  • Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
  • Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
  • Understanding of data modeling and data architecture concepts.
  • Experience with ETL/ELT tools and frameworks.
  • Excellent problem-solving and analytical skills.
  • Ability to work independently and as part of a team.

Preferred Qualifications:

  • Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
  • Knowledge of machine learning and artificial intelligence concepts.
  • Experience with data visualization tools (e.g., Tableau, Power BI).
  • Certification in cloud platforms or data engineering.
Global Digital Transformation Solutions Provider
Agency job via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
7 - 9 yrs
₹10L - ₹28L / yr
Artificial Intelligence (AI)
Natural Language Processing (NLP)
Python
Data Science
Generative AI

Job Details

Job Title: Lead II - Software Engineering - AI, NLP, Python, Data Science

Industry: Technology

Domain - Information technology (IT)

Experience Required: 7-9 years

Employment Type: Full Time

Job Location: Bangalore

CTC Range: Best in Industry


Job Description:

Role Proficiency:

Act creatively to develop applications by selecting appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions. Account for others' development activities, assisting the Project Manager in day-to-day project execution.


Additional Comments:

Mandatory Skills: Data Science

Skills to Evaluate: AI, Gen AI, RAG, Data Science

Experience: 8 to 10 Years

Location: Bengaluru

Job Description

Job Title: AI Engineer
Mandatory Skills: Artificial Intelligence, Natural Language Processing, Python, Data Science
Position: AI Engineer – LLM & RAG Specialization
Company Name: Sony India Software Centre

About the role:

We are seeking a highly skilled AI Engineer with 8-10 years of experience to join our innovation-driven team. This role focuses on the design, development, and deployment of advanced enterprise-scale Large Language Models (eLLM) and Retrieval Augmented Generation (RAG) solutions. You will work on end-to-end AI pipelines, from data processing to cloud deployment, delivering impactful solutions that enhance Sony’s products and services.

Key Responsibilities:

  • Design, implement, and optimize LLM-powered applications, ensuring high performance and scalability for enterprise use cases.
  • Develop and maintain RAG pipelines, including vector database integration (e.g., Pinecone, Weaviate, FAISS) and embedding model optimization.
  • Deploy, monitor, and maintain AI/ML models in production, ensuring reliability, security, and compliance.
  • Collaborate with product, research, and engineering teams to integrate AI solutions into existing applications and workflows.
  • Research and evaluate the latest LLM and AI advancements, recommending tools and architectures for continuous improvement.
  • Preprocess, clean, and engineer features from large datasets to improve model accuracy and efficiency.
  • Conduct code reviews and enforce AI/ML engineering best practices.
  • Document architecture, pipelines, and results; present findings to both technical and business stakeholders.

Requirements:

  • 8-10 years of professional experience in AI/ML engineering, with at least 4+ years in LLM development and deployment.
  • Proven expertise in RAG architectures, vector databases, and embedding models.
  • Strong proficiency in Python; familiarity with Java, R, or other relevant languages is a plus.
  • Experience with AI/ML frameworks (PyTorch, TensorFlow, etc.) and relevant deployment tools.
  • Hands-on experience with cloud-based AI platforms such as AWS SageMaker, AWS Q Business, AWS Bedrock, or Azure Machine Learning.
  • Experience in designing, developing, and deploying Agentic AI systems, with a focus on creating autonomous agents that can reason, plan, and execute tasks to achieve specific goals.
  • Understanding of security concepts in AI systems, including vulnerabilities and mitigation strategies.
  • Solid knowledge of data processing, feature engineering, and working with large-scale datasets.
  • Experience in designing and implementing AI-native applications and agentic workflows using the Model Context Protocol (MCP) is nice to have.
  • Strong problem-solving skills, analytical thinking, and attention to detail.
  • Excellent communication skills with the ability to explain complex AI concepts to diverse audiences.

Day-to-day responsibilities:

  • Design and deploy AI-driven solutions to address specific security challenges, such as threat detection, vulnerability prioritization, and security automation.
  • Optimize LLM-based models for various security use cases, including chatbot development for security awareness or automated incident response.
  • Implement and manage RAG pipelines for enhanced LLM performance.
  • Integrate AI models with existing security tools, including Endpoint Detection and Response (EDR), Threat and Vulnerability Management (TVM) platforms, and Data Science/Analytics platforms. This will involve working with APIs and understanding data flows.
  • Develop and implement metrics to evaluate the performance of AI models.
  • Monitor deployed models for accuracy and performance and retrain as needed.
  • Adhere to security best practices and ensure that all AI solutions are developed and deployed securely. Consider data privacy and compliance requirements.
  • Work closely with other team members to understand security requirements and translate them into AI-driven solutions.
  • Communicate effectively with stakeholders, including senior management, to present project updates and findings.
  • Stay up to date with the latest advancements in AI/ML and security, and identify opportunities to leverage new technologies to improve our security posture.
  • Maintain thorough documentation of AI models, code, and processes.

What We Offer:

  • Opportunity to work on cutting-edge LLM and RAG projects with global impact.
  • A collaborative environment fostering innovation, research, and skill growth.
  • Competitive salary, comprehensive benefits, and flexible work arrangements.
  • The chance to shape AI-powered features in Sony’s next-generation products.
  • Ability to function in an environment where the team is virtual and geographically dispersed.
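
A RAG pipeline of the kind this role describes reduces to embed, store, and retrieve before generation. The sketch below uses a toy bag-of-words hashing embedder and an in-memory store as illustrative stand-ins; a production system would use a real embedding model and a vector database such as Pinecone, Weaviate, or FAISS, and the document texts here are invented:

```python
import math
from collections import Counter

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy bag-of-words hashing embedder (stand-in for a real embedding model)."""
    vec = [0.0] * dim
    for token, count in Counter(text.lower().split()).items():
        vec[hash(token) % dim] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class VectorStore:
    """Minimal in-memory stand-in for Pinecone/Weaviate/FAISS."""

    def __init__(self) -> None:
        self.items: list[tuple[str, list[float]]] = []

    def add(self, doc: str) -> None:
        self.items.append((doc, embed(doc)))

    def search(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        # Cosine similarity; vectors are already L2-normalised.
        scored = sorted(self.items,
                        key=lambda it: -sum(a * b for a, b in zip(q, it[1])))
        return [doc for doc, _ in scored[:k]]

store = VectorStore()
for doc in ("PlayStation network telemetry guide",
            "Incident response runbook for phishing",
            "Camera firmware release notes"):
    store.add(doc)

# Retrieved context is prepended to the LLM prompt ("augmented generation").
context = store.search("incident response runbook for phishing", k=1)[0]
prompt = f"Using only this context:\n{context}\n\nAnswer the analyst's question."
```

The same embed/store/search interface is what the managed vector databases expose; swapping in a real embedding model and persistent index changes the internals, not the shape of the pipeline.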

Education Qualification: Graduate


Skills: AI, NLP, Python, Data science


Must-Haves

Skills

AI, NLP, Python, Data science

NP: Immediate – 30 Days

 

Read more
Nuware Systems
Ariba Khan
Posted by Ariba Khan
Bengaluru (Bangalore)
6 - 13 yrs
Upto ₹40L / yr (Varies)
Perl
SQL
skill iconPython

About Nuware

NuWare is a global technology and IT services company built on the belief that organizations require transformational strategies to scale, grow and build into the future owing to a dynamically evolving ecosystem. We strive towards our clients’ success in today’s hyper-competitive market by servicing their needs with next-gen technologies - AI/ML, NLP, chatbots, digital and automation tools.


We empower businesses to enhance their competencies, processes and technologies to fully leverage opportunities and accelerate impact. Through our focus on market differentiation and innovation - we offer services that are agile, streamlined, efficient and customer-centric.


Headquartered in Iselin, NJ, NuWare has been creating business value and generating growth opportunities for clients through its network of partners, global resources, highly skilled talent and SME’s for 25 years. NuWare is technology agnostic and offers services for Systems Integration, Cloud, Infrastructure Management, Mobility, Test automation, Data Sciences and Social & Big Data Analytics.


Required Skills

  • Hands-on experience working with Perl
  • Experience with batch schedulers such as crontab, Control-M, or Airflow
  • Hands-on experience with SQL
  • Moderate experience with Python
  • Strong soft skills
Read more
Semiconductor Manufacturing Industry

Semiconductor Manufacturing Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Chennai
5 - 8 yrs
₹40L - ₹48L / yr
skill iconPython
skill iconMachine Learning (ML)
Image Processing
skill iconDeep Learning
Algorithms
+28 more

🎯 Ideal Candidate Profile:

This role requires a seasoned engineer/scientist with a strong academic background from a premier institution and significant hands-on experience in deep learning (specifically image processing) within a hardware or product manufacturing environment.


📋 Must-Have Requirements:

Experience & Education Combinations:

Candidates must meet one of the following criteria:

  • Doctorate (PhD) + 2 years of related work experience
  • Master's Degree + 5 years of related work experience
  • Bachelor's Degree + 7 years of related work experience


Technical Skills:

  • Minimum 5 years of hands-on experience in all of the following:
  • Python
  • Deep Learning (DL)
  • Machine Learning (ML)
  • Algorithm Development
  • Image Processing
  • 3.5 to 4 years of strong proficiency with PyTorch OR TensorFlow / Keras.


Industry & Institute:

  • Education: Must be from a premier institute (IIT, IISC, IIIT, NIT, BITS) or a recognized regional tier 1 college.
  • Industry: Current or past experience in a Product, Semiconductor, or Hardware Manufacturing company is mandatory.
  • Preference: Candidates from engineering product companies are strongly preferred.


ℹ️ Additional Role Details:

  • Interview Process: 3 technical rounds followed by 1 HR round.
  • Work Model: Hybrid (requiring 3 days per week in the office).



📝 Required Skills and Competencies:

💻 Programming & ML Prototyping:

  • Strong Proficiency: Python, Data Structures, and Algorithms.
  • Hands-on Experience: NumPy, Pandas, Scikit-learn (for ML prototyping).


🤖 Machine Learning Frameworks:

  • Core Concepts: Solid understanding of:
  • Supervised/Unsupervised Learning
  • Regularization
  • Feature Engineering
  • Model Selection
  • Cross-Validation
  • Ensemble Methods: Experience with models like XGBoost and LightGBM.


🧠 Deep Learning Techniques:

  • Frameworks: Proficiency with PyTorch OR TensorFlow / Keras.
  • Architectures: Knowledge of:
  • Convolutional Neural Networks (CNNs)
  • Recurrent Neural Networks (RNNs)
  • Long Short-Term Memory networks (LSTMs)
  • Transformers
  • Attention Mechanisms
  • Optimization: Familiarity with optimization techniques (e.g., Adam, SGD), Dropout, and Batch Normalization.


💬 LLMs & RAG (Retrieval-Augmented Generation):

  • Hugging Face: Experience with the Transformers library (tokenizers, embeddings, model fine-tuning).
  • Vector Databases: Familiarity with Milvus, FAISS, Pinecone, or ElasticSearch.
  • Advanced Techniques: Proficiency in:
  • Prompt Engineering
  • Function/Tool Calling
  • JSON Schema Outputs
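
Function/tool calling with JSON Schema outputs, as listed above, comes down to the model emitting arguments that validate against a declared schema before the tool runs. A minimal hand-rolled validator follows; the `get_wafer_yield` tool and its fields are invented for illustration, and real stacks use the provider's tool-calling API plus a library like `jsonschema` rather than this sketch:

```python
import json

# Tool declaration in the JSON-Schema style used by most LLM tool-calling APIs.
GET_WAFER_YIELD = {
    "name": "get_wafer_yield",
    "parameters": {
        "type": "object",
        "properties": {
            "lot_id": {"type": "string"},
            "station": {"type": "string"},
        },
        "required": ["lot_id"],
    },
}

def validate_call(schema: dict, raw_arguments: str) -> dict:
    """Check a model-emitted JSON argument string against the tool's schema."""
    args = json.loads(raw_arguments)
    params = schema["parameters"]
    for field in params["required"]:
        if field not in args:
            raise ValueError(f"missing required field: {field}")
    py_types = {"string": str, "number": (int, float), "object": dict}
    for field, spec in params["properties"].items():
        if field in args and not isinstance(args[field], py_types[spec["type"]]):
            raise ValueError(f"bad type for field: {field}")
    return args

# A well-formed "tool call" the model might emit:
args = validate_call(GET_WAFER_YIELD, '{"lot_id": "LOT-042", "station": "etch-3"}')
```

Rejecting malformed arguments at this boundary is what keeps tool calls deterministic even when the model's output is not.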


🛠️ Data & Tools:

  • Data Management: SQL fundamentals; exposure to data wrangling and pipelines.
  • Tools: Experience with Git/GitHub, Jupyter, and basic Docker.


🎓 Minimum Qualifications (Experience & Education Combinations):

Candidates must have experience building AI systems/solutions with Machine Learning, Deep Learning, and LLMs, meeting one of the following criteria:

  • Doctorate (Academic) Degree + 2 years of related work experience.
  • Master's Level Degree + 5 years of related work experience.
  • Bachelor's Level Degree + 7 years of related work experience.


⭐ Preferred Traits and Mindset:

  • Academic Foundation: Solid academic background with strong applied ML/DL exposure.
  • Curiosity: Eagerness to learn cutting-edge AI and willingness to experiment.
  • Communication: Clear communicator who can explain ML/LLM trade-offs simply.
  • Ownership: Strong problem-solving and ownership mindset.
Read more
ZestFindz Private Limited

at ZestFindz Private Limited

2 candid answers
ZestFindz Info Desk
Posted by ZestFindz Info Desk
Hyderabad
1 - 3 yrs
₹2L - ₹6L / yr
skill iconReact.js
skill iconNodeJS (Node.js)
skill iconExpress
skill iconJavascript
TypeScript
+16 more

We are seeking a talented Full Stack Developer to design, build, and maintain scalable web and mobile applications. The ideal candidate should have hands-on experience in frontend (React.js, Next.js), backend (Node.js, Express), databases (PostgreSQL, MongoDB), and Python for AI/ML integration. You will work closely with the engineering team to deliver secure, high-performance, and user-friendly products.


Key Responsibilities

  • Develop responsive and dynamic web applications using React.js, Next.js and modern UI frameworks.
  • Build and optimize REST APIs and backend services with Node.js and Express.js.
  • Design and manage PostgreSQL and MongoDB databases, ensuring optimized queries and data modeling.
  • Implement state management using Redux/Context API.
  • Ensure API security with JWT, OAuth2, Helmet.js, and rate-limiting.
  • Integrate Google Cloud services (GCP) for hosting, storage, and serverless functions.
  • Deploy and maintain applications using CI/CD pipelines, Docker, and Kubernetes.
  • Use Redis for caching, sessions, and job queues.
  • Optimize frontend performance (lazy loading, code splitting, caching strategies).
  • Collaborate with design, QA, and product teams to deliver high-quality features.
  • Maintain clear documentation and follow coding standards.





Read more
MARS Telecom Systems

at MARS Telecom Systems

2 candid answers
Nikita Sinha
Posted by Nikita Sinha
Hyderabad
7 - 10 yrs
Upto ₹35L / yr (Varies)
Ubuntu
skill iconC
skill iconC#
skill iconC++
skill iconPython
+1 more

We’re seeking experienced Embedded/Application Engineers with strong hands-on experience in firmware development, computer vision, and modern UI frameworks. You’ll work across low-level and application layers, building performance-critical systems and intuitive frontends.


Key Responsibilities

  • Design, develop, and maintain firmware and application-level code on Ubuntu/Linux.
  • Build and integrate C/C++/Python modules, including computer-vision components.
  • Develop Angular-based frontends to interface with backend applications.
  • Optimize code for speed, stability, and maintainability across environments.
  • Collaborate with cross-functional teams (hardware, QA, and UI) to deliver end-to-end solutions.
  • Participate in peer reviews, testing, debugging, and CI/CD pipelines.

Required Skills

  • Strong experience with C firmware and Linux (Ubuntu 18+) environments.
  • Proficiency in C#, C++, and Python programming.
  • Hands-on experience in Computer Vision / Image Processing.
  • Experience in Angular frontend development.
  • Knowledge of build systems, version control (Git), and Agile practices.

Good to Have

  • Exposure to cross-platform frameworks.
  • Familiarity with embedded device drivers or hardware integration.
  • Basic knowledge of cloud or container-based deployments.


Read more
Corridor Platforms

at Corridor Platforms

3 recruiters
Aniket Agrawal
Posted by Aniket Agrawal
Bengaluru (Bangalore)
4 - 8 yrs
₹30L - ₹50L / yr
skill iconPython
PySpark
Apache Spark
NumPy
pandas
+8 more

About Corridor Platforms

Corridor Platforms is a leader in next-generation risk decisioning and responsible AI governance, empowering banks and lenders to build transparent, compliant, and data-driven solutions. Our platforms combine advanced analytics, real-time data integration, and GenAI to support complex financial decision workflows for regulated industries.

Role Overview

As a Backend Engineer at Corridor Platforms, you will:

  • Architect, develop, and maintain backend components for our Risk Decisioning Platform.
  • Build and orchestrate scalable backend services that automate, optimize, and monitor high-value credit and risk decisions in real time.
  • Integrate with ORM layers such as SQLAlchemy and multi-RDBMS solutions (Postgres, MySQL, Oracle, MSSQL, etc.) to ensure data integrity, scalability, and compliance.
  • Collaborate closely with Product Team, Data Scientists, QA Teams to create extensible APIs, workflow automation, and AI governance features.
  • Architect workflows for privacy, auditability, versioned traceability, and role-based access control, ensuring adherence to regulatory frameworks.
  • Take ownership from requirements to deployment, seeing your code deliver real impact in the lives of customers and end users.
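
The auditability, versioned traceability, and role-based access control workflows above typically reduce to append-only records keyed by actor, role, and model version. A sketch with stdlib `sqlite3` is below; the table, role names, and fields are invented for illustration, and the real platform works through SQLAlchemy over Postgres/MySQL/Oracle/MSSQL rather than raw SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE decision_audit (
        id INTEGER PRIMARY KEY,
        actor TEXT NOT NULL,
        role TEXT NOT NULL,
        model_version TEXT NOT NULL,
        decision TEXT NOT NULL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

ALLOWED_ROLES = {"risk_officer", "admin"}  # role-based access control

def record_decision(actor: str, role: str, model_version: str, decision: str) -> None:
    """Append an audit row; only permitted roles may record decisions."""
    if role not in ALLOWED_ROLES:
        raise PermissionError(f"role {role!r} may not record decisions")
    conn.execute(
        "INSERT INTO decision_audit (actor, role, model_version, decision) "
        "VALUES (?, ?, ?, ?)",
        (actor, role, model_version, decision),
    )

record_decision("alice", "risk_officer", "credit-model-v3", "approve")
rows = conn.execute(
    "SELECT actor, model_version, decision FROM decision_audit").fetchall()
```

Because every row carries the model version, a regulator's "why was this approved?" question resolves to a single indexed lookup.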

Technical Skills

  • Languages: Python 3.9+, SQL, JavaScript/TypeScript, Angular
  • Frameworks: Flask, SQLAlchemy, Celery, Marshmallow, Apache Spark
  • Databases: PostgreSQL, Oracle, SQL Server, Redis
  • Tools: pytest, Docker, Git, Nx
  • Cloud: Experience with AWS, Azure, or GCP preferred
  • Monitoring: Familiarity with OpenTelemetry and logging frameworks


Why Join Us?

  • Cutting-Edge Tech: Work hands-on with the latest AI, cloud-native workflows, and big data tools—all within a single compliant platform.
  • End-to-End Impact: Contribute to mission-critical backend systems, from core data models to live production decision services.
  • Innovation at Scale: Engineer solutions that process vast data volumes, helping financial institutions innovate safely and effectively.
  • Mission-Driven: Join a passionate team advancing fair, transparent, and compliant risk decisioning at the forefront of fintech and AI governance.

What We’re Looking For

  • Proficiency in Python, SQLAlchemy (or similar ORM), and SQL databases.
  • Experience developing and maintaining scalable backend services, including APIs, data orchestration, ML workflows, and workflow automation.
  • Solid understanding of data modeling, distributed systems, and backend architecture for regulated environments.
  • Curiosity and drive to work at the intersection of AI/ML, fintech, and regulatory technology.
  • Experience mentoring and guiding junior developers.


Ready to build backends that shape the future of decision intelligence and responsible AI?

Apply now and become part of the innovation at Corridor Platforms!



Read more
Chtrbox
Smruti Kedare
Posted by Smruti Kedare
Mumbai
2 - 8 yrs
₹10L - ₹18L / yr
skill iconMongoDB
skill iconAmazon Web Services (AWS)
RESTful APIs
API
skill iconNextJs (Next.js)
+2 more

Backend Engineer (MongoDB / API Integrations / AWS / Vectorization)


Position Summary

We are hiring a Backend Engineer with expertise in MongoDB, data vectorization, and advanced AI/LLM integrations. The ideal candidate will have hands-on experience developing backend systems that power intelligent data-driven applications, including robust API integrations with major social media platforms (Meta, Instagram, Facebook, with expansion to TikTok, Snapchat, etc.). In addition, this role requires deep AWS experience (Lambda, S3, EventBridge) to manage serverless workflows, automate cron jobs, and execute both scheduled and manual data pulls. You will collaborate closely with frontend developers and AI engineers to deliver scalable, resilient APIs that power our platform.


Key Responsibilities

  • Design, implement, and maintain backend services with MongoDB and scalable data models.
  • Build pipelines to vectorize data for retrieval-augmented generation (RAG) and other AI-driven features.
  • Develop robust API integrations with major social platforms (Meta, Instagram Graph API, Facebook API; expand to TikTok, Snapchat, etc.).
  • Implement and maintain AWS Lambda serverless functions for scalable backend processes.
  • Use AWS EventBridge to schedule cron jobs and manage event-driven workflows.
  • Leverage AWS S3 for structured and unstructured data storage, retrieval, and processing.
  • Build workflows for manual and automated data pulls from external APIs.
  • Optimize backend systems for performance, scalability, and reliability at high data volumes.
  • Collaborate with frontend engineers to ensure smooth integration into Next.js applications.
  • Ensure security, compliance, and best practices in API authentication (OAuth, tokens, etc.).
  • Contribute to architecture planning, documentation, and system design reviews.
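
The scheduled-pull workflow above hinges on an EventBridge rule invoking a Lambda handler. The local sketch below follows the `detail-type: Scheduled Event` shape EventBridge delivers for rate/cron rules, but the platform list and the fetch function are illustrative placeholders, not the real integrations:

```python
import json

def fetch_platform_data(platform: str) -> dict:
    # Placeholder for a real Instagram Graph API / Facebook / TikTok pull.
    return {"platform": platform, "records": 0}

def handler(event: dict, context: object = None) -> dict:
    """Lambda entry point for both scheduled (EventBridge) and manual pulls."""
    scheduled = event.get("detail-type") == "Scheduled Event"
    platforms = event.get("detail", {}).get("platforms", ["instagram", "facebook"])
    results = [fetch_platform_data(p) for p in platforms]
    return {
        "statusCode": 200,
        "body": json.dumps({"scheduled": scheduled, "pulled": len(results)}),
    }

resp = handler({"detail-type": "Scheduled Event",
                "detail": {"platforms": ["instagram"]}})
```

Keeping the handler a plain function of its event makes the manual-pull path trivial: invoke the same function with a hand-built event instead of waiting for the cron rule.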


Required Skills/Qualifications

  • Strong expertise with MongoDB (including Atlas) and schema design.
  • Experience with data vectorization and embeddings (OpenAI, Pinecone, MongoDB Atlas Vector Search, etc.).
  • Proven track record of social media API integrations (Meta, Instagram, Facebook; additional platforms a plus).
  • Proficiency in Node.js, Python, or other backend languages for API development.
  • Deep understanding of AWS services:
  • Lambda for serverless functions.
  • S3 for structured/unstructured data storage.
  • EventBridge for cron jobs, scheduled tasks, and event-driven workflows.
  • Strong understanding of REST and GraphQL API design.
  • Experience with data optimization, caching, and large-scale API performance.


Preferred Skills/Experience

  • Experience with real-time data pipelines (Kafka, Kinesis, or similar).
  • Familiarity with CI/CD pipelines and automated deployments on AWS.
  • Knowledge of serverless architecture best practices.
  • Background in SaaS platform development or data analytics systems.


Read more
Arya

Arya

Agency job
via Arya by Ariba Khan
Noida
1 - 3 yrs
Upto ₹15L / yr (Varies)
skill iconPython
PySpark
skill iconMachine Learning (ML)
Natural Language Processing (NLP)
Artificial Intelligence (AI)

About the Role:

We are looking for a Data Scientist with a strong foundation in geospatial analysis and machine learning to join our team. You’ll work on developing models and insights from satellite imagery for agricultural applications using Google Earth Engine (GEE) and AI tools.


This role is ideal for someone with hands-on project or internship experience in remote sensing or geospatial ML who’s ready to grow into a more focused and impactful role in precision agriculture.


Key Responsibilities:

  • Work with satellite imagery to extract insights about farms, vegetation, and land health.
  • Use Google Earth Engine (JavaScript and Python APIs) for geospatial data processing.
  • Assist in building ML models for:
  • Vegetation and stress index analysis
  • Moisture and crop condition estimation
  • Crop classification and growth monitoring
  • Support the design of data pipelines and automation workflows.
  • Collaborate with team members from agronomy, GIS, and engineering for data integration and delivery.
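
Vegetation indices like those above are simple band arithmetic: NDVI = (NIR − Red) / (NIR + Red). Shown here per-pixel in plain Python with illustrative reflectance values; in practice this runs server-side in Google Earth Engine via `ee.Image.normalizedDifference` over whole Sentinel or Landsat scenes:

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index for one pixel's reflectances."""
    if nir + red == 0:
        return 0.0  # avoid division by zero over no-data pixels
    return (nir - red) / (nir + red)

# Sentinel-2 style reflectances (band 8 = NIR, band 4 = red); values illustrative.
healthy_crop = ndvi(nir=0.45, red=0.05)  # dense green vegetation -> high NDVI
bare_soil = ndvi(nir=0.30, red=0.25)     # sparse cover -> NDVI near zero
```

Healthy vegetation reflects strongly in near-infrared and absorbs red, so NDVI near 1 indicates dense canopy while values near 0 suggest bare soil or stress, which is what makes the index useful for crop-condition monitoring.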

Preferred Qualifications:

  • 1-3 years of experience in data science, remote sensing, GIS, or a related field (internships or projects count!).
  • Hands-on experience with Google Earth Engine.
  • Proficient in Python and familiar with key data science libraries (NumPy, Pandas, Scikit-learn, etc.).
  • Exposure to working with satellite imagery (e.g., Sentinel, Landsat).
  • Some experience with AI/ML models for classification or time-series analysis.
  • Curious, proactive, and comfortable learning in a fast-moving environment.

Nice to Have:

  • Exposure to agriculture, climate, or environmental data projects.
  • Familiarity with cloud platforms (GCP).
  • Basic GIS skills (e.g., QGIS, shapefiles, geospatial formats).
  • Projects or portfolio showcasing relevant work.


What We Offer:

  • A chance to work on high-impact, real-world problems in agriculture.
  • Supportive team environment and opportunities to grow your skills.
  • Flexible work setup and a collaborative, mission-driven culture
Read more
Sonatype

at Sonatype

5 candid answers
Reshika Mendiratta
Posted by Reshika Mendiratta
Hyderabad
5yrs+
Upto ₹28L / yr (Varies)
skill iconPython
skill iconData Analytics
PySpark
Looker
databricks
+6 more

About the Role

At Sonatype, we empower developers with best-in-class tools to build secure, high-quality software at scale. Our mission is to create a world where software is always secure and developers can innovate without fear. Trusted by thousands of organizations, including Fortune 500 companies, we are pioneers in software supply chain management, open-source security, and DevSecOps.

We're looking for a Senior Data Analyst to help us shape the future of secure software development. If you love solving complex problems, working with cutting-edge technologies, and mentoring engineering teams, we’d love to hear from you.


What You’ll Do

As a Senior Data Analyst with 5+ years of demonstrated experience, you will transform complex datasets into actionable insights, build and maintain analytics infrastructure, and partner with cross-functional teams to drive data-informed decision-making and product improvements.

You’ll own the end-to-end analytics lifecycle—from data modeling and dashboard creation to experimentation and KPI development—ensuring that our stakeholders have timely, accurate information to optimize operations and enhance customer experiences.


Key Responsibilities:

  • Using the available data and data models, perform analyses that answer specific data questions and identify trends, patterns, and anomalies
  • Build and maintain dashboards and reports using tools like Looker and Databricks; support monthly reporting requirements
  • Collaborate with data engineers, data scientists, and product teams to support data initiatives for internal use as well as for end customers
  • Present findings and insights to both technical and non-technical audiences – provide visual aids, dashboards, reports, and white papers that explain insights gained through multiple analyses
  • Monitor select data and dashboards for usage anomalies and flag for upsell and cross-sell opportunities
  • Translate business requirements into technical specifications for data queries and models
  • Assist in the development and maintenance of databases and data systems; collect, clean, and validate data from various sources to ensure accuracy and completeness


What You Need

We’re seeking an experienced analyst who thrives in an agile, collaborative environment and enjoys tackling technical challenges.

Minimum Qualifications:

  • Bachelor’s degree in a quantitative field (e.g., Mathematics, Statistics, Computer Science, Economics, Business Analytics)
  • 4+ years of experience in a data analysis or business intelligence role
  • Proficiency in SQL, Python, Scala, PySpark, and other data analyst languages and standards for data querying and manipulation
  • Experience working in a collaborative coding environment (e.g., GitHub)
  • Experience with data science, analysis, and visualization tools (e.g., Databricks, Looker, Spark, Power BI, Plotly)
  • Strong analytical and problem-solving skills with attention to detail
  • Ability to communicate insights clearly and concisely to a variety of stakeholders
  • Understanding of data lakes and data warehousing concepts and experience with data pipelines
  • Knowledge of business systems is a plus (e.g., CRMs, demand generation tools, etc.)
Read more
Apprication pvt ltd

at Apprication pvt ltd

1 recruiter
Adam patel
Posted by Adam patel
Mumbai
2.5 - 4 yrs
₹6L - ₹12L / yr
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
Huggingface
skill iconPython
PyTorch
+13 more

Job Title: AI / Machine Learning Engineer

 Company: Apprication Pvt Ltd

 Location: Goregaon East

 Employment Type: Full-time

 Experience: 2.5-4 Years


  • Bachelor’s or Master’s in Computer Science, Machine Learning, Data Science, or related field.
  • Proven experience of 2.5-4 years as an AI/ML Engineer, Data Scientist, or AI Application Developer.
  • Strong programming skills in Python (TensorFlow, PyTorch, Scikit-learn); familiarity with LangChain, Hugging Face, OpenAI API is a plus.
  • Experience in model deployment, serving, and optimization (FastAPI, Flask, Django, or Node.js).
  • Proficiency with databases (SQL and NoSQL: MySQL, PostgreSQL, MongoDB).
  • Hands-on experience with cloud ML services (SageMaker, Vertex AI, Azure ML) and DevOps tools (Docker, Kubernetes, CI/CD).
  • Knowledge of MLOps practices: model versioning, monitoring, retraining, experiment tracking.
  • Familiarity with frontend frameworks (React.js, Angular, Vue.js) for building AI-driven interfaces (nice to have).
  • Strong understanding of data structures, algorithms, APIs, and distributed systems.
  • Excellent problem-solving, analytical, and communication skills.
  • Develop and maintain ETL pipelines, data preprocessing workflows, and feature engineering processes.
  • Ensure solutions meet security, compliance, and performance standards.
  • Stay updated with the latest research and trends in deep learning, generative AI, and LLMs.
Read more
Hashone Careers

at Hashone Careers

2 candid answers
Madhavan I
Posted by Madhavan I
Hyderabad
6 - 10 yrs
₹14L - ₹28L / yr
skill iconPython
Terraform
skill iconKubernetes

Job Description

Position: Senior DevOps Engineer

Grade: Senior Level

Experience: 6-10 Years

Location: Hyderabad

Employment Type: Full-time

Open Positions: 1


🚀 Job Overview

We are seeking an experienced Lead DevOps Engineer with deep expertise in Kubernetes infrastructure design and implementation. This role requires an individual who can architect, build, and manage enterprise-grade Kubernetes clusters from the ground up. The position offers an exciting opportunity to lead infrastructure modernization initiatives and work with cutting-edge cloud-native technologies.


🎯 Key Responsibilities

Infrastructure Design & Implementation

  • Design and architect enterprise-grade Kubernetes clusters across multi-cloud environments (AWS/Azure/GCP)
  • Build production-ready Kubernetes infrastructure with high availability, scalability, and security best practices
  • Implement Infrastructure as Code using Terraform, Helm charts, and GitOps methodologies
  • Set up monitoring, logging, and observability solutions for Kubernetes workloads
  • Design disaster recovery and backup strategies for containerized applications

Leadership & Team Management

  • Lead a team of 3-4 DevOps engineers and provide technical mentorship
  • Drive best practices for containerization, orchestration, and cloud-native development
  • Collaborate with development teams to optimize application deployment strategies
  • Conduct technical reviews and ensure code quality standards across infrastructure components
  • Facilitate knowledge transfer and create comprehensive documentation

Operational Excellence

  • Manage CI/CD pipelines integrated with Kubernetes deployments
  • Implement security policies including RBAC, network policies, and container security scanning
  • Optimize cluster performance and resource utilization
  • Automate routine...

Skills

Python, Terraform, CI/CD, Kubernetes, Cloud Services, Docker, Git, Jenkins
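
Production-ready cluster work of the kind described above starts from declarative manifests. Below is a minimal, illustrative Kubernetes Deployment with the high-availability basics (replicas, resource limits, a liveness probe); every name, image, and value is a placeholder, not a real service:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: decision-api          # placeholder service name
  namespace: platform
spec:
  replicas: 3                 # spread load; survives single-pod failure
  selector:
    matchLabels:
      app: decision-api
  template:
    metadata:
      labels:
        app: decision-api
    spec:
      containers:
        - name: decision-api
          image: registry.example.com/decision-api:1.0.0  # placeholder image
          ports:
            - containerPort: 8080
          resources:
            requests: {cpu: 250m, memory: 256Mi}
            limits: {cpu: "1", memory: 512Mi}
          livenessProbe:
            httpGet: {path: /healthz, port: 8080}
            initialDelaySeconds: 10
```

In a GitOps setup, a manifest like this lives in version control and is reconciled into the cluster automatically, which is what makes the infrastructure reviewable and reproducible.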

Read more
Bengaluru (Bangalore)
2 - 3 yrs
₹9L - ₹10L / yr
FastAPI
skill iconMongoDB
skill iconDocker
skill iconPython

Job Title: Back-End Developer - Immediate Joiner - 2 Years - FastAPI

Location: Bengaluru, India 


Company Overview:

IAI Solution Pvt Ltd operates at the edge of applied AI, where foundational research meets real-world deployment. We craft intelligent systems that think in teams, adapt with context, and deliver actionable insight across domains.

We are seeking talented Back-End Developers to join our team, where you will play a key role in developing the infrastructure that supports our advanced AI technologies. If you are passionate about back-end development and have a keen interest in AI, this opportunity is perfect for you.


Position Summary: We are looking for an experienced Back-End Developer with expertise in Python, object-oriented programming, and FastAPI. The ideal candidate should also be proficient in working with databases such as SQL, NoSQL, MongoDB, Supabase, and Redis. While AI experience is not mandatory, preference will be given to candidates who have worked on AI projects or have a strong interest in the field.


Key Responsibilities:

  • Develop, test, and maintain scalable back-end systems that power our AI solutions.
  • Design and implement RESTful APIs to support front-end applications and third-party integrations.
  • Collaborate with cross-functional teams to define technical requirements and deliver high-quality products.
  • Optimize back-end performance, ensuring systems are fast, reliable, and scalable.
  • Manage and maintain various database systems, including SQL, NoSQL, MongoDB, Supabase, and Redis.
  • Troubleshoot and resolve complex technical challenges, leveraging strong problem-solving and critical thinking skills.
  • Handle multiple projects simultaneously, ensuring deadlines are met and quality is maintained.


Qualifications:

  • 2+ years in backend software development.
  • Strong proficiency in Python and object-oriented programming.
  • Extensive experience with the FastAPI framework.
  • In-depth knowledge of database technologies, including SQL, NoSQL, MongoDB, Supabase, and Redis.
  • Demonstrated ability to manage multiple projects in a fast-paced environment.
  • Strong analytical and problem-solving skills with a keen eye for detail.
  • Interest or experience in AI projects is highly desirable.


Perks & Benefits:

  • Competitive salary with performance-based bonuses.
  • Opportunity to work on cutting-edge AI projects within a talented and innovative team.
  • Access to professional development resources, including AI training programs.




Read more
NeoGenCode Technologies Pvt Ltd
Ritika Verma
Posted by Ritika Verma
Surat, Ahmedabad
8 - 12 yrs
₹15L - ₹22L / yr
skill iconFlutter
DART
skill iconPython
skill iconDjango
CI/CD
+5 more

Job Title: Tech Team Lead – Flutter

Department: Engineering

Location: Surat / Ahmedabad (On-site)

Employment Type: Full-time

About the Company

We are an AI-driven nutrition platform built to make healthier eating and living effortless. We translate ingredients and dishes into reliable nutrition profiles, then personalize recommendations based on goals, context, and preferences—across home cooking, daily routines, and restaurant ordering.

Our technology includes Flutter mobile apps, a Python/Django API, real-time data services, and machine learning models for tagging, ranking, and predictions. We’re building a delightful, safe, and scalable experience that helps users choose better meals and track progress—with strong standards for privacy, quality, and performance.

Role Summary

As the Tech Team Lead, you will lead a cross-functional engineering squad (Flutter, Backend, QA, and occasionally Data) to deliver secure, scalable product features on predictable timelines. This role combines hands-on architecture and coding with team leadership, agile execution, and quality ownership.

You’ll mentor engineers, improve engineering practices, and collaborate closely with Product and Design teams to ship impactful, user-centric experiences.

Key Responsibilities

  • Own squad roadmap delivery — break down epics into milestones, estimate efforts, prioritize, and ensure timely releases.
  • Provide technical leadership — define architecture, enforce coding standards, and guide trade-offs between performance, cost, and complexity.
  • Contribute 30–50% hands-on coding — review critical PRs, pair program, and work on key modules.
  • Ensure quality and reliability through robust testing (unit/integration/E2E), CI/CD gates, observability, and incident management.
  • Mentor and manage 4–8 engineers — conduct 1:1s, set goals, and drive performance and career growth.
  • Maintain security and compliance — apply secure development best practices (AuthN/AuthZ, PII handling, secret management).
  • Collaborate with Product/Design on scope and acceptance criteria; communicate status, risks, and dependencies effectively.
  • Keep technical documentation up to date — architecture decisions, system diagrams, runbooks, and release notes.

Required Qualifications

  • 6–9+ years of software engineering experience, with at least 2–3 years in team/squad leadership.
  • Strong backend expertise in Python, Django/DRF, REST API design, caching, queues, and WebSockets.
  • Working knowledge of Flutter/Dart architecture and state management to lead cross-stack initiatives.
  • Experience with CI/CD (GitHub Actions, Fastlane, Codemagic), containers (Docker), and AWS (EC2/ECS/ECR, S3, CloudFront, CloudWatch).
  • Strong testing culture using PyTest, Flutter Test, and observability tools (logs, metrics, tracing, alerts).
  • Excellent estimation, prioritization, and communication skills, with the ability to challenge scope when necessary.

Preferred (Nice to Have)

  • Experience releasing and managing mobile apps (Play Console, App Store Connect, TestFlight).
  • Exposure to Data/ML pipelines (batch vs. real-time inference, model versioning).
  • Familiarity with security practices — OWASP, IAM, secret management, VPCs.
  • Domain experience in health-tech or food-tech, or other privacy-sensitive industries.

Tech Stack You’ll Work With

  • Mobile/Web: Flutter (Dart), React (optional)
  • Backend: Python, Django/DRF, Celery, Redis, WebSockets
  • Data/Storage: PostgreSQL/MySQL, MongoDB
  • Infra/DevOps: AWS (EC2/ECS/ECR/S3/CloudFront/CloudWatch), Docker, Nginx, GitHub Actions, Fastlane, Codemagic
  • Quality/Observability: PyTest, Flutter Test, Playwright/Cypress, Sentry, Crashlytics, OpenTelemetry (optional)


Read more
CoffeeBeans

at CoffeeBeans

2 candid answers
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore), Pune
7 - 9 yrs
Upto ₹32L / yr (Varies)
skill iconPython
ETL
Data modeling
CI/CD
Databricks
+2 more

We are looking for experienced Data Engineers who can independently build, optimize, and manage scalable data pipelines and platforms.

In this role, you’ll:

  • Work closely with clients and internal teams to deliver robust data solutions powering analytics, AI/ML, and operational systems.
  • Mentor junior engineers and bring engineering discipline into our data engagements.

Key Responsibilities

  • Design, build, and optimize large-scale, distributed data pipelines for both batch and streaming use cases.
  • Implement scalable data models, warehouses/lakehouses, and data lakes to support analytics and decision-making.
  • Collaborate with stakeholders to translate business requirements into technical solutions.
  • Drive performance tuning, monitoring, and reliability of data pipelines.
  • Write clean, modular, production-ready code with proper documentation and testing.
  • Contribute to architectural discussions, tool evaluations, and platform setup.
  • Mentor junior engineers and participate in code/design reviews.

Must-Have Skills

  • Strong programming skills in Python and advanced SQL expertise.
  • Deep understanding of ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
  • Hands-on with distributed data processing frameworks (Apache Spark, Flink, or similar).
  • Experience with orchestration tools like Airflow (or similar).
  • Familiarity with CI/CD pipelines and Git.
  • Ability to debug, optimize, and scale data pipelines in production.
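The ETL/ELT fundamentals listed above can be sketched in plain Python with only the standard library. This is an illustrative toy pipeline, not a client implementation; the `orders` table and its columns are invented for the example:

```python
import csv
import io
import sqlite3

# Extract: parse raw CSV (an in-memory sample standing in for a real source file)
RAW = "order_id,amount\n1,250.0\n2,\n3,99.5\n"

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

# Transform: drop rows with missing amounts and cast types
def transform(rows):
    return [(int(r["order_id"]), float(r["amount"]))
            for r in rows if r["amount"]]

# Load: idempotent upsert into a warehouse table (SQLite stands in here)
def load(rows, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER PRIMARY KEY, amount REAL)"
    )
    conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
```

In production the same extract/transform/load stages would typically be wrapped as tasks in an orchestrator such as Airflow and run against Spark or a warehouse rather than SQLite.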

Good to Have

  • Experience with cloud platforms (AWS preferred; GCP/Azure also welcome).
  • Exposure to Databricks, dbt, or similar platforms.
  • Understanding of data governance, quality frameworks, and observability.
  • Certifications (e.g., AWS Data Analytics, Solutions Architect, or Databricks).

Other Expectations

  • Comfortable working in fast-paced, client-facing environments.
  • Strong analytical and problem-solving skills with attention to detail.
  • Ability to adapt across tools, stacks, and business domains.
  • Willingness to travel within India for short/medium-term client engagements, as needed.
Read more
CADFEM India
Agency job
via hirezyai by Aardra Suresh
Hyderabad
4 - 8 yrs
₹12L - ₹15L / yr
skill iconPython
skill iconReact.js
TypeScript
skill iconPostgreSQL
skill iconAngular (2+)
+2 more

Role Summary

We are seeking a Full-Stack Developer to build and secure features for our Therapy Planning Software (TPS), which integrates with RMS/RIS, EMR systems, devices (DICOM, Bluetooth, VR, robotics, FES), and supports ICD–ICF–ICHI coding. The role involves ~40% frontend and 60% backend development, with end-to-end responsibility for security across application layers.

Responsibilities

Frontend (40%)

  1. Build responsive, accessible UI in React + TypeScript (or Angular/Vue).
  2. Implement multilingual (i18n/l10n) and WCAG 2.1 accessibility standards.
  3. Develop offline-capable PWAs for home programs.
  4. Integrate REST/FHIR APIs for patient workflows, scheduling, and reporting.
  5. Support features like voice-to-text, video capture, and compression.

Backend (60%)

  1. Design and scale REST APIs using Python (FastAPI/Django).
  2. Build modules for EMR storage, assessments, therapy plans, and data logging.
  3. Implement HL7/FHIR endpoints and secure integrations with external EMRs.
  4. Handle file uploads (virus scanning, HD video compression, secure storage).
  5. Optimize PostgreSQL schemas and queries for performance.
  6. Implement RBAC, MFA, PDPA compliance, edit locks, and audit trails.

Security Layer (Ownership)

  1. Identity & Access: OAuth2/OIDC, JWT, MFA, SSO.
  2. Data Protection: TLS, AES-256 at rest, field-level encryption, immutable audit logs.
  3. Compliance: PDPA, HIPAA principles, MDA requirements.
  4. DevSecOps: Secure coding (OWASP ASVS), dependency scanning, secrets management.
  5. Monitoring: Logging/metrics (ELK/Prometheus), anomaly detection, DR/BCP preparedness.
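As a rough illustration of the token-based access control named above, here is a minimal HMAC-signed token check using only the standard library. It is a simplified stand-in for a real JWT library, not production code; the secret and the claim names are made up for the example:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # illustrative only; real deployments use managed secrets

def sign(claims):
    """Produce a compact payload.signature token (JWT-like, simplified)."""
    payload = base64.urlsafe_b64encode(json.dumps(claims, sort_keys=True).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify(token):
    """Return the claims if the signature checks out, else None."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged token
    return json.loads(base64.urlsafe_b64decode(payload))

token = sign({"sub": "therapist-42", "role": "clinician"})
claims = verify(token)
# Flipping one character of the signature must invalidate the token
tampered = verify(token[:-1] + ("0" if token[-1] != "0" else "1"))
```

A real deployment would use a vetted library (e.g. PyJWT) with expiry claims, key rotation, and the RBAC checks layered on top of the verified `role` claim.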

Requirements

  • Strong skills in Python (FastAPI/Django) and React + TypeScript.
  • Experience with HL7/FHIR, EMR data, and REST APIs.
  • Knowledge of OAuth2/JWT authentication, RBAC, audit logging.
  • Proficiency with PostgreSQL and database optimization.
  • Cloud deployment (AWS/Azure) and containerization (Docker/K8s) a plus.

Added Advantage

  • Familiarity with ICD, ICF, ICHI coding systems or medical diagnosis workflows.

Success Metrics

  • Deliver secure end-to-end features with clinical workflow integration.
  • Pass OWASP/ASVS L2 security baseline.
  • Establish full audit trail and role-based access across at least one clinical workflow.


Read more
Albert Invent

at Albert Invent

4 candid answers
3 recruiters
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
9 - 13 yrs
Upto ₹50L / yr (Varies)
Search Engine Optimization (SEO)
skill iconElastic Search
skill iconPython

To lead the design, development, and optimization of high-scale search and discovery systems, leveraging deep expertise in OpenSearch. The Search Staff Engineer will enhance search relevance, query performance, and indexing efficiency by utilizing OpenSearch’s full-text, vector search, and analytics capabilities. This role focuses on building real-time search pipelines, implementing advanced ranking models, and architecting distributed indexing solutions to deliver a high-performance, scalable, and intelligent search experience.


Responsibilities:

  • Architect, develop, and maintain a scalable OpenSearch-based search infrastructure for high-traffic applications.
  • Optimize indexing strategies, sharding, replication, and query execution to improve search performance and reliability.
  • Implement cross-cluster search, multi-tenant search solutions, and real-time search capabilities.
  • Ensure efficient log storage, retention policies, and lifecycle management in OpenSearch.
  • Monitor and troubleshoot performance bottlenecks, ensuring high availability and resilience.
  • Design and implement real-time and batch indexing pipelines for structured and unstructured data.
  • Optimize schema design, field mappings, and tokenization strategies for improved search performance.
  • Manage custom analyzers, synonyms, stopwords, and stemming filters for multilingual search.
  • Ensure search infrastructure adheres to security best practices, including encryption, access control, and audit logging.
  • Optimize search for low latency, high throughput, and cost efficiency.
  • Collaborate cross-functionally with engineering, product, and operations teams to ensure seamless platform delivery.
  • Define and communicate a strategic roadmap for Search initiatives aligned with business goals.
  • Work closely with stakeholders to understand database requirements and provide technical solutions.
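For flavor, a relevance-tuned request of the kind this role owns might look like the following OpenSearch query DSL body. The index name, fields, and boosts are invented for illustration; in practice the body is sent to the `_search` endpoint or via an OpenSearch SDK:

```python
# Hypothetical relevance-tuned search request body (OpenSearch query DSL).
# Field boosts, the filter, and the aggregation are illustrative assumptions.
query_body = {
    "query": {
        "bool": {
            "must": [
                {"multi_match": {
                    "query": "wireless headphones",
                    "fields": ["title^3", "description"],  # weight title matches 3x
                }}
            ],
            "filter": [{"term": {"in_stock": True}}],  # filters don't affect scoring
        }
    },
    "size": 10,
    "aggs": {"by_brand": {"terms": {"field": "brand.keyword"}}},
}

# With the opensearch-py client this would be sent roughly as:
#   client.search(index="products", body=query_body)
```

Relevance tuning then iterates on the boosts, analyzers, and ranking signals while measuring recall and latency against production traffic.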


Requirements:

  • 8+ years of experience in search engineering, with at least 3 years of deep experience in OpenSearch.
  • Strong expertise in search indexing, relevance tuning, ranking algorithms, and query parsing.
  • Hands-on experience with OpenSearch configurations, APIs, shards, replicas, and cluster scaling.
  • Strong programming skills in Node.js and Python, and experience with OpenSearch SDKs.
  • Proficiency in REST APIs, OpenSearch DSL queries, and aggregation frameworks.
  • Knowledge of observability, logging, and monitoring tools (Prometheus, OpenTelemetry, Grafana).
  • Experience managing OpenSearch clusters on AWS OpenSearch, containers, or self-hosted environments.
  • Strong understanding of security best practices, role-based access control (RBAC), encryption, and IAM.
  • Familiarity with multi-region, distributed search architectures.
  • Strong analytical and debugging skills, with a proactive approach to identifying and mitigating risks.
  • Exceptional communication skills, with the ability to influence and drive consensus among stakeholders.

Read more
Albert Invent

at Albert Invent

4 candid answers
3 recruiters
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
4 - 6 yrs
Upto ₹30L / yr (Varies)
skill iconPython
AWS Lambda
Amazon Redshift
Snowflake schema
SQL

To design, build, and optimize scalable data infrastructure and pipelines that enable efficient data collection, transformation, and analysis across the organization. The Senior Data Engineer will play a key role in driving data architecture decisions, ensuring data quality and availability, and empowering analytics, product, and engineering teams with reliable, well-structured data to support business growth and strategic decision-making.


Responsibilities:

  • Develop and maintain SQL and NoSQL databases, ensuring high performance, scalability, and reliability.
  • Collaborate with the API team and Data Science team to build robust data pipelines and automations.
  • Work closely with stakeholders to understand database requirements and provide technical solutions.
  • Optimize database queries and performance tuning to enhance overall system efficiency.
  • Implement and maintain data security measures, including access controls and encryption.
  • Monitor database systems and troubleshoot issues proactively to ensure uninterrupted service.
  • Develop and enforce data quality standards and processes to maintain data integrity.
  • Create and maintain documentation for database architecture, processes, and procedures.
  • Stay updated with the latest database technologies and best practices to drive continuous improvement.
  • Write and optimize SQL queries and stored procedures, fine-tuning complex queries for performance and efficiency.
  • Monitor database performance and health using visualization tools such as Grafana.


Requirements:

  • 4+ years of experience in data engineering, with a focus on large-scale data systems.
  • Proven experience designing data models and access patterns across SQL and NoSQL ecosystems.
  • Hands-on experience with technologies like PostgreSQL, DynamoDB, S3, GraphQL, or vector databases.
  • Proficient in SQL stored procedures, with extensive expertise in MySQL schema design, query optimization, and resolvers, along with hands-on experience building and maintaining data warehouses.
  • Strong programming skills in Python or JavaScript, with the ability to write efficient, maintainable code.
  • Familiarity with distributed systems, data partitioning, and consistency models.
  • Familiarity with observability stacks (Prometheus, Grafana, OpenTelemetry) and debugging production bottlenecks.
  • Deep understanding of cloud infrastructure (preferably AWS), including networking, IAM, and cost optimization.
  • Prior experience building multi-tenant systems with strict performance and isolation guarantees.
  • Excellent communication and collaboration skills to influence cross-functional technical decisions.


Read more
Koolioai
Swarna M
Posted by Swarna M
Chennai
0 - 1 yrs
₹4L - ₹6L / yr
Software Testing (QA)
Test Automation (QA)
Appium
Selenium
skill iconPython

About koolio.ai

Website: www.koolio.ai

koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond—easily. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.

About the Full-Time Position

We are looking for a Junior QA Engineer (Fresher) to join our team on a full-time, hybrid basis. This is an exciting opportunity for a motivated fresher who is eager to learn and grow in the field of backend testing and quality assurance. You will work closely with senior engineers to ensure the reliability, performance, and scalability of koolio.ai’s backend services. This role is perfect for recent graduates who want to kickstart their career in a dynamic, innovative environment.

Key Responsibilities:

  • Assist in the design and execution of test cases for backend services, APIs, and databases
  • Perform manual and automated testing to validate the functionality and performance of backend systems
  • Help identify, log, and track bugs, working closely with developers for issue resolution
  • Contribute to developing automated test scripts to ensure continuous integration and deployment
  • Document test cases, results, and issues in a clear and organized manner
  • Continuously learn and apply testing methodologies and tools under the guidance of senior engineers
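A first automated API test often looks like the pytest-style sketch below. The endpoint, payload, and names are hypothetical; in a real suite the fake client would be replaced by HTTP calls (e.g. via `requests`) against a staging API:

```python
# A tiny fake API client stands in for real HTTP calls so the example is
# self-contained; in practice you'd call a staging endpoint instead.
def get_user(user_id):
    fake_db = {1: {"id": 1, "name": "Asha", "active": True}}
    if user_id not in fake_db:
        return 404, {"error": "not found"}
    return 200, fake_db[user_id]

# pytest-style test functions: plain asserts, one behavior per test.
def test_get_existing_user():
    status, body = get_user(1)
    assert status == 200
    assert body["name"] == "Asha"

def test_get_missing_user_returns_404():
    status, _ = get_user(999)
    assert status == 404

# pytest would discover and run these automatically; calling them
# directly also works and fails loudly on a regression.
test_get_existing_user()
test_get_missing_user_returns_404()
```

Each test documents one expected behavior, which is also how bugs get logged and tracked back to a failing case.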

Requirements and Skills:

  • Education: Degree in Computer Science or a related field
  • Work Experience: No prior work experience required; internships or academic projects related to software testing or backend development are a plus
  • Technical Skills:
  • Basic understanding of backend systems and APIs
  • Familiarity with SQL for basic database testing
  • Exposure to any programming or scripting language (e.g., Python, JavaScript, Java)
  • Interest in learning test automation tools and frameworks such as Selenium, JUnit, or Pytest
  • Familiarity with basic version control systems (e.g., Git)
  • Soft Skills:
  • Eagerness to learn and apply new technologies in a fast-paced environment
  • Strong analytical and problem-solving skills
  • Excellent attention to detail and a proactive mindset
  • Ability to communicate effectively and work in a collaborative, remote team
  • Other Skills
  • Familiarity with API testing tools (e.g., Postman) or automation tools is a bonus but not mandatory
  • Basic knowledge of testing methodologies and the software development life cycle is helpful



Compensation and Benefits:

  • Total Yearly Compensation: ₹4.5-6 LPA based on skills and experience
  • Health Insurance: Comprehensive health coverage provided by the company

Why Join Us?

  • Be a part of a passionate and visionary team at the forefront of audio content creation
  • Work on an exciting, evolving product that is reshaping the way audio content is created and consumed
  • Thrive in a fast-moving, self-learning startup environment that values innovation, adaptability, and continuous improvement
  • Enjoy the flexibility of a full-time hybrid position with opportunities to grow professionally and expand your skills
  • Collaborate with talented professionals from around the world, contributing to a product that has a real-world impact


Read more
Gate6
Indore
4 - 5 yrs
₹10L - ₹18L / yr
skill iconPython
skill iconDjango
skill iconFlask
FastAPI
RESTful APIs
+6 more

About Gate6

At Gate6, we’re more than a tech company — we’re a team that grows together. Many of our people have been with us for over 10, even 20 years — a rare legacy in the fast-changing digital world. Why? Because we believe in challenging work, creative freedom, and building real impact through innovation.


With offices in Scottsdale, AZ and Indore, India, we craft cutting-edge digital solutions while nurturing a culture where talent thrives, ideas matter, and careers last. If you’re ready to grow with a team that’s in it for the long run — Gate6 is your place.


About the Role

We are hiring an experienced Senior Python Developer to design and build high-quality web applications and APIs. You will play a key role in system architecture & code quality. The ideal candidate has hands-on experience with backend frameworks, frontend integration, and cloud deployments.


Key Responsibilities

  • Design and implement end-to-end web applications using Python (Django, Flask, or FastAPI).
  • Develop and consume RESTful APIs and ensure security (JWT, OAuth2).
  • Integrate with third-party services such as payment gateways or CRMs.
  • Work with relational and NoSQL databases (MySQL, PostgreSQL, MongoDB).
  • Optimize backend performance and database queries.


Required Skills

  • Strong knowledge of Python frameworks, JavaScript (Angular), and SQL.
  • Hands-on experience with API development and version control (Git).
  • Understanding of Microservices architecture.
  • Familiarity with AWS cloud.


Read more
Verix

at Verix

5 candid answers
1 video
Eman Khan
Posted by Eman Khan
Remote only
6 - 10 yrs
₹15L - ₹34L / yr
Large Language Models (LLM)
SEO analytics
Ads analytics
Search
SQL
+8 more

What is OptimizeGEO?

OptimizeGEO is Verix’s flagship platform that helps brands stay visible, cited, and trusted in AI-powered answers. Unlike traditional SEO that optimizes for keywords and rankings, OptimizeGEO operationalizes AEO/GEO principles so brands are discoverable across generative systems (ChatGPT, Gemini, Claude, Perplexity) and answer engines (featured snippets, voice assistants, AI answer boxes).


Role Overview

We are building the next generation of measurement systems for Generative Engine Optimization (AEO/GEO)—defining how “quality,” “trust,” and “impact” are quantified in an AI-first discovery landscape. As Measurement Lead, you will architect the measurement strategy, frameworks, and data models that inform product development, customer outcomes, and go-to-market effectiveness. You’ll combine experimentation, analytics, and generative AI evaluation to create a durable, decision-grade measurement stack.


Key Responsibilities

  • Define Measurement Frameworks: Operationalize KPIs for AEO performance - model visibility/coverage, ranking/recall, trust & attribution signals, and downstream engagement/ROI.
  • Own the Measurement Stack: Partner with Data Engineering to build systems for A/B and multivariate testing, offline/online model evaluation, and longitudinal tracking of AEO metrics.
  • Model & Content Evaluation: Establish benchmarks and scoring systems for generative output quality, factuality, and attribution, leveraging both human-in-the-loop and automated evaluation.
  • Cross-Functional Alignment: Drive shared definitions and measurement standards across Product, Data Science, Customer Success, and GTM.
  • Insight to Action: Translate raw data into clear recommendations that improve product performance and customer ROI; create exec-ready narratives that tie measurement to business outcomes.
  • Thought Leadership: Be the internal SME on measurement in the generative era; evangelize best practices and influence roadmaps through storytelling with data.
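The A/B and multivariate testing responsibility above rests on standard significance checks; a minimal two-proportion z-test can be sketched with the standard library (the conversion numbers are made up for illustration):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Illustrative numbers: variant B converts 12% vs. control's 10%, 5k users each.
z = two_proportion_z(500, 5000, 600, 5000)  # |z| > 1.96 => significant at ~95%
```

A production experimentation stack layers sequential testing, guardrail metrics, and exposure logging on top of this basic comparison.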


Qualifications (Minimum)

  • 6–10 years in analytics, data science, experimentation, or measurement - ideally in search, ads, LLM evaluation, or content optimization.
  • Proven experience designing metric frameworks and experimentation systems for complex or multi-sided products.
  • Deep understanding of AI/LLM evaluation and/or SEO/ads analytics; familiarity with offline vs. online metrics and counterfactuals.
  • Advanced proficiency in SQL and Python/R; hands-on with tools such as Amplitude, Mixpanel, Looker/Looker Studio, dbt, BigQuery/Snowflake, and experiment platforms.
  • Demonstrated ability to connect analytical rigor to strategic decisions; strong communication and stakeholder influence skills.


Preferred Experience

  • Background in search quality, ads measurement, or model eval (e.g., BLEU, ROUGE, BERTScore, factuality/trustworthiness).
  • Experience with human evaluation ops, prompt and data set design, and rubric development for LLMs.
  • Prior experience in startup or new product incubation environments.


What Success Looks Like (Outcomes)

  • Launch the industry’s first AEO Quality Score and reference measurement model.
  • Deliver visibility frameworks that tie AI discoverability → content optimization → commercial ROI.
  • Establish a robust experimentation and evaluation pipeline that accelerates product velocity and elevates customer outcomes.
  • Be recognized as the go-to expert on generative measurement internally and externally.


Ways of Working / Tooling (indicative)

SQL, Python/R, Experiment platforms, Amplitude/Mixpanel, Looker/Looker Studio, BigQuery/Snowflake, dbt, Airflow, prompt-evaluation tooling, annotation platforms, dashboards for exec reporting.


Equal Opportunity

Virtualness is an equal opportunity employer. We celebrate diversity and are committed to an inclusive environment for all employees.

Read more
NeoGenCode Technologies Pvt Ltd
Surat, Ahmedabad
6 - 10 yrs
₹10L - ₹24L / yr
skill iconFlutter
skill iconPython
skill iconDjango
Celery
skill iconRedis
+9 more

Job Title : Technical Team Lead – Engineering Delivery

Experience : 6+ Years

Level : Senior Individual Contributor (Level 1)

Location : Surat / Ahmedabad (On-site)

Employment Type : Full-time


About the Role :

We’re looking for an experienced Technical Team Lead – Engineering Delivery to guide a cross-functional engineering squad comprising Flutter, Backend, QA, and occasionally Data engineers. This position combines hands-on software development with leadership, architecture, and Agile delivery ownership.


Required Qualifications :

  • 6 to 9+ years of software engineering experience, with at least 2 to 3 years in a team or squad leadership role.
  • Strong backend development expertise in Python, Django/DRF, REST API design, caching, queues, and WebSockets.
  • Working knowledge of Flutter/Dart architecture and patterns (state management, navigation).
  • Experience with CI/CD (GitHub Actions, Fastlane/Codemagic), containers (Docker), and AWS (EC2/ECS/ECR, S3, CloudFront, CloudWatch).
  • Solid testing and observability practices using PyTest, Flutter tests, logs, metrics, traces, and alerts.
  • Excellent skills in estimation, prioritization, and communication.

Preferred Skills (Nice to Have) :

  • Experience releasing and managing mobile apps (Play Console, App Store Connect, TestFlight).
  • Familiarity with Data/ML delivery pipelines (batch vs real-time inference, model rollout/versioning).
  • Understanding of security frameworks such as OWASP, secret management, IAM, and VPC basics.
  • Prior experience in health-tech, food-tech, or other privacy-sensitive domains.

Tech Stack You’ll Work With :

  • Mobile/Web : Flutter (Dart), React (optional)
  • Backend : Python, Django/DRF, Celery, Redis, WebSockets
  • Data/Storage : PostgreSQL/MySQL, MongoDB
  • Infra/DevOps : AWS (EC2/ECS/ECR/S3/CloudFront/CloudWatch), Docker, Nginx, GitHub Actions, Fastlane/Codemagic
  • Quality/Observability : PyTest, Flutter Test, Playwright/Cypress, Sentry/Crashlytics, OpenTelemetry
Read more
Apprication pvt ltd

at Apprication pvt ltd

1 recruiter
Adam patel
Posted by Adam patel
Mumbai
2 - 3 yrs
₹6L - ₹12L / yr
skill iconPython
Natural Language Processing (NLP)
LangChain

Apprication Pvt Ltd is hiring a Senior Data Analyst with a minimum of 2 years of full-time experience (excluding internships).

  • Lead and mentor Data Science team members, ensuring knowledge sharing and growth through structured guidance.
  • Architect and deploy end-to-end AI/ML solutions, including LLM applications, RAG systems, and multi-agent workflows.
  • Collaborate with cross-functional teams (engineering, product, domain experts) to align AI solutions with business goals.
  • Establish MLOps practices, CI/CD pipelines, and standardized evaluation frameworks for production-ready AI.
  • Drive innovation by researching, prototyping, and implementing state-of-the-art techniques in Generative AI and Machine Learning.


Read more
Product Based AI Company

Product Based AI Company

Agency job
via Recruiting Bond by Pavan Kumar
Bengaluru (Bangalore)
3 - 5 yrs
₹20L - ₹45L / yr
skill iconPython
FastAPI
Web Realtime Communication (WebRTC)
WebSocket
FFmpeg
+23 more

Who we are: My AI Client is building the foundational platform for the "agentic economy," moving beyond simple chatbots to create an ecosystem for autonomous AI agents. They aim to provide tools for developers to launch, manage, and monetize AI agents as "digital coworkers."


The Challenge

The current AI stack is fragmented, leading to issues with multimodal data, silent webhook failures, unpredictable token usage, and nascent agent-to-agent collaboration. My AI Client is building a unified, robust backend to resolve these issues for the developer community.


Your Mission

As a foundational member of the backend team, you will architect core systems, focusing on:


  • Agent Nervous System: Designing agent-to-agent messaging, lifecycle management, and high-concurrency, low-latency communication.
  • Multimodal Chaos Taming: Engineering systems to process and understand real-time images, audio, video, and text.
  • Bulletproof Systems: Developing secure, observable webhook systems with robust billing, metering, and real-time payment pipelines.


What You'll Bring

My AI Client seeks an experienced engineer comfortable with complex systems and ambiguity.


Core Experience:

  • Typically 3 to 5 years of experience in backend engineering roles.
  • Expertise in Python, especially with async frameworks like FastAPI.
  • Strong command of Docker and cloud deployment (AWS, Cloud Run, or similar).
  • Proven experience designing and building microservice or agent-based architectures.


Specialized Experience (Ideal):


  • Real-Time Systems: Experience with real-time media transmission like WebRTC, WebSockets and ways to process them.
  • Scalable Systems: Experience in building scalable, fault-tolerant systems with a strong understanding of observability, monitoring, and alerting best practices.
  • Reliable Webhooks: Knowledge of scalable webhook infrastructure with retry logic, backoffs, and security.
  • Data Processing: Experience with multimodal data (e.g., OCR, audio transcription, video chunking with FFmpeg/OpenCV).
  • Payments & Metering: Familiarity with usage-based billing systems or token-based ledgers.


Your Impact

The systems designed by this role will form the foundation for:
  • Thousands of AI agents for major partners across chat, video, and APIs.
  • A new creator economy enabling developers to earn revenue through agents.
  • The overall speed, security, and scalability of my client’s AI platform.


Why Join Us?

  • Opportunity to solve hard problems with clean, scalable code.
  • Small, fast-paced team with high ownership and zero micromanagement.
  • Belief in platform engineering as a craft and care for developer experience.
  • Conviction that AI agents are the future, and a desire to build their powering platform.
  • Dynamic, collaborative in-office work environment in Bengaluru in a Hybrid setup (weekly 2 days from office)
  • Meaningful equity in a growing, well-backed company.
  • Direct work with founders and engineers from top AI companies.
  • A real voice in architectural and product decisions.
  • Opportunity to solve cutting-edge problems with no legacy code.


Ready to Build the Future?

My AI Client is building the core platform for the next software paradigm. Interested candidates are encouraged to apply with their GitHub, resume, or anything that showcases their thinking.



Read more
SCA Technologies

at SCA Technologies

4 candid answers
1 video
Reshika Mendiratta
Posted by Reshika Mendiratta
Gurugram
4yrs+
Upto ₹40L / yr (Varies)
skill iconJava
skill iconSpring Boot
skill iconJavascript
skill iconNodeJS (Node.js)
skill iconPython
+10 more

Job Responsibilities:

  • Develop features across multiple sub-modules within our applications, including collaboration in requirements definition, prototyping, design, coding, testing, debugging, effort estimation, and continuous quality improvement of the design & code and deployment.
  • Design and implement new features, provide fixes/workarounds to bugs, and innovate in alternate solutions.
  • Provide quick solutions to problems and take a feature/component through the entire life cycle, improving space–time performance and usability/reliability.
  • Design, implement, and adhere to the overall architecture to fulfill the functional requirements through software components.
  • Take accountability for the successful delivery of functionality or modules contributing to the overall product objective.
  • Create consistent design specifications using flowcharts, class diagrams, Entity Relationship Diagrams (ERDs), and other visual techniques to convey the development approach to the lead developer and other stakeholders.
  • Conduct source code walkthroughs, refactoring, and ensure adherence to documentation standards.
  • Support troubleshooting efforts in production systems and fulfill support requests from developers.

Experience and Skills:

  • Bachelor’s degree in Computer Science or similar technical discipline required; Master’s degree preferred.
  • Strong experience as a software engineer with demonstrated success developing a variety of software systems and increasing responsibility in analysis, design, implementation, and deployment tasks with a reputed software product company.
  • Hands-on experience in product development using Java 8, J2EE, Spring Boot, Spring MVC, JSF, REST API, JSON, SQL Server, PostgreSQL, Oracle, Redis Cache, Amber, JavaScript/jQuery.
  • Good to have experience in Handlebars.js, Flyway, PrimeFaces.
  • Experience developing data-driven applications utilizing major relational database engines (SQL Server, Oracle, DB2) including writing complex queries, stored procedures, and performing query optimization.
  • Experience building web-based software systems with N-tier architectures, dynamic content, scalable solutions, and complex security implementations.
  • Strong understanding of Design Patterns, system architecture, and configurations for enterprise web applications.
  • Exposure to development environments such as Eclipse, GitHub/Bitbucket.
  • Comfortable with source code management concepts (version control).
  • Self-motivated, energetic, fast learner with excellent communication skills (interaction with remote teams required).
  • Experience with Agile software development is a plus.

Travel: Based on business needs.

Location: Gurgaon

Read more
Palcode.ai

Team Palcode
Posted by Team Palcode
Remote only
2 - 4 yrs
₹15 - ₹20 / mo
skill iconPython
skill iconFlask
FastAPI

We are building cutting-edge AI products in the Construction Tech space – transforming how General Contractors, Estimators, and Project Managers manage bids, RFIs, and scope gaps. Our platform integrates AI Agents, voice automation, and vision systems to reduce hours of manual work and unlock new efficiencies for construction teams.

Joining us means you will be part of a lean, high-impact team working on production-ready AI workflows that touch real projects in the field.


Role Overview

We are seeking a part-time consultant (10–15 hours/week) with strong backend development skills in Python (backend APIs) and ReactJS (frontend UI). You will work closely with the founding team to design, develop, and deploy features across the stack, directly contributing to the platform's AI-driven modules.


Key Responsibilities

  • Build and maintain modular Python APIs (FastAPI/Flask) with clean architecture.
  • You must have 2–4 years of hands-on backend Python experience (excluding training and internships).
  • We are looking for backend developers ONLY; Python-based Data Science and Analyst roles are not a match.
  • Integrate AI services (OpenAI, LangChain, OCR/vision libraries) into production flows.
  • Work with AWS services (Lambda, S3, RDS/Postgres, CloudWatch) for deployment.
  • Collaborate with founders to convert fuzzy product ideas into technical deliverables.
  • Ensure production readiness: logging, CI/CD pipelines, error handling, and test coverage.
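The first responsibility above — modular Python APIs with clean architecture — can be sketched as a layered design. The example below is framework-agnostic so the same service layer could sit behind FastAPI or Flask routes; the names (`Bid`, `BidRepository`, `BidService`) are purely illustrative, not part of Palcode's actual codebase:

```python
from __future__ import annotations
from dataclasses import dataclass

# Hypothetical sketch of a layered ("clean") architecture: storage,
# business rules, and the web framework are kept in separate layers.

@dataclass
class Bid:
    id: int
    contractor: str
    amount: float

class BidRepository:
    """Data-access layer: the only place that knows about storage."""
    def __init__(self) -> None:
        self._bids: dict[int, Bid] = {}

    def save(self, bid: Bid) -> None:
        self._bids[bid.id] = bid

    def get(self, bid_id: int) -> Bid | None:
        return self._bids.get(bid_id)

class BidService:
    """Business-logic layer: validation and rules, no HTTP or SQL details."""
    def __init__(self, repo: BidRepository) -> None:
        self._repo = repo

    def submit_bid(self, bid_id: int, contractor: str, amount: float) -> Bid:
        if amount <= 0:
            raise ValueError("bid amount must be positive")
        bid = Bid(bid_id, contractor, amount)
        self._repo.save(bid)
        return bid

# A FastAPI/Flask route handler would only parse the request and delegate
# to the service, keeping the framework at the edge of the system.
service = BidService(BidRepository())
created = service.submit_bid(1, "Acme Constructions", 250_000.0)
print(created.contractor)  # Acme Constructions
```

Keeping the framework at the edge like this is what makes the APIs "modular": the service and repository can be unit-tested, swapped to a real Postgres-backed repository, or reused behind an AWS Lambda handler without touching business logic.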


Part-Time Eligibility Check -

  • This is a fixed monthly-paid role, NOT an hourly one.
  • We are a funded startup; for compliance reasons, payment is generally prorated to your current monthly drawings (no negotiation on this).
  • You should have 2–3 hours per day to code.
  • You should be proficient in AI-assisted coding; we ship code fast.
  • You need to know how to use tools like ChatGPT to generate solutions (not code) and Cursor to build those solutions.
  • You will be assigned an independent task every week; we run two-week sprints.
  • By applying, you confirm that you have read the requirements and are okay to proceed (this helps us remove spam applications).

Job ID: 319083




Read more