50+ Python Jobs in India
Job Description:
We are looking for a skilled Backend Developer with 2–5 years of experience in software development, specializing in Python and/or Golang. If you have strong programming skills, enjoy solving problems, and want to work on secure and scalable systems, we'd love to hear from you!
Location - Pune, Baner.
Interview Rounds - In Office
Key Responsibilities:
Design, build, and maintain efficient, reusable, and reliable backend services using Python and/or Golang
Develop and maintain clean and scalable code following best practices
Apply Object-Oriented Programming (OOP) concepts in real-world development
Collaborate with front-end developers, QA, and other team members to deliver high-quality features
Debug, optimize, and improve existing systems and codebase
Participate in code reviews and team discussions
Work in an Agile/Scrum development environment
Required Skills:
Strong experience in Python or Golang (working knowledge of both is a plus)
Good understanding of OOP principles
Familiarity with RESTful APIs and back-end frameworks
Experience with databases (SQL or NoSQL)
Excellent problem-solving and debugging skills
Strong communication and teamwork abilities
Good to Have:
Prior experience in the security industry
Familiarity with cloud platforms like AWS, Azure, or GCP
Knowledge of Docker, Kubernetes, or CI/CD tools
Candidates must know the M365 collaboration environment: SharePoint Online, MS Teams, Exchange Online, Entra, and Purview. We need a developer with a strong understanding of data structures and problem-solving, plus SQL, PowerShell, MS Teams app development, Python, Visual Basic, C#, JavaScript, Java, HTML, PHP, and C.
A strong understanding of the development lifecycle is a must, along with debugging skills, time management, business acumen, a positive attitude, and openness to continual growth.
The ability to code appropriate solutions will be tested in the interview.
Knowledge of a wide variety of Generative AI models
Conceptual understanding of how large language models work
Proficiency in coding languages for data manipulation (e.g., SQL) and machine learning & AI development (e.g., Python)
Experience with dashboarding tools such as Power BI and Tableau (beneficial but not essential)
We're seeking an AI/ML Engineer to join our team.
As an AI/ML Engineer, you will be responsible for designing, developing, and implementing artificial intelligence (AI) and machine learning (ML) solutions to solve real-world business problems. You will work closely with cross-functional teams, including data scientists, software engineers, and product managers, to deploy and integrate applied AI/ML solutions into the products being built at NonStop. Your role will involve researching cutting-edge algorithms and data processing techniques, and implementing scalable solutions to drive innovation and improve the overall user experience.
Responsibilities
- Applied AI/ML Engineering: Build engineering solutions on top of the AI/ML tooling available in the industry today (e.g., engineering APIs around OpenAI)
- AI/ML Model Development: Design, develop, and implement machine learning models and algorithms that address specific business challenges, such as natural language processing, computer vision, recommendation systems, anomaly detection, etc.
- Data Preprocessing and Feature Engineering: Cleanse, preprocess, and transform raw data into suitable formats for training and testing AI/ML models. Perform feature engineering to extract relevant features from the data
- Model Training and Evaluation: Train and validate AI/ML models using diverse datasets to achieve optimal performance. Employ appropriate evaluation metrics to assess model accuracy, precision, recall, and other relevant metrics (a minimal evaluation sketch follows this list)
- Data Visualization: Create clear and insightful data visualizations to aid in understanding data patterns, model behavior, and performance metrics
- Deployment and Integration: Collaborate with software engineers and DevOps teams to deploy AI/ML models into production environments and integrate them into various applications and systems
- Data Security and Privacy: Ensure compliance with data privacy regulations and implement security measures to protect sensitive information used in AI/ML processes
- Continuous Learning: Stay updated with the latest advancements in AI/ML research, tools, and technologies, and apply them to improve existing models and develop novel solutions
- Documentation: Maintain detailed documentation of the AI/ML development process, including code, models, algorithms, and methodologies for easy understanding and future reference
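By way of illustration, a minimal training-and-evaluation step of the kind described above might look like the sketch below. scikit-learn, the synthetic dataset, and the model choice are placeholder assumptions, not this team's actual stack:

```python
# Hypothetical illustration: train a classifier and report the metrics
# named above (accuracy, precision, recall) on a held-out split.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real business dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)
preds = model.predict(X_test)

print(f"accuracy:  {accuracy_score(y_test, preds):.3f}")
print(f"precision: {precision_score(y_test, preds):.3f}")
print(f"recall:    {recall_score(y_test, preds):.3f}")
```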
Requirements
- Bachelor's, Master's or PhD in Computer Science, Data Science, Machine Learning, or a related field. Advanced degrees or certifications in AI/ML are a plus
- Proven experience as an AI/ML Engineer, Data Scientist, or related role, ideally with a strong portfolio of AI/ML projects
- Proficiency in programming languages commonly used for AI/ML. Preferably Python
- Familiarity with popular AI/ML libraries and frameworks, such as TensorFlow, PyTorch, scikit-learn, etc.
- Familiarity with popular AI/ML models such as GPT-3, GPT-4, Llama 2, BERT, etc.
- Strong understanding of machine learning algorithms, statistics, and data structures
- Experience with data preprocessing, data wrangling, and feature engineering
- Knowledge of deep learning architectures, neural networks, and transfer learning
- Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud) for scalable AI/ML deployment
- Solid understanding of software engineering principles and best practices for writing maintainable and scalable code
- Excellent analytical and problem-solving skills, with the ability to think critically and propose innovative solutions
- Effective communication skills to collaborate with cross-functional teams and present complex technical concepts to non-technical stakeholders
Description
We are seeking a skilled and detail-oriented Software Developer to automate our internal workflows and develop internal tools used by our development team.
Our practices include unit testing, continuous integration (CI), continuous deployment (CD), and DevOps.
We have codebases in Go, Java, Python, Vue.js, and Bash, and we support the development team that develops C code.
You should enjoy challenges, exploring new fields, and finding solutions to problems.
You will be responsible for coordinating, automating, and validating internal workflows and ensuring operational stability and system reliability.
Requirements
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 2+ years in professional software development
- Solid understanding of software design principles and patterns such as SOLID and the GoF patterns.
- Experience automating deployments for different kinds of applications.
- Strong understanding of Git version control, merge/rebase strategies, tagging.
- Familiarity with containerization (Docker) and deployment orchestration (e.g., docker compose).
- Solid scripting experience (bash, or similar).
- Understanding of observability, monitoring, and probing tooling (e.g., Prometheus, Grafana, blackbox exporter).
Preferred Skills
- Experience in SRE
- Proficiency in CI/CD tooling (e.g., GitHub Actions, Jenkins, GitLab).
- Familiarity with build tools like Make, CMake, or similar.
- Exposure to artifact management systems (e.g., aptly, Artifactory, Nexus).
- Experience deploying to Linux production systems with service uptime guarantees.
Responsibilities
- Develop new services needed by the SRE, Field, or Development teams, adopting unit testing, agile, and clean-code practices.
- Drive the CI/CD pipeline and maintain the workflows, using tools such as GitLab and Jenkins.
- Deploy the services and implement and refine the automation for different environments.
- Operate the services that the SRE team developed.
- Automate release pipelines: Build and maintain CI/CD workflows using tools such as Jenkins and GitLab.
- Version control: Manage and enforce Git best practices, branching strategies (e.g., Git Flow), tagging, and release versioning.
- Collaboration: Work closely with developers, QA, and product teams to align on release timelines and feature readiness.
Success Metrics
- Achieve >99% service uptime with minimal rollbacks.
- Deliver on time and hold timelines.
Benefits
Enjoy a great environment, great people, and a great package
- Stock Appreciation Rights - Generous pre series-B stock options
- Generous Gratuity Plan - Long service compensation far exceeding Indian statutory requirements
- Health Insurance - Premium health insurance for employee, spouse and children
- Working Hours - Flexible working hours with sole focus on enabling a great work environment
- Work Environment - Work with top industry experts in an environment that fosters co-operation, learning and developing skills
- Make a Difference - We're here because we want to make an impact on the world - we hope you do too!

A global digital solutions partner trusted by leading Fortune 500 companies in industries such as pharma & healthcare, retail, and BFSI. Its expertise in data and analytics, data engineering, machine learning, AI, and automation helps companies streamline operations and unlock business value.
Required Skills
• 12+ years of proven experience in designing large-scale enterprise systems and distributed architectures.
• Strong expertise in Azure, AWS, Python, Docker, LangChain, Solution Architecture, C#, and .NET
• Frontend technologies like React, Angular, and ASP.NET MVC
• Deep knowledge of architecture frameworks (TOGAF).
• Understanding of security principles, identity management, and data protection.
• Experience with solution architecture methodologies and documentation standards
• Deep understanding of databases (SQL and NoSQL), RESTful APIs, and message brokers.
• Excellent communication, leadership, and stakeholder management skills.
We are seeking enthusiastic and motivated fresh graduates with a strong foundation in programming, primarily in Python, and basic knowledge of Java, C#, or JavaScript. This role offers hands-on experience in developing applications, writing clean code, and collaborating on real-world projects under expert guidance.
Key Responsibilities
• Develop and maintain applications using Python as the primary language.
• Assist in coding, debugging, and testing software modules in Java, C#, or JavaScript as needed.
• Collaborate with senior developers to learn best practices and contribute to project deliverables.
• Write clean, efficient, and well-documented code.
• Participate in code reviews and follow standard development processes.
• Continuously learn and adapt to new technologies and frameworks.
Core Expectations
• Eagerness to Learn: Open to acquiring new programming skills and frameworks.
• Adaptability: Ability to work across multiple languages and environments.
• Problem-Solving: Strong analytical skills to troubleshoot and debug issues.
• Team Collaboration: Work effectively with peers and seniors.
• Professionalism: Good communication skills and a positive attitude.
Qualifications
• Bachelor’s degree in Computer Science, IT, or related field.
• Strong understanding of Python (OOP, data structures, basic frameworks like Flask/Django).
• Basic knowledge of Java, C#, or JavaScript.
• Familiarity with version control systems (Git).
• Understanding of databases (SQL/NoSQL) is a plus.
NOTE: A laptop with high-speed internet is mandatory
- AI-based systems design and development: the entire pipeline from image/video ingest, metadata ingest, processing, and encoding to transmission.
- Implementation and testing of advanced computer vision algorithms.
- Dataset search, preparation, and annotation; training, testing, and fine-tuning of vision CNN models (a fine-tuning sketch follows below). Multimodal AI, LLMs, hardware deployment, explainability.
- Detailed analysis of results. Documentation, version control, client support, and upgrades.
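As a hedged illustration of the fine-tuning workflow mentioned above, here is a minimal PyTorch/torchvision sketch. The backbone, class count, and dummy batch are assumptions (torchvision >= 0.13 is assumed for the weights API), not this role's actual models:

```python
# Sketch: fine-tune a pretrained vision CNN by replacing its head.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 5  # assumption: project-specific label set

# Start from an ImageNet-pretrained backbone and freeze it.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (real data loaders
# would replace this).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```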
Job Description
Location: Mumbai (with short/medium-term travel opportunities within India & abroad)
Experience: 5–8 years
Job Type: Full-time
About the Role
We are looking for experienced data engineers who can independently build, optimize, and manage scalable data pipelines and platforms. In this role, you’ll work closely with clients and internal teams to deliver robust data solutions that power analytics, AI/ML, and operational systems. You’ll also help mentor junior engineers and bring engineering discipline into our data engagements.
Key Responsibilities
Design, build, and optimize large-scale, distributed data pipelines for both batch and streaming use cases.
Implement scalable data models, data warehouses/lakehouses, and data lakes to support analytics and decision-making.
Collaborate with cross-functional stakeholders to understand business requirements and translate them into technical data solutions.
Drive performance tuning, monitoring, and reliability of data pipelines.
Write clean, modular, and production-ready code with proper documentation and testing.
Contribute to architectural discussions, tool evaluations, and platform setup.
Mentor junior engineers and participate in code/design reviews.
Must-Have Skills
Strong programming skills in Python and advanced SQL expertise.
Deep understanding of data engineering concepts such as ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
Experience with distributed data processing frameworks (e.g., Apache Spark, Flink, or similar).
Exposure to Java is mandatory.
Experience with building pipelines using orchestration tools like Airflow or similar (see the DAG sketch after this list).
Familiarity with CI/CD pipelines and version control tools like Git.
Ability to debug, optimize, and scale data pipelines in real-world settings.
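For orientation, a minimal Airflow DAG of the kind referenced above might look like this sketch. Airflow 2.x is assumed, and the task bodies, IDs, and schedule are placeholders:

```python
# Minimal Airflow DAG sketch: three Python tasks run in sequence daily.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # pull raw data from a source system

def transform():
    ...  # clean and model the data

def load():
    ...  # write to the warehouse/lakehouse

with DAG(
    dag_id="example_elt",          # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # dependency chain: extract, then transform, then load
```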
Good to Have
Experience working on any major cloud platform (AWS preferred; GCP or Azure also welcome).
Exposure to Databricks, dbt, or similar platforms is a plus.
Experience with Snowflake is preferred.
Understanding of data governance, data quality frameworks, and observability.
Certification in AWS (e.g., Data Analytics, Solutions Architect) or Databricks is a plus.
Other Expectations
Comfortable working in fast-paced, client-facing environments.
Strong analytical and problem-solving skills with attention to detail.
Ability to adapt across tools, stacks, and business domains.
Willingness to travel within India for short/medium-term client engagements as needed.
Experience- 6 to 8 years
Location- Bangalore
Job Description-
- Extensive experience with machine learning utilizing the latest analytical models in Python. (i.e., experience in generating data-driven insights that play a key role in rapid decision-making and driving business outcomes.)
- Extensive experience using Tableau, table design, PowerApps, Power BI, Power Automate, and cloud environments, or equivalent experience designing/implementing data analysis pipelines and visualization.
- Extensive experience using AI agent platforms (applying AI to data analysis is a required skill for data analysts).
- A statistics major, or an equivalent ability to interpret the results of statistical analysis.
We are looking for a Cloud Security Engineer to join our organization. The ideal candidate will have strong hands-on experience in ensuring robust security controls across both applications and organizational data. This candidate is expected to work closely with multiple stakeholders to architect, implement, and monitor effective safeguards. The ideal candidate will champion secure design, conduct risk assessments, drive vulnerability management, and promote data protection best practices for the organization
Responsibilities
- Design and implement security measures for website and API applications.
- Conduct security-first code reviews, vulnerability assessments, and posture audits for business-critical applications.
- Conduct security testing activities like SAST & DAST by integrating them within the project’s CI/CD pipelines and development workflows.
- Manage all penetration testing activities including working with external vendors for security certification of business-critical applications.
- Develop and manage data protection policies and RBAC controls for sensitive organizational data like PII, revenue, secrets, etc.
- Oversee encryption, key management, and secure data storage solutions.
- Monitor threats and respond to incidents involving application and data breaches.
- Collaborate with engineering, data, product and compliance teams to achieve security-by-design principles.
- Ensure compliance with regulatory standards (GDPR, HIPAA, etc.) and internal organizational policies.
- Automate recurrent security tasks using scripts and security tools (a standard-library example follows this list).
- Maintain documentation around data flows, application architectures, and security controls.
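As one hedged example of automating a recurring security task, the sketch below checks TLS certificate expiry using only the Python standard library. The host inventory and the 30-day threshold are assumptions:

```python
# Recurring security check sketch: warn when a TLS certificate
# is close to expiry.
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # notAfter looks like 'Jun  1 12:00:00 2025 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

for host in ["example.com"]:  # placeholder inventory
    remaining = days_until_expiry(host)
    if remaining < 30:
        print(f"WARNING: {host} certificate expires in {remaining} days")
```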
Requirements
- 10+ years’ experience in application security and/or data security engineering.
- Strong understanding of security concepts including zero trust architecture, threat modeling, security frameworks (like SOC 2, ISO 27001), and best practices in corporate security environments.
- Strong knowledge of modern web/mobile application architectures and common vulnerabilities (like OWASP Top 10, etc.)
- Proficiency in secure coding practices and code reviews for major programming languages including Java, .NET, Python, JavaScript, Typescript, React, etc.
- Hands-on experience with at least two software tools in the areas of vulnerability scanning and static/dynamic analysis, such as Checkmarx, Veracode, SonarQube, Burp Suite, AppScan, etc.
- Advanced understanding of data encryption, key management, and secure storage (SQL, NoSQL, Cloud) and secure transfer mechanisms.
- Working experience in Cloud Environments like AWS & GCP and familiarity with the recommended security best practices.
- Familiarity with regulatory frameworks such as GDPR, HIPAA, PCI DSS and the controls needed to implement them.
- Experience integrating security into DevOps/CI/CD processes.
- Hands-on Experience with automation in any of the scripting languages (Python, Bash, etc.)
- Ability to conduct incident response and forensic investigations related to application/data breaches.
- Excellent communication and documentation skills.
Good To Have:
- Cloud Security certifications in any one of the below
- AWS Certified Security – Specialty
- GCP Professional Cloud Security
- Experience with container security (Docker, Kubernetes) and cloud security tools (AWS, Azure, GCP).
- Experience safeguarding data storage solutions like GCP GCS, BigQuery, etc.
- Hands-on work with any SIEM/SOC platforms for monitoring and alerting.
- Knowledge of data loss prevention (DLP) solutions and IAM (identity and access management) systems.
Perks:
- Day off on the 3rd Friday of every month (one long weekend each month)
- Monthly Wellness Reimbursement Program to promote health and well-being
- Monthly Office Commutation Reimbursement Program
- Paid paternity and maternity leave
Experience Required: 2-5 Years
No. of vacancies: 2
Job Type: Full Time
Vacancy Role: WFO
Job Description
ChicMic Studios is hiring a highly skilled and experienced Sr. Python Developer to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential.
Roles & Responsibilities
- Develop, maintain, and scale web applications using Django & DRF.
- Implement and manage payment gateway integrations and ensure secure transaction handling.
- Design and optimize SQL queries, transaction management, and data integrity.
- Work with Redis and Celery for caching, task queues, and background job processing (a minimal sketch follows this list).
- Develop and deploy applications on AWS services (EC2, S3, RDS, Lambda, CloudFormation).
- Implement strong security practices including CSRF token generation, SQL injection prevention, JWT authentication, and other security mechanisms.
- Build and maintain microservices architectures with scalability and modularity in mind.
- Develop WebSocket-based solutions including real-time chat rooms and notifications.
- Ensure robust application testing with unit testing and test automation frameworks.
- Collaborate with cross-functional teams to analyze requirements and deliver effective solutions.
- Monitor, debug, and optimize application performance, scalability, and reliability.
- Stay updated with emerging technologies, frameworks, and industry best practices.
- Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
- Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
- Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
- Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
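To illustrate the Redis/Celery item above, a minimal task-queue sketch might look like this. The broker URL and the task body are placeholder assumptions:

```python
# Minimal Celery sketch with a Redis broker: move slow work out of
# the request cycle into a background worker.
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task
def send_receipt_email(order_id: int) -> None:
    # Placeholder: render and send the email for this order.
    print(f"sending receipt for order {order_id}")

# From request-handling code, enqueue instead of blocking the request:
#   send_receipt_email.delay(order_id=42)
# Run a worker in another process:
#   celery -A tasks worker --loglevel=INFO
```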
Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 2-5 years of professional experience as a Python Developer.
- Proficient in Python with a strong understanding of its ecosystem.
- Extensive experience with Django and Flask frameworks.
- Hands-on experience with AWS services for application deployment and management.
- Strong knowledge of Django Rest Framework (DRF) for building APIs.
- Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
- Experience with transformer architectures for NLP and advanced AI solutions.
- Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
- Familiarity with MLOps practices for managing the machine learning lifecycle.
- Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
- Excellent problem-solving skills and the ability to work independently and as part of a team.
- Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.
Job description
Job Title: Python Trainer (Workshop Model Freelance / Part-time)
Location: Thrissur & Ernakulam
Program Duration: 30 or 60 Hours (Workshop Model)
Job Type: Freelance / Contract
About the Role:
We are seeking an experienced and passionate Python Trainer to deliver interactive, hands-on training sessions for students under a workshop model in Thrissur and Ernakulam locations. The trainer will be responsible for engaging learners with practical examples and real-time coding exercises.
Key Responsibilities:
Conduct offline workshop-style Python training sessions (30 or 60 hours total).
Deliver interactive lectures and coding exercises focused on Python programming fundamentals and applications.
Customize the curriculum based on learners' skill levels and project needs.
Guide students through mini-projects, assignments, and coding challenges.
Ensure effective knowledge transfer through practical, real-world examples.
Requirements:
Experience: 1–5 years of training or industry experience in Python programming.
Technical Skills: Strong knowledge of Python, including OOP concepts, file handling, libraries (NumPy, Pandas, etc.), and basic data visualization.
Prior experience in academic or corporate training preferred.
Excellent communication and presentation skills.
Mode: Offline Workshop (Thrissur / Ernakulam)
Duration: Flexible – 30 Hours or 60 Hours Total
Organization: KGiSL Microcollege
Role: Other
Industry Type: Education / Training
Department: Other
Employment Type: Full Time, Permanent
Role Category: Other
Education
UG: Any Graduate
Key Skills
Data Science, Artificial Intelligence
AccioJob is conducting a Walk-In Hiring Drive with Adobe for the position of Tech Apprentice.
To apply, register and select your slot here: https://go.acciojob.com/fBcgVa
Required Skills: JavaScript, Java, SQL, HTML, CSS, Python
Eligibility:
Degree: MCA
Branch: All
Graduation Year: 2023, 2024, 2025
Work Details:
Work Location: Bangalore (Onsite)
CTC: 6 LPA to 7 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Bangalore & Chennai Centre
Important Note: Bring your laptop & earphones for the test.
Further Rounds (for shortlisted candidates only):
Programming Test, Aptitude + Programming Test, Technical Interview 1
Register here: https://go.acciojob.com/fBcgVa
AccioJob is conducting a Walk-In Hiring Drive with Adobe for the position of Tech Apprentice.
To apply, register and select your slot here: https://go.acciojob.com/vZDDPN
Required Skills: JavaScript, Java, SQL, HTML, CSS, Python
Eligibility:
Degree: BTech./BE
Branch: Computer Science/CSE/Other CS related branch, Electrical/Other electrical related branches, IT, Communications
Graduation Year: 2023, 2024
Work Details:
Work Location: Bangalore (Onsite)
CTC: 6 LPA to 7 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Bangalore & Chennai Centre
Important Note: Bring your laptop & earphones for the test.
Further Rounds (for shortlisted candidates only):
Programming Test, Aptitude + Programming Test, Technical Interview 1
Register here: https://go.acciojob.com/vZDDPN
Job Profile: Python Developer
Job Location: Ahmedabad, Gujarat (on-site)
Job Type: Full-time
Experience: 4+ years
Key Responsibilities:
- Design, develop, and maintain Python-based applications and services.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Write clean, maintainable, and efficient code following best practices.
- Optimize applications for maximum speed and scalability.
- Troubleshoot, debug, and upgrade existing systems.
- Integrate user-facing elements with server-side logic.
- Implement security and data protection measures.
- Work with databases (SQL/NoSQL) and integrate data storage solutions.
- Participate in code reviews to ensure code quality and share knowledge with the team.
- Stay up-to-date with emerging technologies and industry trends.
Requirements:
- 4+ years of professional experience in Python development.
- Strong knowledge of Python frameworks such as Django, Flask, or FastAPI.
- Experience with RESTful APIs and web services.
- Proficiency in working with databases (e.g., PostgreSQL, MySQL, MongoDB).
- Familiarity with front-end technologies (e.g., HTML, CSS, JavaScript) is a plus.
- Experience with version control systems (e.g., Git).
- Knowledge of cloud platforms (e.g., AWS, Azure, Google Cloud) is a plus.
- Understanding of containerization tools like Docker and orchestration tools like Kubernetes is good to have.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork skills.
Good to Have:
- Experience with data analysis and visualization libraries (e.g., Pandas, NumPy, Matplotlib).
- Knowledge of asynchronous programming and event-driven architecture (a minimal asyncio sketch follows this list).
- Familiarity with CI/CD pipelines and DevOps practices.
- Experience with microservices architecture.
- Knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch) is a plus.
- Hands-on experience with RAG and LLM model integration would be a plus.
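As a small illustration of asynchronous, event-driven code in Python, the sketch below fans out three I/O-bound calls concurrently using only the standard library. The fetch body is a stand-in for real network I/O:

```python
# asyncio sketch: run independent I/O-bound tasks concurrently
# instead of sequentially.
import asyncio

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stand-in for a network call
    return f"{name} done"

async def main() -> None:
    # gather() schedules all three coroutines at once; total wall time
    # is roughly the slowest task, not the sum of all three.
    results = await asyncio.gather(
        fetch("users", 0.3),
        fetch("orders", 0.2),
        fetch("stock", 0.1),
    )
    print(results)

asyncio.run(main())
```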
Artificial Intelligence Researcher (Computer Vision)
Responsibility
• Work on various SOTA computer vision models and on dataset augmentation & generation techniques that help improve model accuracy & precision.
• Work on development & improvement of end-to-end pipeline use cases running at scale.
• Programming skills with multi-threaded GPU CUDA computing and API solutions.
• Proficient with training of detection, classification & segmentation models with TensorFlow, PyTorch, MXNet, etc.
Required Skills
• Strong development skills required in Python and C++.
• Ability to architect a solution based on given requirements and convert the business requirements into a technical computer vision problem statement.
• Ability to work in a fast-paced environment and coordinate across different parts of different projects.
• Bringing in technical expertise around the implementation of best coding standards and practices across the team.
• Extensive experience working on edge devices like Jetson Nano, Raspberry Pi, and other GPU-powered low-compute devices.
• Experience using Docker, Nvidia Docker, and Nvidia NGC containers for computer vision deep learning.
• Experience with scalable cloud deployment architecture for video analytics (involving Kubernetes and/or Kafka).
• Good experience with any one cloud technology, such as AWS, Azure, or Google Cloud.
• Experience with model optimisation for Nvidia hardware (TensorRT conversion of both TensorFlow & PyTorch models); see the export sketch after this list.
• Proficient understanding of code versioning tools, such as Git.
• Proficient in Data Structures & Algorithms.
• Well versed in software design paradigms and good development practices.
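For illustration only: TensorRT engines are commonly built from an ONNX export of the trained model, so a sketch of the PyTorch side of that workflow might look like this. The network, file names, and shapes are placeholder assumptions:

```python
# Sketch: export a PyTorch model to ONNX as a first step toward a
# TensorRT engine (e.g., built on-device with trtexec).
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()  # placeholder network
dummy = torch.randn(1, 3, 224, 224)           # example input shape

torch.onnx.export(
    model,
    dummy,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}},     # allow variable batch size
)
# Then, on the target device, something like:
#   trtexec --onnx=resnet18.onnx --saveEngine=resnet18.plan
```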
We're looking for an experienced Full-Stack Engineer who can architect and build AI-powered agent systems from the ground up. You'll work across the entire stack—from designing scalable backend services and LLM orchestration pipelines to creating frontend interfaces for agent interactions through widgets, bots, plugins, and browser extensions.
You should be fluent in modern backend technologies, AI/LLM integration patterns, and frontend development, with strong systems design thinking and the ability to navigate the complexities of building reliable AI applications.
Note: This is an on-site, 6-day-a-week role. We are in a critical product development phase where the speed of iteration directly determines market success. At this early stage, speed of execution and clarity of thought are our strongest moats, and we are doubling down on both as we build through our 0→1 journey.
WHAT YOU BRING:
You take ownership of complex technical challenges end to end, from system architecture to deployment, and thrive in a lean team where every person is a builder. You maintain a strong bias for action, moving quickly to prototype and validate AI agent capabilities while building production-grade systems. You consistently deliver reliable, scalable solutions that leverage AI effectively — whether it's designing robust prompt chains, implementing RAG systems, building conversational interfaces, or creating seamless browser extensions.
You earn trust through technical depth, reliable execution, and the ability to bridge AI capabilities with practical business needs. Above all, you are obsessed with building intelligent systems that actually work. You think deeply about system reliability, performance, cost optimization, and you're motivated by creating AI experiences that deliver real value to our enterprise customers.
WHAT YOU WILL DO:
Your primary responsibility (95% of your time) will be designing and building AI agent systems across the full stack. Specifically, you will:
- Architect and implement scalable backend services for AI agent orchestration, including LLM integration, prompt management, context handling, and conversation state management.
- Design and build robust AI pipelines — implementing RAG systems, agent workflows, tool calling, and chain-of-thought reasoning patterns (a minimal retrieval sketch follows this list).
- Develop frontend interfaces for AI interactions including embeddable widgets, Chrome extensions, chat interfaces, and integration plugins for third-party platforms.
- Optimize LLM operations — managing token usage, implementing caching strategies, handling rate limits, and building evaluation frameworks for agent performance.
- Build observability and monitoring systems for AI agents, including prompt versioning, conversation analytics, and quality assurance pipelines.
- Collaborate on system design decisions around AI infrastructure, model selection, vector databases, and real-time agent capabilities.
- Stay current with AI/LLM developments and pragmatically adopt new techniques (function calling, multi-agent systems, advanced prompting strategies) where they add value.
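For context, the core of a RAG pipeline can be sketched in a few lines: embed documents, retrieve by cosine similarity, and assemble a grounded prompt. Everything here is a placeholder assumption; the embed() stub stands in for a real embedding model, and the in-memory array stands in for a vector database:

```python
# Minimal RAG retrieval sketch: embed, retrieve, assemble a prompt.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder pseudo-embedding derived from the text hash;
    # a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

docs = ["Refunds are processed in 5 days.", "Support hours are 9am-6pm IST."]
index = np.stack([embed(d) for d in docs])  # tiny in-memory "vector DB"

def retrieve(query: str, k: int = 1) -> list[str]:
    scores = index @ embed(query)           # cosine similarity (unit vectors)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

question = "How long do refunds take?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQ: {question}\nA:"
print(prompt)  # this prompt would now be sent to the LLM of choice
```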
BASIC QUALIFICATIONS:
- 4–6 years of full-stack development experience, with at least 1 year working with LLMs and AI systems.
- Strong backend engineering skills: proficiency in Node.js, Python, or similar; experience with API design, database systems, and distributed architectures.
- Hands-on AI/LLM experience: prompt engineering, working with OpenAI/Anthropic/Google APIs, implementing RAG, managing context windows, and optimizing for latency/cost.
- Frontend development capabilities: JavaScript/TypeScript, React or Vue, browser extension development, and building embeddable widgets.
- Systems design thinking: ability to architect scalable, fault-tolerant systems that handle the unique challenges of AI applications (non-determinism, latency, cost).
- Experience with AI operations: prompt versioning, A/B testing for prompts, monitoring agent behavior, and implementing guardrails.
- Understanding of vector databases, embedding models, and semantic search implementations.
- Comfortable working in fast-moving, startup-style environments with high ownership.
PREFERRED QUALIFICATIONS:
- Experience with advanced LLM techniques: fine-tuning, function calling, agent frameworks (LangChain, LlamaIndex, AutoGPT patterns).
- Familiarity with ML ops tools and practices for production AI systems.
- Prior work on conversational AI, chatbots, or virtual assistants at scale.
- Experience with real-time systems, WebSockets, and streaming responses.
- Knowledge of browser automation, web scraping, or RPA technologies.
- Experience with multi-tenant SaaS architectures and enterprise security requirements.
- Contributions to open-source AI/LLM projects or published work in the field.
WHAT WE OFFER:
- Competitive salary + meaningful equity.
- High ownership and the opportunity to shape product direction.
- Direct impact on cutting-edge AI product development.
- A collaborative team that values clarity, autonomy, and velocity.
Review Criteria
- Strong Senior/Lead DevOps Engineer Profile
- 8+ years of hands-on experience in DevOps engineering, with a strong focus on AWS cloud infrastructure and services (EC2, VPC, EKS, RDS, Lambda, CloudFront, etc.).
- Must have strong system administration expertise (installation, tuning, troubleshooting, security hardening)
- Solid experience in CI/CD pipeline setup and automation using tools such as Jenkins, GitHub Actions, or similar
- Hands-on experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible
- Must have strong database expertise across MongoDB and Snowflake (administration, performance optimization, integrations)
- Experience with monitoring and observability tools such as Prometheus, Grafana, ELK, CloudWatch, or Datadog
- Good exposure to containerization and orchestration using Docker and Kubernetes (EKS)
- Must be currently working in an AWS-based environment (AWS experience must be in the current organization)
- It's an individual contributor (IC) role
Preferred
- Must be proficient in scripting languages (Bash, Python) for automation and operational tasks (see the boto3 sketch after this list).
- Must have strong understanding of security best practices, IAM, WAF, and GuardDuty configurations.
- Exposure to DevSecOps and end-to-end automation of deployments, provisioning, and monitoring.
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.
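As one hedged example of the Python automation mentioned above, this sketch uses boto3 to flag EC2 instances missing an owner tag. The tag key and the use case are assumptions, and credentials/region are taken from the environment:

```python
# Operational automation sketch: find untagged EC2 instances.
import boto3

ec2 = boto3.client("ec2")  # region/credentials come from the environment

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            if "owner" not in tags:  # assumed tagging policy
                print(f'{inst["InstanceId"]} has no owner tag')
```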
Role & Responsibilities
We are seeking a highly skilled Senior DevOps Engineer with 8+ years of hands-on experience in designing, automating, and optimizing cloud-native solutions on AWS. AWS and Linux expertise are mandatory. The ideal candidate will have strong experience across databases, automation, CI/CD, containers, and observability, with the ability to build and scale secure, reliable cloud environments.
Key Responsibilities:
Cloud & Infrastructure as Code (IaC)-
- Architect and manage AWS environments ensuring scalability, security, and high availability.
- Implement infrastructure automation using Terraform, CloudFormation, and Ansible.
- Configure VPC Peering, Transit Gateway, and PrivateLink/Connect for advanced networking.
CI/CD & Automation:
- Build and maintain CI/CD pipelines (Jenkins, GitHub, SonarQube, automated testing).
- Automate deployments, provisioning, and monitoring across environments.
Containers & Orchestration:
- Deploy and operate workloads on Docker and Kubernetes (EKS).
- Implement IAM Roles for Service Accounts (IRSA) for secure pod-level access.
- Optimize performance of containerized and microservices applications.
Monitoring & Reliability:
- Implement observability with Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Establish logging, alerting, and proactive monitoring for high availability.
Security & Compliance:
- Apply AWS security best practices including IAM, IRSA, SSO, and role-based access control.
- Manage WAF, GuardDuty, Inspector, and other AWS-native security tools.
- Configure VPNs, firewalls, and secure access policies and AWS organizations.
Databases & Analytics:
- Must have expertise in MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Manage data reliability, performance tuning, and cloud-native integrations.
- Experience with Apache Airflow and Spark.
Ideal Candidate
- 8+ years in DevOps engineering, with strong AWS Cloud expertise (EC2, VPC, TG, RDS, S3, IAM, EKS, EMR, SCP, MWAA, Lambda, CloudFront, SNS, SES etc.).
- Linux expertise is mandatory (system administration, tuning, troubleshooting, CIS hardening etc).
- Strong knowledge of databases: MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Hands-on with Docker, Kubernetes (EKS), Terraform, CloudFormation, Ansible.
- Proven ability with CI/CD pipeline automation and DevSecOps practices.
- Practical experience with VPC Peering, Transit Gateway, WAF, GuardDuty, Inspector, and advanced AWS networking and security tools.
- Expertise in observability tools: Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Strong scripting skills (Shell/bash, Python, or similar) for automation.
- Bachelor's or Master's degree
- Effective communication skills
About Forbes Advisor
Forbes Digital Marketing Inc. is a high-growth digital media and technology company dedicated to helping consumers make confident, informed decisions about their money, health, careers, and everyday life.
We do this by combining data-driven content, rigorous product comparisons, and user-first design — all built on top of a modern, scalable platform. Our global teams bring deep expertise across journalism, product, performance marketing, data, and analytics.
The Role
We’re hiring a Data Scientist to help us unlock growth through advanced analytics and machine learning. This role sits at the intersection of marketing performance, product optimization, and decision science.
You'll partner closely with Paid Media, Product, and Engineering to build models, generate insight, and influence how we acquire, retain, and monetize users. From campaign ROI to user segmentation and funnel optimization, your work will directly shape how we grow. This role is ideal for someone who thrives on business impact, communicates clearly, and wants to build reusable, production-ready insights — not just run one-off analyses.
What You’ll Do
Marketing & Revenue Modelling
• Own end-to-end modelling of LTV, user segmentation, retention, and marketing efficiency to inform media optimization and value attribution.
• Collaborate with Paid Media and RevOps to optimize SEM performance, predict high-value cohorts, and power strategic bidding and targeting.
Product & Growth Analytics
• Work closely with Product Insights and General Managers (GMs) to define core metrics, KPIs, and success frameworks for new launches and features.
• Conduct deep-dive analysis of user behaviour, funnel performance, and product engagement to uncover actionable insights.
• Monitor and explain changes in key product metrics, identifying root causes and business impact.
• Work closely with Data Engineering to design and maintain scalable data pipelines that support machine learning workflows, model retraining, and real-time inference.
Predictive Modelling & Machine Learning
• Build predictive models for conversion, churn, revenue, and engagement using regression, classification, or time-series approaches (a minimal churn sketch follows this subsection).
• Identify opportunities for prescriptive analytics and automation in key product and marketing workflows.
• Support development of reusable ML pipelines for production-scale use cases in product recommendation, lead scoring, and SEM planning.
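To make the modelling item above concrete, a minimal churn-style classifier sketch follows. The features, labels, and data are synthetic stand-ins, not Forbes Advisor's actual pipeline:

```python
# Churn-classification sketch: fit a baseline model and report AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Placeholder behavioural features: sessions, recency, pages per visit.
X = rng.normal(size=(n, 3))
# Synthetic churn label, correlated with recency for illustration.
y = (X[:, 1] + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```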
Collaboration & Communication
• Present insights and recommendations to a variety of stakeholders — from ICs to executives — in a clear and compelling manner.
• Translate business needs into data problems, and complex findings into strategic action plans.
• Work cross-functionally with Engineering, Product, BI, and Marketing to deliver and deploy your work.
What You’ll Bring
Minimum Qualifications
• Bachelor’s degree in a quantitative field (Mathematics, Statistics, CS, Engineering, etc.).
• 4+ years in data science, growth analytics, or decision science roles.
• Strong SQL and Python skills (Pandas, Scikit-learn, NumPy).
• Hands-on experience with Tableau, Looker, or similar BI tools.
• Familiarity with LTV modelling, retention curves, cohort analysis, and media attribution.
• Experience with GA4, Google Ads, Meta, or other performance marketing platforms.
• Clear communication skills and a track record of turning data into decisions.
Nice to Have
• Experience with BigQuery and Google Cloud Platform (or equivalent).
• Familiarity with affiliate or lead-gen business models.
• Exposure to NLP, LLMs, embeddings, or agent-based analytics.
• Ability to contribute to model deployment workflows (e.g., using Vertex AI, Airflow, or Composer).
Why Join Us?
• Remote-first and flexible — work from anywhere in India with global exposure.
• Monthly long weekends (every third Friday off).
• Generous wellness stipends and parental leave.
• A collaborative team where your voice is heard and your work drives real impact.
• Opportunity to help shape the future of data science at one of the world's most trusted brands.
What You’ll Do
- Build and enhance real-time voice agents for sales, support, and operations use cases
- Work with ASR, TTS, LLMs, and dialog orchestration frameworks to improve voice quality and response accuracy
- Integrate multiple providers (Deepgram, Sarvam, ElevenLabs, OpenAI) into a modular voice pipeline
- Reduce latency, improve interrupt handling, and make conversations more natural
- Own the agent evaluation process: call audits, response quality scoring, and accuracy improvements
- Collaborate with product teams and customers to understand real call behaviours and solve friction points
- Contribute to voice infrastructure: WebRTC, streaming sockets, call routing flows
⚙️ What You Bring
- 3–6 years of experience as an AI/ML/LLM Engineer working on voice or conversational systems
- Strong hands-on experience with LLM prompting, finetuning, embeddings, and RAG
- Solid understanding of STT and TTS models and tuning them for natural speech
- Proficiency in Python and Node.js
- Experience with real-time audio streaming (WebRTC, WebSockets) is a strong advantage
- Ability to debug live calls and rapidly improve agent behaviour
- Strong sense of ownership and willingness to iterate closely with users
🏆 What Success Looks Like
- Voice agents handle interruptions smoothly and sound natural
- Call success and customer completion rates increase month over month
- Latency, token usage, and infra costs are reduced through better engineering and smart model choices
- Customers begin trusting agents as teammates, not just tools
If You Are Someone Who:
✅ Builds fast and learns faster
✅ Cares deeply about product experience, not just model performance
✅ Wants to create the future of how businesses talk to customers
About the Role:
We are looking for an experienced AWS Engineer with strong expertise in cloud infrastructure design and backend development. The ideal candidate will be hands-on with AWS native services, microservices architecture, and API development, with a preference for those having experience in Go (Golang).
Key Responsibilities:
- Design, build, and maintain scalable, cloud-native applications on AWS
- Develop and manage microservices and containerized deployments using Docker and ECS/Fargate
- Implement and manage AWS services including Fargate, ECS, Lambda, Aurora (PostgreSQL), S3, CloudFront, and DMS
- Build and maintain RESTful APIs (OpenAPI/Swagger) and SOAP/XML services; a minimal example follows this list
- Ensure security, performance, and high availability of deployed applications
- Monitor and troubleshoot production issues using CloudWatch and CloudTrail
- Collaborate with cross-functional teams to deliver robust, efficient, and secure solutions
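For illustration, here is a minimal REST endpoint sketch. The role prefers Go, but the contract-first pattern is the same; FastAPI is used because it emits the OpenAPI/Swagger spec mentioned above automatically. All names and the in-memory store are placeholders:

```python
# Minimal REST API sketch; FastAPI serves the OpenAPI schema
# at /openapi.json and interactive docs at /docs.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")  # placeholder service name

class Order(BaseModel):
    id: int
    amount: float

ORDERS: dict[int, Order] = {}  # in-memory stand-in for Aurora/PostgreSQL

@app.post("/orders", response_model=Order)
def create_order(order: Order) -> Order:
    ORDERS[order.id] = order
    return order

@app.get("/orders/{order_id}", response_model=Order)
def get_order(order_id: int) -> Order:
    if order_id not in ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return ORDERS[order_id]

# Run locally: uvicorn main:app --reload
```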
Requirements:
- 6+ years of software development experience with a focus on AWS-based solutions
- Strong hands-on expertise with AWS cloud services and microservices architecture
- Experience in API design and development (REST/SOAP)
- Go (Golang) experience preferred; other backend languages acceptable
- Proficiency with Docker, CI/CD pipelines, and container orchestration
- Strong knowledge of PostgreSQL/Aurora schema design and optimization
- Familiarity with AWS security best practices, IAM, and OAuth 2.0
Preferred:
- AWS Certifications (Solutions Architect / Developer Associate)
- Strong problem-solving, communication, and collaboration skills
ABOUT THE JOB:
Job Title: QA Automation Specialist
Location: Teynampet, Chennai
Job Type: Full-time
Company: Gigadesk Technologies Pvt. Ltd. [Greatify.ai]
COMPANY DESCRIPTION:
At Greatify.ai, we are transforming educational institutions with cutting-edge AI-powered solutions. Our platform acts as a smart operating system for colleges, schools, and universities—enhancing learning, streamlining operations, and maximizing efficiency.
With 100+ institutions served, 100,000+ students impacted globally, and 1,000+ educators empowered, we are redefining the future of education.
COMPANY WEBSITE: https://www.greatify.ai/
JOB DESCRIPTION:
As a QA Automation Specialist at Greatify, you will be responsible for designing, building, and maintaining robust automated test frameworks and suites covering UI, API, integration, regression, and performance tests for our ed‑tech platforms. As part of an Agile, cross‑functional team, you’ll integrate automation into our CI/CD pipelines to speed up release cycles while ensuring high product quality and reliability. Your role ensures consistent quality, provides actionable insights, and champions automation best practices across the QA function.
KEY RESPONSIBILITIES:
1. Quality Assurance Strategy:
- Develop and own QA strategy for EdTech product suites.
- Work with Product and Engineering teams to define quality benchmarks and release criteria.
- Ensure quality is embedded early in the software development lifecycle.
2. Test Planning & Execution:
- Design, write, and execute test cases and scenarios—manual and automated.
- Manage regression, integration, and exploratory testing.
- Monitor test outcomes, identify risks, and mitigate issues.
3. Automation Framework Development:
- Develop scalable, maintainable automation frameworks using Playwright and Selenium, structured with Cucumber (BDD) for readable test specifications.
- Write automation scripts in Python and Java, following best practices like modular design and the Page Object Model (a minimal Playwright sketch follows)
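A minimal Page Object Model sketch with Playwright's sync API might look like this; the URL, selectors, and post-login check are placeholder assumptions, not Greatify's real pages:

```python
# Page Object Model sketch: the page class hides selectors from tests.
from playwright.sync_api import Page, sync_playwright

class LoginPage:
    def __init__(self, page: Page):
        self.page = page

    def open(self):
        self.page.goto("https://example.com/login")  # placeholder URL

    def login(self, user: str, password: str):
        self.page.fill("#username", user)            # placeholder selectors
        self.page.fill("#password", password)
        self.page.click("button[type=submit]")

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    login = LoginPage(page)
    login.open()
    login.login("qa_user", "secret")
    # assert page.url.endswith("/dashboard")  # placeholder post-login check
    browser.close()
```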
4. Bug Tracking and Reporting:
- Log, triage, and track bugs using tools like Jira.
- Generate insightful quality reports for stakeholders.
5. Usability and Functional Testing:
- Evaluate UX across web/mobile platforms.
- Support UX teams with accessibility and user satisfaction testing.
6. Collaboration and Mentoring:
- Foster a strong QA culture with best practices and collaboration.
QUALIFICATIONS:
- Bachelor’s degree in Computer Science, Information Technology, or related field.
- 2+ years of QA experience with at least 2 years in automation testing.
- Proficiency in writing automation scripts using mainstream tools
- Experience in education tech systems.
- Hands-on knowledge of Agile/Scrum processes.
- Familiarity with programming languages Python and Java, using Playwright and Selenium for automation scripting, and employing JMeter or k6 with Grafana for performance testing.
- Experience setting up CI/CD pipelines via GitHub Actions and Jenkins, and managing test cases and execution tracking in ClickUp
- Experience with cross-browser and mobile automation is a plus.
- Strong problem-solving skills and attention to detail.
- Excellent communication and team collaboration skills.
We are looking for an AI/ML Engineer with 4–5 years of experience who can design, develop, and deploy scalable machine learning models and AI-driven solutions. The ideal candidate should have strong expertise in data processing, model building, and production deployment, along with solid programming and problem-solving skills.
Key Responsibilities
- Develop and deploy machine learning, deep learning, and NLP models for various business use cases.
- Build end-to-end ML pipelines including data preprocessing, feature engineering, training, evaluation, and production deployment.
- Optimize model performance and ensure scalability in production environments.
- Work closely with data scientists, product teams, and engineers to translate business requirements into AI solutions.
- Conduct data analysis to identify trends and insights.
- Implement MLOps practices for versioning, monitoring, and automating ML workflows.
- Research and evaluate new AI/ML techniques, tools, and frameworks.
- Document system architecture, model design, and development processes.
Required Skills
- Strong programming skills in Python (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch, Keras).
- Hands-on experience in building and deploying ML/DL models in production.
- Good understanding of machine learning algorithms, neural networks, NLP, and computer vision.
- Experience with REST APIs, Docker, Kubernetes, and cloud platforms (AWS/GCP/Azure).
- Working knowledge of MLOps tools such as MLflow, Airflow, DVC, or Kubeflow.
- Familiarity with data pipelines and big data technologies (Spark, Hadoop) is a plus.
- Strong analytical skills and ability to work with large datasets.
- Excellent communication and problem-solving abilities.
Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Science, AI, Machine Learning, or related fields.
- Experience in deploying models using cloud services (AWS Sagemaker, GCP Vertex AI, etc.).
- Experience in LLM fine-tuning or generative AI is an added advantage.
The Python Support Engineer is responsible for providing Tier 2/Tier 3 technical support for our software applications and systems, focusing primarily on components built with Python. This role involves diagnosing and resolving complex production issues, performing Root Cause Analysis (RCA), developing temporary workarounds, and implementing permanent code fixes using Python.
🛠️ Key Responsibilities
1. Technical Support & Troubleshooting
- Diagnose and Resolve Issues: Act as the escalation point for complex technical issues related to Python applications, backend services (APIs), and data pipelines.
- Log and Data Analysis: Utilize advanced analytical skills and Python scripts (e.g., using Pandas or regular expressions) to parse system logs, database records, and monitoring data to pinpoint the root cause of failures (a minimal log-parsing sketch follows this subsection).
- Debugging and Fixes: Read, understand, debug, and modify existing Python code to implement necessary bug fixes, patches, and minor enhancements.
- Database Interaction: Write and execute complex SQL queries to investigate data integrity issues and system performance problems across various relational (e.g., PostgreSQL, MySQL) and NoSQL databases.
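As a hedged example of the log-analysis work described above, this standard-library sketch counts error signatures in an application log to surface the dominant failure. The path and log format are assumptions:

```python
# Log triage sketch: group similar ERROR lines and rank them by count.
import re
from collections import Counter

ERROR_RE = re.compile(r"ERROR\s+(?P<where>[\w.]+):\s+(?P<msg>.*)")

counts = Counter()
with open("app.log", encoding="utf-8") as fh:  # placeholder path
    for line in fh:
        m = ERROR_RE.search(line)
        if m:
            # Normalize digits so identical failures with different
            # request IDs collapse into one signature.
            signature = f'{m["where"]}: {re.sub(r"\d+", "N", m["msg"])}'
            counts[signature] += 1

for sig, n in counts.most_common(5):
    print(f"{n:>6}  {sig}")
```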
2. Development and Automation
- Automation: Develop and maintain Python scripts and utility tools (e.g., using Bash/Shell scripting) to automate repetitive support tasks, streamline system health checks, and improve incident response efficiency.
- Monitoring and Alerting: Configure and fine-tune monitoring tools (e.g., Prometheus, Grafana, ELK stack) to proactively detect issues and ensure system reliability.
- Documentation: Create and maintain detailed technical documentation, including RCAs, knowledge base articles, runbooks, and troubleshooting guides for the support team.
ROLES AND RESPONSIBILITIES:
You'll work closely with our team to implement best practices, improve our architecture, and create a high-performance engineering culture. Over a 6–9-month period, you'll also immerse yourself in game development, Unity, and C# to become a well-rounded technical leader in the gaming space.
- Drive maximum development velocity through direct involvement in development sprints, ensuring developers work as efficiently and effectively as possible.
- Lead and mentor a team of engineers, fostering a culture of technical excellence and continuous improvement.
- Drive architectural decisions that ensure scalable, maintainable, and high-performance game products.
- Foster a high-performance engineering culture aligned with ambitious goals, accountability, and proactive problem-solving.
- Implement and enforce engineering best practices (e.g., code reviews, testing, documentation) and drive the adoption of new tools, technologies (including AI), and methodologies to optimize team efficiency.
- Transition our team to a high-performance culture aligned with our ambitious, venture-backed goals.
IDEAL CANDIDATE:
- 8+ years of software engineering experience with at least 3+ years in a technical leadership role
- Ability to reasonably estimate and plan tasks and features.
- Strong programming fundamentals and hands-on coding abilities
- Strong grasp of software architecture, TDD, code reviews, and clean coding principles.
- Proficient at profiling games to identify bottlenecks and performance issues.
- Experience building complex, scalable software systems
- Proven track record of driving architectural decisions and technical excellence
- Experience mentoring and developing engineering talent
- Strong problem-solving skills and attention to detail
- Excellent communication skills and ability to explain complex technical concepts
- Experience with agile development methodologies
- Bachelor's degree in computer science, Engineering, or equivalent practical experience
PERKS, BENEFITS AND WORK CULTURE:
- We foster a culture of continuous learning.
- We value talent and the ability for significant self-improvement.
- Honest feedback and transparency across all departments allow for rapid skill development.
- You will have the opportunity to work on an exciting new game development product with complete autonomy.
Job Description:
1. Cloud experience (any cloud is fine, though AWS is preferred; for a non-AWS cloud, the experience should reflect familiarity with that cloud's common services)
2. Good grasp of scripting on Linux (bash/sh/zsh, etc.); Windows scripting is nice to have
3. Basic knowledge of Python, Java, or JavaScript (Python preferred)
4. Monitoring tools
5. Alerting tools
6. Logging tools
7. CI/CD
8. Docker/containers (Kubernetes/Terraform nice to have)
9. Experience working on distributed applications with multiple services
10. Incident management
11. Database experience with basic queries
12. Understanding of performance analysis of applications
13. Idea about data pipelines would be nice to have
14. Snowflake querying knowledge: nice to have
The person should be able to:
Monitor system issues
Create strategies to detect and address issues
Implement automated systems to troubleshoot and resolve issues (a minimal sketch follows this list).
Write and review post-mortems
Manage infrastructure for multiple product teams
Collaborate with product engineering teams to ensure best practices are being followed.
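As a hedged illustration of the monitoring-and-automation expectation above, here is a minimal Python health-check sketch. The service endpoints and alert webhook are placeholders; a real setup would lean on the monitoring, alerting, and logging tools listed above.

```python
import requests

# Hypothetical service endpoints and alert webhook -- placeholders, not real URLs.
SERVICES = {
    "api": "https://api.example.com/health",
    "worker": "https://worker.example.com/health",
}
ALERT_WEBHOOK = "https://hooks.example.com/alerts"

def check_services() -> None:
    """Probe each service and push an alert for anything unhealthy."""
    for name, url in SERVICES.items():
        try:
            resp = requests.get(url, timeout=5)
            healthy = resp.status_code == 200
        except requests.RequestException:
            healthy = False
        if not healthy:
            # Notify the on-call channel so the incident can be triaged.
            requests.post(ALERT_WEBHOOK, json={"service": name, "status": "down"}, timeout=5)

if __name__ == "__main__":
    check_services()
```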
Role: Lead Software Engineer (Backend)
Salary: INR 28L to INR 40L per annum
Performance Bonus: Up to 10% of the base salary can be added
Location: Hulimavu, Bangalore, India
Experience: 6-10 years
About AbleCredit:
AbleCredit has built a foundational AI platform to help BFSI enterprises reduce OPEX by up to 70% by powering workflows for onboarding, claims, credit, and collections. Our GenAI model achieves over 95% accuracy in understanding Indian dialects and excels in financial analysis.
The company was founded in June 2023 by Utkarsh Apoorva (IIT Delhi, built Reshamandi, Guitarstreet, Edulabs); Harshad Saykhedkar (IITB, ex-AI Lead at Slack); and Ashwini Prabhu (IIML, co-founder of Mythiksha, ex-Product Head at Reshamandi, HandyTrain).
What Work You’ll Do
- Build best-in-class AI systems that enterprises can trust, where reliability and explainability are not optional.
- Operate in founder mode — build, patch, or fork, whatever it takes to ship today, not next week.
- Work at the frontier of AI x Systems — making AI models behave predictably to solve real, enterprise-grade problems.
- Own end-to-end feature delivery — from requirement scoping to design, development, testing, deployment, and post-release optimization.
- Design and implement complex, distributed systems that support large-scale workflows and integrations for enterprise clients.
- Operate with full technical ownership — make architectural decisions, review code, and mentor junior engineers to maintain quality and velocity.
- Build scalable, event-driven services leveraging AWS Lambda, SQS/SNS, and modern asynchronous patterns (see the sketch below).
- Work with cross-functional teams to design robust notification systems, third-party integrations, and data pipelines that meet enterprise reliability and security standards.
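For a concrete sense of the event-driven pattern named above, here is a minimal sketch of an AWS Lambda handler consuming an SQS batch. The message shape and the `process` helper are assumptions made for the example, not this company's actual code.

```python
import json

def handler(event, context):
    """AWS Lambda entry point for an SQS-triggered function.

    Each SQS message body is assumed (for illustration) to be a JSON
    document describing a workflow task.
    """
    for record in event["Records"]:   # SQS delivers messages in batches
        task = json.loads(record["body"])
        process(task)                 # hypothetical business logic

def process(task: dict) -> None:
    # Placeholder for the real work (notification fan-out, integration call, etc.).
    print(f"processing task {task.get('id')}")
```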
The Skills You Have:
- Strong background as an Individual Contributor — capable of owning systems from concept to production without heavy oversight.
- Expertise in system design, scalability, and fault-tolerant architecture.
- Proficiency in Node.js (bonus) or another backend language such as Go, Java, or Python.
- Deep understanding of SQL (PostgreSQL/MySQL) and NoSQL (MongoDB/DynamoDB) systems.
- Hands-on experience with AWS services — Lambda, API Gateway, S3, CloudWatch, ECS/EKS, and event-based systems.
- Experience in designing and scaling notification systems and third-party API integrations.
- Proficiency in event-driven architectures and multi-threading/concurrency models.
- Strong understanding of data modeling, security practices, and performance optimization.
- Familiarity with CI/CD pipelines, automated testing, and monitoring tools.
- Strong debugging, performance tuning, and code review skills.
What You Should Have Done in the Past
- Delivered multiple complex backend systems or microservices from scratch in a production environment.
- Led system design discussions and guided teams on performance, reliability, and scalability trade-offs.
- Mentored SDE-1 and SDE-2 engineers, enabling them to deliver features independently.
- Owned incident response and root cause analysis for production systems.
- (Bonus) Built or contributed to serverless systems using AWS Lambda, with clear metrics on uptime, throughput, and cost-efficiency.
Highlights:
- PTO & Holidays
- Opportunity to work with a core Gen AI startup.
- Flexible hours and an extremely positive work environment
We’re looking for a skilled Senior Machine Learning Engineer to help us transform the Insurtech space. You’ll build intelligent agents and models that read, reason, and act.
Insurance ops are broken. Underwriters drown in PDFs. Risk clearance is chaos. Emails go in circles. We’ve lived it – and we’re fixing it. Bound AI is building agentic AI workflows that go beyond chat. We orchestrate intelligent agents to handle policy operations end-to-end:
• Risk clearance.
• SOV ingestion.
• Loss run summarization.
• Policy issuance.
• Risk triage.
No hallucinations. No handwaving. Just real-world AI that executes – in production, at scale.
Join us to help shape the future of insurance through advanced technology!
We’re Looking For:
- Deep experience in GenAI, LLM fine-tuning, and multi-agent orchestration (LangChain, DSPy, or similar).
- 5+ years of proven experience in the field
- Strong ML/AI engineering background in both foundational modeling (NLP, transformers, RAG) and traditional ML.
- Solid Python engineering chops – you write production-ready code, not just notebooks.
- A startup mindset – curiosity, speed, and obsession with shipping things that matter.
- Bonus – Experience with insurance or document intelligence (SOVs, Loss Runs, ACORDs).
What You’ll Be Doing:
- Develop foundation-model-based pipelines to read and understand insurance documents (see the sketch after this list).
- Develop GenAI agents that handle real-time decision-making and workflow orchestration, and modular, composable agent architectures that interact with humans, APIs, and other agents.
- Work on auto-adaptive workflows that optimize around data quality, context, and risk signals.
Role - Python Developer
Location - Ahmedabad
Experience - 1 - 2 Years
Employment Type - Full-Time
Role Overview:
We are looking for a Python-focused AI/ML Engineer to develop, train, and deploy machine learning models and AI-driven solutions. The ideal candidate should have strong Python skills and hands-on experience with ML frameworks.
Key Responsibilities:
- Build and optimize ML/DL models using Python.
- Develop data pipelines and perform data preprocessing.
- Deploy models using MLOps tools and cloud platforms.
- Collaborate with cross-functional teams to deliver AI solutions.
- Conduct model testing, tuning, and performance monitoring (a minimal sketch follows this list).
Required Skills:
- Strong proficiency in Python, NumPy, Pandas, Scikit-learn.
- Experience with TensorFlow or PyTorch.
- Understanding of ML algorithms and model evaluation.
- Familiarity with REST APIs and Git.
- Basic knowledge of cloud services (AWS/Azure/GCP).
Preferred Skills:
- Experience with NLP or Computer Vision.
- Knowledge of Docker, Kubernetes, and MLflow.
AccioJob is conducting a Walk-In Hiring Drive with AntStack for the position of Python Backend Developer.
To apply, register and select your slot here: https://go.acciojob.com/WUWVgb
Required Skills: Git, SQL, JavaScript, REST APIs, Cloud Platforms, Python
Eligibility:
- Degree: B.Tech/BE, M.Tech/ME, BCA, MCA, BSc, MSc
- Branch: All
- Graduation Year: 2025
Work Details:
- Work Location: Bangalore (Onsite)
- CTC: 4.5 LPA to 6 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Bangalore Centre
Important Note: Bring your laptop & earphones for the test.
Further Rounds (for shortlisted candidates only):
- Resume Evaluation
- Technical Interview 1
- Technical Interview 2
- Technical Interview 3
- HR Discussion
Register here: https://go.acciojob.com/WUWVgb
About the Company
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position Summary
We are seeking a highly experienced and visionary Senior Engineering Manager – Inference Services to lead and scale our team responsible for building high-performance inference systems that power cutting-edge AI/ML products. This role requires a blend of strong technical expertise, leadership skills, and product-oriented thinking to drive innovation, scalability, and reliability of our inference infrastructure.
Key Responsibilities
Leadership & Strategy
- Lead, mentor, and grow a team of engineers focused on inference platforms, services, and optimizations.
- Define the long-term vision and roadmap for inference services in alignment with product and business goals.
- Partner with cross-functional leaders in ML, Product, Data Science, and Infrastructure to deliver robust, low-latency, and scalable inference solutions.
Engineering Excellence
- Architect and oversee development of distributed, production-grade inference systems ensuring scalability, efficiency, and reliability.
- Drive adoption of best practices for model deployment, monitoring, and continuous improvement of inference pipelines.
- Ensure high availability, cost optimization, and performance tuning of inference workloads across cloud and on-prem environments.
Innovation & Delivery
- Evaluate emerging technologies, frameworks, and hardware accelerators (GPUs, TPUs, etc.) to continuously improve inference efficiency.
- Champion automation and standardization of model deployment and lifecycle management.
- Balance short-term delivery with long-term architectural evolution.
People & Culture
- Build a strong engineering culture focused on collaboration, innovation, and accountability.
- Provide coaching, feedback, and career development opportunities to team members.
- Foster a growth mindset and data-driven decision-making.
Basic Qualifications
Experience
- 12+ years of software engineering experience with at least 4–5 years in engineering leadership roles.
- Proven track record of managing high-performing teams delivering large-scale distributed systems or ML platforms.
- Experience in building and operating inference systems, ML serving platforms, or real-time data systems at scale.
Technical Expertise
- Strong understanding of machine learning model deployment, serving, and optimization (batch & real-time).
- Proficiency in cloud-native technologies (Kubernetes, Docker, microservices architecture).
- Hands-on knowledge of inference frameworks (TensorFlow Serving, Triton Inference Server, TorchServe, etc.) and hardware accelerators.
- Solid background in programming languages (Python, Java, C++ or Go) and performance optimization techniques.
Preferred Qualifications
- Experience with MLOps platforms and end-to-end ML lifecycle management.
- Prior work in high-throughput, low-latency systems (ad-tech, search, recommendations, etc.).
- Knowledge of cost optimization strategies for large-scale inference workloads.
Tired of static UIs? Ready to build the next generation of WealthTech?
We're looking for a sharp, proactive React Developer who is obsessed with building high-performance, beautiful, and scalable frontends. This is your chance to own the user experience for an award-winning fintech platform, working closely with founders and senior engineers to deliver immediate, real-world financial impact.
About Us: The Next Generation of WealthTech
We're Cambridge Wealth, an award-winning force in mutual fund distribution and Fintech. We're not just moving money; we're redefining wealth management for everyone from retail investors to ultra-HNIs (including the NRI segment). Our brand is synonymous with excellence, backed by accolades from the BSE and top Mutual Fund houses.
If you thrive on building high-performance, scalable systems that drive real-world financial impact, you'll feel right at home. Join us in Pune to build the future of finance.
[Learn more: www.cambridgewealth.in]
The Role: Ship Fast, Design Smart
You will be a core frontend specialist, leveraging React to translate complex financial data and AI insights into intuitive, high-speed user interfaces.
Key Impact Areas:
React Development & Prototyping
- Rapid Prototyping: Design and execute quick, iterative front-end experiments in React to validate new features and market hypotheses, moving from concept to production in days, not months.
- UX/UI Ownership: Build scalable, modern, and pixel-perfect UIs that are responsive, fast, and keep the customer's experience top-of-mind at all times.
- Performance: Optimize React components and overall application performance for speed and stability in a data-heavy environment.
Product Execution & Collaboration
- Agile Catalyst: Actively participate in and optimize Agile sprints, ensuring clear technical milestones, backlog grooming, and maintaining a laser focus on preventing scope creep.
- Domain Alignment: Translate complex financial requirements and user stories into precise, actionable React components and seamless front-end workflows.
- Problem Solver: Proactively identify and resolve technical and process bottlenecks, acting as the ultimate problem solver for the engineering and product teams.
Your Tech Stack & Experience
The Must-Haves
- Mindset: A verifiable track record as a proactive First Principle Problem Solver with an intense Passion to Ship production-ready features frequently.
- Customer Empathy: Keeps the customer's experience in mind at all times.
- Systems Thinker: Diagnoses and solves problems by viewing the organization as an interconnected system to anticipate broad impacts and develop holistic, strategic solutions.
- Frontend Expertise: 2+ years of professional experience with deep expertise in ReactJS (Hooks, state management, routing).
- Backend Knowledge: Solid understanding of API integration (REST/GraphQL) and data fetching best practices.
Added Advantage (Nice to Have Skills!)
- Proficiency in Node.js
- Experience with Strapi (or similar headless CMS)
- Python (Django/Flask)
Apply now to join our award-winning, forward-thinking team!
Our High-Velocity Hiring Process:
- You Apply & Engage: Quick application and a few insightful questions. (5 min)
- Online Tech Challenge: Prove your tech mettle. (90-100 min)
- People Sync: A focused call to understand if there is cultural and value alignment. (30 min)
- Deep Dive Technical Interview: Discuss architecture and projects with our senior engineers. (1 hour)
- Founder's Vision Interview: Meet the leadership and discuss your impact. (1 hour)
- Offer & Onboarding: Reference and BGV check follow the successful offer.
Question for You: What are you building right now that you're most proud of?
Job Description -
Position: Senior Data Engineer (Azure)
Experience - 6+ Years
Mode - Hybrid
Location - Gurgaon, Pune, Jaipur, Bangalore, Bhopal
Key Responsibilities:
- Data Processing on Azure: Azure Data Factory, Streaming Analytics, Event Hubs, Azure Databricks, Data Migration Service, Data Pipeline.
- Provisioning, configuring, and developing Azure solutions (ADB, ADF, ADW, etc.).
- Design and implement scalable data models and migration strategies.
- Work on distributed big data batch or streaming pipelines (Kafka or similar).
- Develop data integration and transformation solutions for structured and unstructured data.
- Collaborate with cross-functional teams for performance tuning and optimization.
- Monitor data workflows and ensure compliance with data governance and quality standards.
- Contribute to continuous improvement through automation and DevOps practices.
Required Skills & Experience:
- 6–10 years of experience as a Data Engineer.
- Strong proficiency in Azure Databricks, PySpark, Python, SQL, and Azure Data Factory.
- Experience in Data Modelling, Data Migration, and Data Warehousing.
- Good understanding of database structure principles and schema design.
- Hands-on experience using MS SQL Server, Oracle, or similar RDBMS platforms.
- Experience in DevOps tools (Azure DevOps, Jenkins, Airflow, Azure Monitor) – good to have.
- Knowledge of distributed data processing and real-time streaming (Kafka/Event Hub).
- Familiarity with visualization tools like Power BI or Tableau.
- Strong analytical, problem-solving, and debugging skills.
- Self-motivated, detail-oriented, and capable of managing priorities effectively.
Position: QA Engineer – Machine Learning Systems (5 - 7 years)
Location: Remote (Company in Mumbai)
Company: Big Rattle Technologies Private Limited
Immediate Joiners only.
Summary:
The QA Engineer will own quality assurance across the ML lifecycle—from raw data validation through feature engineering checks, model training/evaluation verification, batch prediction/optimization validation, and end-to-end (E2E) workflow testing. The role is hands-on with Python automation, data profiling, and pipeline test harnesses in Azure ML and Azure DevOps. Success means provably correct data, models, and outputs at production scale and cadence.
Key Responsibilities:
Test Strategy & Governance
- Define an ML-specific Test Strategy covering data quality KPIs, feature consistency checks, model acceptance gates (metrics + guardrails), and E2E run acceptance (timeliness, completeness, integrity).
- Establish versioned test datasets & golden baselines for repeatable regression of features, models, and optimizers.
Data Quality & Transformation
- Validate raw data extracts and landed data lake data: schema/contract checks, null/outlier thresholds, time-window completeness, duplicate detection, site/material coverage.
- Validate transformed/feature datasets: deterministic feature generation, leakage detection, drift vs. historical distributions, feature parity across runs (hash or statistical similarity tests).
- Implement automated data quality checks (e.g., Great Expectations/pytest + Pandas/SQL) executed in CI and AML pipelines (a minimal sketch follows).
Model Training & Evaluation
- Verify training inputs (splits, windowing, target leakage prevention) and hyperparameter configs per site/cluster.
- Automate metric verification (e.g., MAPE/MAE/RMSE, uplift vs. last model, stability tests) with acceptance thresholds and champion/challenger logic (see the sketch after this list).
- Validate feature importance stability and sensitivity/elasticity sanity checks (price/volume monotonicity where applicable).
- Gate model registration/promotion in AML based on signed test artifacts and reproducible metrics.
Predictions, Optimization & Guardrails
- Validate batch predictions: result shapes, coverage, latency, and failure handling.
- Test model optimization outputs and enforced guardrails: detect violations and prove idempotent writes to DB.
- Verify API push to third-party systems (idempotency keys, retry/backoff, delivery receipts); a minimal sketch follows.
Pipelines & E2E
- Build pipeline test harnesses for AML pipelines (data-gen nightly, training weekly, prediction/optimization), including orchestrated synthetic runs and fault injection (missing slice, late competitor data, SB backlog).
- Run E2E tests from raw data store -> ADLS -> AML -> RDBMS -> APIM/Frontend, asserting freshness SLOs and audit event completeness (Event Hubs -> ADLS immutable).
Automation & Tooling
- Develop Python-based automated tests (pytest) for data checks, model metrics, and API contracts; integrate with Azure DevOps (pipelines, badges, gates).
- Implement data-driven test runners (parameterized by site/material/model-version) and store signed test artifacts alongside models in AML Registry.
- Create synthetic test data generators and golden fixtures to cover edge cases (price gaps, competitor shocks, cold starts).
Reporting & Quality Ops
- Publish weekly test reports and go/no-go recommendations for promotions; maintain a defect taxonomy (data vs. model vs. serving vs. optimization).
- Contribute to SLI/SLO dashboards (prediction timeliness, queue/DLQ, push success, data drift) used for release gates.
Required Skills (hands-on experience in the following):
- Python automation (pytest, pandas, NumPy), SQL (PostgreSQL/Snowflake), and CI/CD (Azure DevOps) for fully automated ML QA.
- Strong grasp of ML validation: leakage checks, proper splits, metric selection (MAE/MAPE/RMSE), drift detection, sensitivity/elasticity sanity checks.
- Experience testing AML pipelines (pipelines/jobs/components) and message-driven integrations (Service Bus/Event Hubs).
- API test skills (FastAPI/OpenAPI, contract tests, Postman/pytest-httpx) plus idempotency and retry patterns.
- Familiar with feature stores/feature engineering concepts and reproducibility.
- Solid understanding of observability (App Insights/Log Analytics) and auditability requirements.
Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.
- 5–7+ years in QA with 3+ years focused on ML/Data systems (data pipelines + model validation).
- Certification in Azure Data or ML Engineer Associate is a plus.
Why should you join Big Rattle?
Big Rattle Technologies specializes in AI/ML Products and Solutions as well as Mobile and Web Application Development. Our clients include Fortune 500 companies. Over the past 13 years, we have delivered multiple projects for international and Indian clients from various industries like FMCG, Banking and Finance, Automobiles, Ecommerce, etc. We also specialise in Product Development for our clients.
Big Rattle Technologies Private Limited is ISO 27001:2022 certified and CyberGRX certified.
What We Offer:
- Opportunity to work on diverse projects for Fortune 500 clients.
- Competitive salary and performance-based growth.
- Dynamic, collaborative, and growth-oriented work environment.
- Direct impact on product quality and client satisfaction.
- 5-day hybrid work week.
- Certification reimbursement.
- Healthcare coverage.
How to Apply:
Interested candidates are invited to submit their resume detailing their experience. Please describe your work experience and the kinds of projects you have worked on, highlighting your contributions and accomplishments on those projects.
JD for Cloud engineer
Job Summary:
We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancer, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.
You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
- Architect, deploy, and maintain GCP cloud resources via Terraform or other automation.
- Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
- Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
- Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
2. Kubernetes & Container Orchestration
- Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
- Work with Helm charts for microservices deployments.
- Automate scaling, rolling updates, and zero-downtime deployments.
3. Serverless & Compute Services
- Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
- Optimize containerized applications running on Cloud Run for cost efficiency and performance.
4. CI/CD & DevOps Automation
- Design, implement, and manage CI/CD pipelines using Azure DevOps.
- Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.
- Integrate security and compliance checks into the DevOps workflow (DevSecOps).
Required Skills & Qualifications:
✔ Experience: 8+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.
Job Title: Back-End Developer
Department: IT
Experience: 3.5 Years+
Location: Mohali
Shift: Rotational Shifts
Employment Type: Full-time
Job Overview:
The Senior Backend Developer will be responsible for the full lifecycle of our backend services, focusing on architecture, security, and performance.
Key Responsibilities:
- Core API Development & Logic: Design, build, and maintain robust, scalable, and secure RESTful/GraphQL APIs using Node.js and Python to serve both internal and external consumers.
- System Architecture: Lead the design and implementation of application components, focusing on microservices architecture, ensuring services are loosely coupled and highly available.
- Database Management: Expertly manage and optimize complex database schemas and queries for both SQL (PostgreSQL/MySQL) and NoSQL (MongoDB/Redis) systems, ensuring data integrity and high performance.
- Performance and Scalability: Identify and resolve performance bottlenecks, implement caching strategies, and optimize server-side code and architecture for maximum speed and scalability.
- Code Quality and Standards: Write clean, efficient, well-documented, and testable code. Conduct thorough code reviews and mentor junior developers on best practices, design patterns, and coding standards.
- Security Implementation: Implement security and data protection settings, including authentication, authorization, and encryption protocols.
- DevOps and Deployment: Work with DevOps pipelines (CI/CD, Docker, Kubernetes) for smooth and automated deployment of services to cloud platforms (AWS/Azure/GCP).
- Collaboration: Work closely with cross-functional teams (Frontend, Product, QA) to understand requirements and translate them into technical specifications and deliver high-quality features.
Required Skills & Qualifications:
- Programming Expertise: Deep, demonstrable expertise in Node.js and Python.
- Node.js: Strong command of asynchronous programming, event loops, and related frameworks (e.g., Express.js).
- Python: Extensive experience with web frameworks (e.g., Django, Django Rest Framework, Flask).
- API Design: Proven ability to design and implement highly performant and secure RESTful APIs. Experience with GraphQL is a strong advantage.
- Databases: Expert knowledge of both Relational (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Redis, Cassandra). Proficiency in query optimization and schema design.
- Cloud & Tools: Experience with cloud computing platforms (e.g., AWS, Azure, or GCP). Proficiency with Docker and Git.
- Testing: Strong experience with unit, integration, and end-to-end testing frameworks.
Soft Skills & Leadership
- Problem-Solving: Excellent analytical and problem-solving skills with a track record of troubleshooting complex production issues.
- Communication: Strong verbal and written communication skills to articulate technical decisions and collaborate effectively with diverse teams.
- Ownership: Proven ability to take ownership of complex projects, drive them to completion, and manage technical debt.
Lead/Senior Data Scientist - Generative AI/LLM, Healthcare
We are seeking a highly experienced and innovative Lead or Senior Data Scientist to pioneer the application of cutting-edge Generative AI and Large Language Models (LLMs) within the dynamic healthcare and life sciences domain. This is a remote opportunity requiring a minimum of 8+ years of professional experience.
The ideal candidate will be a technical leader responsible for driving the development and deployment of next-generation AI solutions that transform complex, real-world clinical data—particularly in areas like Oncology—into actionable insights. This role focuses on enhancing operational efficiencies and improving patient access and clinical outcomes across global health ecosystems.
Core Responsibilities:
- Lead the research, design, and deployment of novel Generative AI and LLM solutions for complex problems such as automated clinical document summarization, real-world evidence (RWE) generation, and patient journey analysis.
- Architect and optimize data pipelines and ML Ops strategies to handle large, diverse, and unstructured clinical datasets, ensuring model robustness and scalability in a regulated environment.
- Serve as a subject matter expert in Generative AI/LLMs, providing technical leadership, mentoring junior data scientists, and driving best practices within the team.
Required Experience:
- 8+ years of progressive experience in Data Science, Machine Learning, or AI Engineering.
- Deep technical expertise in developing, fine-tuning, and deploying Large Language Models (LLMs) and other Generative AI models in production environments.
- Strong domain knowledge in the Healthcare or Life Sciences industry, with direct experience utilizing clinical, claims, or Real-World Evidence (RWE) data.
- Proven ability to lead complex, cross-functional projects and translate business challenges into defined AI/ML solutions.
Responsibilities:
• Develop and maintain SQL and NoSQL databases, ensuring high performance, scalability, and reliability.
• Collaborate with the API team and Data Science team to build robust data pipelines and automations.
• Work closely with stakeholders to understand database requirements and provide technical solutions.
• Optimize database queries and performance tuning to enhance overall system efficiency.
• Implement and maintain data security measures, including access controls and encryption.
• Monitor database systems and troubleshoot issues proactively to ensure uninterrupted service.
• Develop and enforce data quality standards and processes to maintain data integrity.
• Create and maintain documentation for database architecture, processes, and procedures.
• Stay updated with the latest database technologies and best practices to drive continuous improvement.
• Expertise in SQL queries and stored procedures, with the ability to optimize and fine-tune complex queries for performance and efficiency.
• Experience with monitoring and visualization tools such as Grafana to monitor database performance and health.
Requirements:
• Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
• 2+ years of experience in data engineering, with a focus on large-scale data systems.
• Proven experience designing data models and access patterns across SQL and NoSQL ecosystems.
• Hands-on experience with technologies like SQL, DynamoDB, S3, and Lambda services.
• Proficient in SQL stored procedures with extensive expertise in MySQL schema design, query optimization, and resolvers, along with hands-on experience in building and maintaining data warehouses.
• Strong programming skills in Python or JavaScript, with the ability to write efficient, maintainable code.
• Familiarity with observability stacks (Prometheus, Grafana, OpenTelemetry) and debugging production bottlenecks.
• Understanding of cloud infrastructure (preferably AWS), including networking, IAM, and cost optimization.
• Excellent communication and collaboration skills to influence cross-functional technical decisions.
AI Engineer – Supaboard.ai
Location: Bengaluru, India (On-site, 5 days a week)
Experience Level: 2–5 years
Compensation: ₹8 – ₹16 LPA
Tech Stack: Python, TypeScript, OpenAI/Anthropic APIs, Hugging Face, Vector DBs
Employment Type: Full-time, In-office
About Supaboard.ai
Supaboard.ai is building an intelligent data analytics platform powered by modern AI systems, enabling teams to transform their raw data into dashboards, insights, and automations—instantly.
We combine analytics, automation, and AI into a single powerful engine used by fast-growing teams.
We’re looking for an AI Engineer who loves working with models, fine-tuning, LLM orchestration, and building AI-driven features that scale.
Key Responsibilities
- Fine-tune open-source LLMs (e.g., Llama, Mistral, T5, Qwen) for internal use cases like text classification, summarization, extraction, and agent workflows.
- Develop, test, and optimize prompt engineering strategies for production use-cases.
- Build and maintain pipelines for training, evaluating, and deploying custom AI models.
- Integrate models with Supaboard’s backend using Python, TypeScript, and cloud-based AI platforms.
- Work with libraries like Hugging Face Transformers, LangChain, OpenAI, Gemini, Anthropic, and vector databases (Pinecone/Weaviate); a retrieval sketch follows this list.
- Create high-accuracy evaluation datasets and design automated evaluation harnesses.
- Optimize performance, latency, and reliability of AI-powered features in production.
- Collaborate with product, engineering, and data teams to design and implement AI-driven product features.
Requirements
- Strong proficiency in Python (must) and basic experience with TypeScript (preferred).
- Solid understanding of LLMs, embeddings, tokenization, model architectures, and NLP pipelines.
- Prior experience fine-tuning or training open-source models using PyTorch, Hugging Face, or similar frameworks.
- Experience calling and orchestrating external LLM APIs (OpenAI, Anthropic, Google Gemini, etc.).
- Ability to design prompts, tune them, and create deterministic and reliable chains/flows.
- Hands-on experience with vector databases (Pinecone, Weaviate, Chroma) and retrieval pipelines.
- Familiarity with cloud environments (AWS/GCP) and deploying AI workloads.
- Good understanding of evaluation metrics and experiment tracking (W&B or similar).
- Strong debugging skills and an ownership-driven mindset.
Nice to Have
- Experience building agents, tool-calling workflows, and multi-model pipelines.
- Knowledge of distributed training, quantization (GGUF/GGML), or optimization techniques (LoRA, QLoRA, PEFT).
- Experience building AI-based features for data analytics or SaaS products.
- Familiarity with FastAPI or Node-based backend services.
Why Supaboard.ai?
- Build a core part of India’s next-gen AI analytics platform.
- Work with cutting-edge open-source AI models and real-world production workloads.
- Massive ownership, high impact, and a fast-paced startup environment.
- A culture that rewards learning, curiosity, and technical growth.
We are looking for experienced Data Engineers who can independently build, optimize, and manage scalable data pipelines and data platforms. In this role, you will collaborate with clients and internal teams to deliver robust data solutions that support analytics, AI/ML, and operational systems. You will also mentor junior engineers and bring strong engineering discipline to our data engagements.
Key Responsibilities
- Design, build, and optimize large-scale, distributed batch and streaming data pipelines.
- Implement scalable data models, data warehouses/lakehouses, and data lakes to support analytics and decision-making.
- Work closely with cross-functional stakeholders to translate business requirements into technical data solutions.
- Drive performance tuning, monitoring, and reliability of data pipelines.
- Write clean, modular, production-ready code with proper documentation and testing.
- Contribute to architecture discussions, tool evaluations, and platform setup.
- Mentor junior engineers and participate in code/design reviews.
Must-Have Skills
- Strong programming skills in Python (experience with Java is good to have).
- Advanced SQL expertise with ability to work on complex queries and optimizations.
- Deep understanding of data engineering concepts such as ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
- Experience with distributed processing frameworks like Apache Spark, Flink, or similar.
- Experience with Snowflake (preferred).
- Hands-on experience building pipelines using orchestration tools such as Airflow or similar.
- Familiarity with CI/CD, version control (Git), and modern development practices.
- Ability to debug, optimize, and scale data pipelines in real-world environments.
Good to Have
- Experience with major cloud platforms (AWS preferred; GCP/Azure also welcome).
- Exposure to Databricks, dbt, or similar platforms.
- Understanding of data governance, data quality frameworks, and observability.
- Certifications in AWS (Data Analytics / Solutions Architect) or Databricks.
Other Expectations
- Comfortable working in fast-paced, client-facing environments.
- Strong analytical and problem-solving skills with excellent attention to detail.
- Ability to adapt across tools, stacks, and business domains.
- Willingness to travel within India for short/medium-term client engagements as needed.
What We’re Looking For
- 3-5 years of Data Science & ML experience in consumer internet / B2C products.
- Degree in Statistics, Computer Science, or Engineering (or certification in Data Science).
- Machine Learning wizardry: recommender systems, NLP, user profiling, image processing, anomaly detection.
- Statistical chops: finding meaningful insights in large data sets.
- Programming ninja: R, Python, SQL + hands-on with NumPy, Pandas, scikit-learn, Keras, TensorFlow (or similar).
- Visualization skills: Redshift, Tableau, Looker, or similar.
- A strong problem-solver with curiosity hardwired into your DNA.
Brownie Points:
- Experience with big data platforms: Hadoop, Spark, Hive, Pig.
- Extra love if you’ve played with BI tools like Tableau or Looker.
- 5+ years of experience
- Strong in Python, Selenium-based testing, Scripting
- Knowledge of REST APIs and GraphQL APIs
- Working knowledge of Kubernetes, Cloud
- Working understanding of large-scale distributed systems architecture and data-driven development
- Good debugging and communication skills
About Asha Health
Asha Health helps medical practices launch their own AI clinics. We're backed by Y Combinator, General Catalyst, 186 Ventures, Reach Capital and many more. We recently raised an oversubscribed seed round from some of the best investors in Silicon Valley. Our team includes AI product leaders from companies like Google, physician executives from major health systems, and more.
About the Role
We're looking for a Software Engineer Intern to join our engineering team in our Bangalore office (currently 5 teammates). We're looking for someone who is an all-rounder, but has particularly exceptional backend engineering skills.
Our ideal candidate has built AI agents at the orchestration layer level and leveraged clever engineering techniques to improve latency & reliability for complex workflows.
You will be working alongside senior engineers on our team who will mentor you and coach you; however we expect strong backend engineering skills coming in.
Responsibilities
In this role, you will have the opportunity to build state-of-the-art AI agents, and learn what it takes to build an industry-leading multimodal, multi-agent suite.
You'll wear many hats. Your responsibilities will fall into 3 categories:
AI Engineering
- Develop AI agents with a high bar for reliability and performance.
- Build SOTA LLM-powered tools for providers, practices, and patients.
- Architect our data annotation, fine tuning, and RLHF workflows.
- Live on the bleeding edge, ensuring that every week we have the most cutting-edge agents as the industry evolves.
Full-Stack Engineering (80% backend, 20% frontend)
- Lead the team in designing scalable architecture to support performant web applications.
- Develop features end-to-end for our web applications (TypeScript, Node.js, Python, etc.).
Requirements
You do not need AI experience to apply for this role. While we prefer candidates with some AI experience, we have hired engineers who had none but demonstrated they are very fast learners.
We prefer candidates who have worked as a founding engineer at an early stage startup (Seed or Preseed) or a Senior Software Engineer at a Series A or B startup.
Perks of Working at Asha Health
#1 Build cool stuff: work on the latest, cutting-edge tech (build frontier AI agents with technologies that evolve every 2 weeks).
#2 Surround yourself with top talent: our team includes senior AI product leaders from companies like Google, experienced physician executives, and top 1% engineering talent (the best of the best).
#3 Rocketship trajectory: we get more customer interest than we have time to onboard, it's a good problem to have :)
Our Mission
To make video as accessible to machines as text and voice are today.
At lookup, we believe the world's most valuable asset is trapped. Video is everywhere, but it's unsearchable—a black box of insight that no one can open, or at least open affordably. We’re changing that. We're building the search engine for the visual world, so anyone can find or do anything with video just by asking.
Text is queryable. Voice is transcribed. Video, the largest and richest data source of all, is still a black box. A computer can't understand it, and so its value remains trapped.
Our mission at lookup is to fix this.
About the Role
We are looking for founding Backend Engineers to build a highly performant, reliable, and scalable API platform that makes enterprise video knowledge readily available for video search, summarization, and natural‑language Q&A. You will partner closely with our ML team working on vision‑language models to productionize research and deliver fast, trustworthy APIs for customers.
Examples of technical challenges you will work on include: distributed video storage, a unified application framework and data model for indexing large video libraries, low‑latency clip retrieval, vector search at scale, and end‑to‑end build, test, deploy, and observability in cloud environments.
What You’ll Do:
- Design and build robust backend services and APIs (REST, gRPC) for vector search, video summarization, and video Q&A.
- Own API performance and reliability, including low‑latency retrieval, pagination, rate limiting, and backwards‑compatible versioning (see the sketch after this list).
- Design schemas and tune queries in Postgres, and integrate with unstructured storage.
- Implement observability across metrics, logs, and traces. Set error budgets and SLOs.
- Write clear design docs and ship high‑quality, well‑tested code.
- Collaborate with ML engineers to integrate and productionize VLMs and retrieval pipelines.
- Take ownership of architecture from inception to production launch.
Who You Are:
- 3+ years of professional experience in backend development.
- Proven experience building and scaling polished WebSocket, gRPC, and REST APIs.
- Exposure to distributed systems and container orchestration (Docker and Kubernetes).
- Hands‑on experience with AWS.
- Strong knowledge of SQL (Postgres) and NoSQL (e.g., Cassandra), including schema design, query optimization, and scaling.
- Familiarity with our stack is a plus, but not mandatory: Python (FastAPI), Celery, Kafka, Postgres, Redis, Weaviate, React.
- Ability to diagnose complex issues, identify root causes, and implement effective fixes.
- Comfortable working in a fast‑paced startup environment.
Nice to have:
- Hands-on work with LLM agents, vector embeddings, or RAG applications.
- Building video streaming pipelines and storage systems at scale (FFmpeg, RTSP, WebRTC).
- Proficiency with modern frontend frameworks (React, TypeScript, Tailwind CSS) and responsive UI design.
Location & Culture
- Full-time, in-office role in Bangalore (we’re building fast and hands-on).
- Must be comfortable with a high-paced environment and collaboration across PST time zones for our US customers and investors.
- Expect startup speed — daily founder syncs, rapid design-to-prototype cycles, and a culture of deep ownership.
Why You’ll Love This Role
- Work on the frontier of video understanding and real-world AI — products that can redefine trust and automation.
- Build core APIs that make video queryable and power real customer use.
- Own systems end to end: performance, reliability, and developer experience.
- Work closely with founders and collaborate in person in Bangalore.
- Competitive salary with meaningful early equity.
About the Role:
We are building cutting-edge AI products designed for enterprise-scale applications and are looking for a Senior Python Developer to join our core engineering team. You will be responsible for designing and delivering robust, scalable backend systems that power our advanced AI solutions.
Key Responsibilities:
- Design, develop, and maintain scalable Python-based backend applications and services.
- Collaborate with AI/ML teams to integrate machine learning models into production environments.
- Optimize applications for performance, reliability, and security.
- Write clean, maintainable, and testable code following best practices.
- Work with cross-functional teams including Data Science, DevOps, and UI/UX to ensure seamless delivery.
- Participate in code reviews, architecture discussions, and technical decision-making.
- Troubleshoot, debug, and upgrade existing systems.
Required Skills & Experience:
- Minimum 5 years of professional Python development experience.
- Strong expertise in Django / Flask / FastAPI.
- Hands-on experience with REST APIs, microservices, and event-driven architecture.
- Solid understanding of databases (PostgreSQL, MySQL, MongoDB, Redis).
- Familiarity with cloud platforms (AWS / Azure / GCP) and CI/CD pipelines.
- Experience with AI/ML pipeline integration is a strong plus.
- Strong problem-solving and debugging skills.
- Excellent communication skills and ability to work in a collaborative environment.
Good to Have:
- Experience with Docker, Kubernetes.
- Exposure to message brokers (RabbitMQ, Kafka).
- Knowledge of data engineering tools (Airflow, Spark).
- Familiarity with Neo4j or other graph databases.
You will be responsible for building a highly scalable, extensible, and robust application. This position reports to the Engineering Manager.
Responsibilities:
- Align Sigmoid with key Client initiatives
- Interface daily with customers across leading Fortune 500 companies to understand strategic requirements
- Ability to understand business requirements and tie them to technology solutions
- Open to working from the client location as per the demands of the project/customer.
- Facilitate in Technical Aspects
- Develop and evolve highly scalable and fault-tolerant distributed components using Java technologies.
- Excellent experience in application development and support, integration development, and quality assurance.
- Provide technical leadership and manage it on a day-to-day basis.
- Stay up-to-date on the latest technology to ensure the greatest ROI for customer & Sigmoid
- Hands-on coder with a good understanding of enterprise-level code.
- Design and implement APIs, abstractions and integration patterns to solve challenging distributed computing problems
- Experience in defining technical requirements, data extraction, data transformation, automating jobs, productionizing jobs, and exploring new big data technologies within a Parallel Processing environment
Culture:
- Must be a strategic thinker with the ability to think unconventionally / out-of-the-box.
- Analytical and solution-driven orientation.
- Raw intellect, talent, and energy are critical.
- Entrepreneurial and Agile: understands the demands of a private, high-growth company.
- Ability to be both a leader and a hands-on "doer".
Qualifications:
- 3-5 year track record of relevant work experience; a degree in Computer Science or a related technical discipline is required.
- Experience in development of enterprise-scale applications, capable of developing frameworks, design patterns, etc. Should be able to understand and tackle technical challenges and propose comprehensive solutions.
- Experience with functional and object-oriented programming; Java (preferred) or Python is a must.
- Hands-on knowledge of MapReduce, Hadoop, PySpark, HBase, and Elasticsearch.
- Development and support experience in Big Data domain
- Experience with database modelling and development, data mining and warehousing.
- Unit, Integration and User Acceptance Testing.
- Effective communication skills (both written and verbal)
- Ability to collaborate with a diverse set of engineers, data scientists and product managers
- Comfort in a fast-paced start-up environment.
Preferred Qualification:
- Experience in Agile methodology.
- Proficient with SQL and its variation among popular databases.
- Experience working with large, complex data sets from a variety of sources.
Job Title: Python Developer
Experience Level: 4+ years
Job Summary:
We are seeking a skilled Python Developer with strong experience in developing and maintaining APIs. Familiarity with 2D and 3D geometry concepts is a strong plus. The ideal candidate will be passionate about clean code, scalable systems, and solving complex geometric and computational problems.
Key Responsibilities:
· Design, develop, and maintain robust and scalable APIs using Python.
· Work with geometric data structures and algorithms (2D/3D).
· Collaborate with cross-functional teams including front-end developers, designers, and product managers.
· Optimize code for performance and scalability.
· Write unit and integration tests to ensure code quality.
· Participate in code reviews and contribute to best practices.
Required Skills:
· Strong proficiency in Python.
· Experience with RESTful API development (e.g., Flask, FastAPI, Django REST Framework).
· Good understanding of 2D/3D geometry, computational geometry, or CAD-related concepts.
· Familiarity with libraries such as NumPy, SciPy, Shapely, Open3D, or PyMesh (a Shapely sketch follows this list).
· Experience with version control systems (e.g., Git).
· Strong problem-solving and analytical skills.
Good to Have:
· Experience with 3D visualization tools or libraries (e.g., VTK, Blender API, Three.js via Python bindings).
· Knowledge of mathematical modeling or simulation.
· Exposure to cloud platforms (AWS, Azure, GCP).
· Familiarity with CI/CD pipelines.
Education:
· Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field.
Domain - Credit risk / Fintech
Roles and Responsibilities:
1. Development, validation, and monitoring of Application and Behaviour scorecards for the Retail loan portfolio
2. Improvement of collection efficiency through advanced analytics
3. Development and deployment of fraud scorecards
4. Upsell / Cross-sell strategy implementation using analytics
5. Create modern data pipelines and processing using AWS PaaS components (Glue, SageMaker Studio, etc.)
6. Deploying software using CI/CD tools such as Azure DevOps, Jenkins, etc.
7. Experience with API tools such as REST, Swagger, and Postman
8. Model deployment in AWS and management of the production environment
9. Team player who can work with cross-functional teams to gather data and derive insights
Mandatory Technical Skill Set:
1. Previous experience in scorecard development and credit risk strategy development
2. Python and Jenkins
3. Logistic regression, scorecards, ML, and neural networks
4. Statistical analysis and A/B testing
5. AWS SageMaker, S3, EC2, Docker
6. REST API, Swagger, and Postman
7. Excel
8. SQL
9. Visualisation tools such as Redash / Grafana
10. Bitbucket, GitHub, and other versioning tools
The Software Engineer – SRE will be responsible for building and maintaining highly reliable, scalable, and secure infrastructure that powers the Albert platform. This role focuses on automation, observability, and operational excellence to ensure seamless deployment, performance, and reliability of core platform services.
Key Responsibilities
- Act as a passionate representative of the Albert product and brand.
- Collaborate with Product Engineering and other stakeholders to plan and deliver core platform capabilities that enable scalability, reliability, and developer productivity.
- Work with the Site Reliability Engineering (SRE) team on shared full-stack ownership of a collection of services and/or technology areas.
- Understand the end-to-end configuration, technical dependencies, and overall behavioral characteristics of all microservices.
- Design and deliver the mission-critical stack, focusing on security, resiliency, scale, and performance.
- Take ownership of end-to-end performance and operability.
- Apply strong knowledge of automation and orchestration principles.
- Serve as the ultimate escalation point for complex or critical issues not yet documented as Standard Operating Procedures (SOPs).
- Troubleshoot and define mitigations using a deep understanding of service topology and dependencies.
Requirements
- Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
- 2+ years of software engineering experience, with at least 1 year in an SRE role focused on automation.
- Strong experience in Infrastructure as Code (IAC), preferably using Terraform.
- Proficiency in Python or Node.js, with experience designing RESTful APIs and working in microservices architecture.
- Solid expertise in AWS cloud infrastructure and platform technologies including APIs, distributed systems, and microservices.
- Hands-on experience with observability stacks, including centralized log management, metrics, and tracing.
- Familiarity with CI/CD tools (e.g., CircleCI) and performance testing tools like K6.
- Passion for bringing automation and standardization to engineering operations.
- Ability to build high-performance APIs with low latency (<200ms).
- Ability to work in a fast-paced environment, learning from peers and leaders.
- Demonstrated ability to mentor other engineers and contribute to team growth, including participation in recruiting activities.
Good to Have
- Experience with Kubernetes and container orchestration.
- Familiarity with observability tools such as Prometheus, Grafana, OpenTelemetry, or Datadog.
- Experience building Internal Developer Platforms (IDPs) or reusable frameworks for engineering teams.
- Exposure to ML infrastructure or data engineering workflows.
- Experience working in compliance-heavy environments (e.g., SOC2, HIPAA).