50+ Remote Python Jobs in India
Apply to 50+ Remote Python Jobs on CutShort.io. Find your next job effortlessly. Browse Python Jobs and apply today!
The Senior Software Developer is responsible for development of CFRA’s report generation framework using a modern technology stack: Python on AWS cloud infrastructure, SQL, and Web technologies. This is an opportunity to make an impact on both the team and the organization by being part of the design and development of a new customer-facing report generation framework that will serve as the foundation for all future report development at CFRA.
The ideal candidate has a passion for solving business problems with technology and can effectively communicate business and technical needs to stakeholders. We are looking for candidates that value collaboration with colleagues and having an immediate, tangible impact for a leading global independent financial insights and data company.
Key Responsibilities
- Analyst Workflows: Design and develop CFRA’s integrated content publishing platform, built on a proprietary third-party editorial and publishing system for digital publishing.
- Designing and Developing APIs: Design and develop robust, scalable, and secure APIs on AWS, considering factors like performance, reliability, and cost-efficiency (an illustrative sketch follows this list).
- AWS Service Integration: Integrate APIs with various AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, AWS Glue, and others, to build comprehensive and efficient solutions.
- Performance Optimization: Identify and implement optimizations to improve performance, scalability, and efficiency, leveraging AWS services and tools.
- Security and Compliance: Ensure APIs are developed following best security practices, including authentication, authorization, encryption, and compliance with relevant standards and regulations.
- Monitoring and Logging: Implement monitoring and logging solutions for APIs using AWS CloudWatch, AWS X-Ray, or similar tools, to ensure availability, performance, and reliability.
- Continuous Integration and Deployment (CI/CD): Establish and maintain CI/CD pipelines for API development, automating testing, deployment, and monitoring processes on AWS.
- Documentation and Training: Create and maintain comprehensive documentation for internal and external users, and provide training and support to developers and stakeholders.
- Team Collaboration: Collaborate effectively with cross-functional teams, including product managers, designers, and other developers, to deliver high-quality solutions that meet business requirements.
- Problem Solving: Lead troubleshooting efforts, identifying root causes and implementing solutions to ensure system stability and performance.
- Stay Updated: Stay updated with the latest trends, tools, and technologies related to development on AWS, and continuously improve your skills and knowledge.
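For a concrete flavor of the serverless API work described in this list, here is a minimal, illustrative sketch of a Python Lambda handler behind Amazon API Gateway’s proxy integration. The table name, route shape, and report-lookup logic are hypothetical placeholders, not CFRA’s actual implementation.

```python
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("reports")  # hypothetical table name


def lambda_handler(event, context):
    """Handle GET /reports/{report_id} requests proxied by API Gateway."""
    report_id = (event.get("pathParameters") or {}).get("report_id")
    if not report_id:
        return {"statusCode": 400, "body": json.dumps({"error": "report_id is required"})}

    item = table.get_item(Key={"report_id": report_id}).get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        # default=str keeps DynamoDB Decimal values JSON-serializable
        "body": json.dumps(item, default=str),
    }
```

API Gateway’s Lambda proxy integration supplies the `pathParameters` field used here; concerns like authorization, caching, and throttling would be configured on the gateway itself.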
Desired Skills and Experience
- Development: 5+ years of extensive experience in designing, developing, and deploying applications using modern technologies, with a focus on scalability, performance, and security.
- AWS Services: Proficiency in using AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, Amazon SES, Amazon RDS, Amazon DynamoDB, and others, to build and deploy API solutions.
- Programming Languages: Proficiency in programming languages commonly used for development, such as Python or Node.js, as well as experience with serverless frameworks like AWS SAM.
- Architecture Design: Ability to design scalable and resilient API architectures using microservices, serverless, or other modern architectural patterns, considering factors like performance, reliability, and cost-efficiency.
- Security: Strong understanding of security principles and best practices, including authentication, authorization, encryption, and compliance with standards like OAuth, OpenID Connect, and AWS IAM.
- DevOps Practices: Familiarity with DevOps practices and tools, including CI/CD pipelines, infrastructure as code (IaC), and automated testing, to ensure efficient and reliable deployment on AWS.
- Problem-solving Skills: Excellent problem-solving skills, with the ability to troubleshoot complex issues, identify root causes, and implement effective solutions to ensure system stability and performance.
- Communication Skills: Strong communication skills, with the ability to effectively communicate technical concepts to both technical and non-technical stakeholders, and collaborate with cross-functional teams.
- Agile Methodologies: Experience working in Agile development environments, following practices like Scrum or Kanban, and ability to adapt to changing requirements and priorities.
- Continuous Learning: A commitment to continuous learning and staying updated with the latest trends, tools, and technologies related to development and AWS services.
- Bachelor's Degree: A bachelor's degree in Computer Science, Software Engineering, or a related field is often preferred, although equivalent experience and certifications can also be valuable.
The Lead Software Developer is responsible for development of CFRA’s report generation framework using a modern technology stack: Python on AWS cloud infrastructure, SQL, and Web technologies. This is an opportunity to make an impact on both the team and the organization by being part of the design and development of a new customer-facing report generation framework that will serve as the foundation for all future report development at CFRA.
The ideal candidate has a passion for solving business problems with technology and can effectively communicate business and technical needs to stakeholders. We are looking for candidates that value collaboration with colleagues and having an immediate, tangible impact for a leading global independent financial insights and data company.
Key Responsibilities
- Analyst Workflows: Lead the design and development of CFRA’s integrated content publishing platform, built on a proprietary third-party editorial and publishing system for digital publishing.
- Designing and Developing APIs: Lead the design and development of robust, scalable, and secure APIs on AWS, considering factors like performance, reliability, and cost-efficiency.
- Architecture Planning: Collaborate with architects and stakeholders to define architecture, including API gateway, microservices, and serverless components, ensuring alignment with business goals and AWS best practices.
- Technical Leadership: Provide technical guidance and leadership to the development team, ensuring adherence to coding standards, best practices, and AWS guidelines.
- AWS Service Integration: Integrate APIs with various AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, AWS Glue, and others, to build comprehensive and efficient solutions.
- Performance Optimization: Identify and implement optimizations to improve performance, scalability, and efficiency, leveraging AWS services and tools.
- Security and Compliance: Ensure APIs are developed following best security practices, including authentication, authorization, encryption, and compliance with relevant standards and regulations.
- Monitoring and Logging: Implement monitoring and logging solutions for APIs using AWS CloudWatch, AWS X-Ray, or similar tools, to ensure availability, performance, and reliability (an illustrative sketch follows this list).
- Continuous Integration and Deployment (CI/CD): Establish and maintain CI/CD pipelines for API development, automating testing, deployment, and monitoring processes on AWS.
- Documentation and Training: Create and maintain comprehensive documentation for internal and external users, and provide training and support to developers and stakeholders.
- Team Collaboration: Collaborate effectively with cross-functional teams, including product managers, designers, and other developers, to deliver high-quality solutions that meet business requirements.
- Problem Solving: Lead troubleshooting efforts, identifying root causes and implementing solutions to ensure system stability and performance.
- Stay Updated: Stay updated with the latest trends, tools, and technologies related to development on AWS, and continuously improve your skills and knowledge.
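As a rough illustration of the monitoring and logging bullet in this list, the sketch below emits a structured log line and pushes request latency to CloudWatch as a custom metric via boto3. The namespace, metric name, and helper function are illustrative assumptions, not a prescribed design.

```python
import json
import logging
import time

import boto3

logger = logging.getLogger("report-api")
logging.basicConfig(level=logging.INFO)
cloudwatch = boto3.client("cloudwatch")


def record_request(endpoint: str, started: float, status_code: int) -> None:
    """Emit one structured log line and publish request latency as a custom metric."""
    latency_ms = (time.time() - started) * 1000
    logger.info(json.dumps({"endpoint": endpoint, "status": status_code, "latency_ms": latency_ms}))
    cloudwatch.put_metric_data(
        Namespace="ReportAPI",  # hypothetical metric namespace
        MetricData=[{
            "MetricName": "LatencyMs",
            "Dimensions": [{"Name": "Endpoint", "Value": endpoint}],
            "Value": latency_ms,
            "Unit": "Milliseconds",
        }],
    )
```

Structured (JSON) log lines make CloudWatch Logs Insights queries straightforward, and the custom metric can drive alarms on latency regressions.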
Desired Skills and Experience
- Development: 10+ years of extensive experience in designing, developing, and deploying applications using modern technologies, with a focus on scalability, performance, and security.
- AWS Services: Strong proficiency in using AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, Amazon SES, Amazon RDS, Amazon DynamoDB, and others, to build and deploy API solutions.
- Programming Languages: Proficiency in programming languages commonly used for development, such as Python or Node.js, as well as experience with serverless frameworks like AWS SAM.
- Architecture Design: Ability to design scalable and resilient API architectures using microservices, serverless, or other modern architectural patterns, considering factors like performance, reliability, and cost-efficiency.
- Security: Strong understanding of security principles and best practices, including authentication, authorization, encryption, and compliance with standards like OAuth, OpenID Connect, and AWS IAM.
- DevOps Practices: Familiarity with DevOps practices and tools, including CI/CD pipelines, infrastructure as code (IaC), and automated testing, to ensure efficient and reliable deployment on AWS.
- Problem-solving Skills: Excellent problem-solving skills, with the ability to troubleshoot complex issues, identify root causes, and implement effective solutions to ensure system stability and performance.
- Team Leadership: Experience leading and mentoring a team of developers, providing technical guidance, code reviews, and fostering a collaborative and innovative environment.
- Communication Skills: Strong communication skills, with the ability to effectively communicate technical concepts to both technical and non-technical stakeholders, and collaborate with cross-functional teams.
- Agile Methodologies: Experience working in Agile development environments, following practices like Scrum or Kanban, and ability to adapt to changing requirements and priorities.
- Continuous Learning: A commitment to continuous learning and staying updated with the latest trends, tools, and technologies related to development and AWS services.
- Bachelor's Degree: A bachelor's degree in Computer Science, Software Engineering, or a related field is often preferred, although equivalent experience and certifications can also be valuable.
Job Summary:
Full-time | 6+ years of experience
We are looking for a Lead Data Scientist with the ability to lead a data science team and help us gain useful insights from raw data. Lead Data Scientist responsibilities include managing the client and the data science team, planning projects, and building analytics models. You should have a strong problem-solving ability and a knack for statistical analysis. If you’re also able to align our data products with our business goals, we’d like to talk to you.
Responsibilities
● Identify, develop, and implement appropriate statistical techniques, algorithms, and deep learning/ML models to create new, scalable solutions that address business challenges across industry domains.
● Define, develop, maintain and evolve data models, tools and capabilities.
● Communicate your findings to the appropriate teams through visualizations.
● Provide solutions for (but not limited to): object detection/image recognition, natural language processing, sentiment analysis, topic modeling, concept extraction, recommender systems, text classification, clustering, customer segmentation and targeting, propensity modeling, churn modeling, lifetime value estimation, forecasting, modeling response to incentives, marketing mix optimization, price optimization, GenAI, and LLMs.
● Ability to build, train, and lead a team of data scientists.
● Follow/maintain an agile methodology for delivering on project milestones.
● Experience applying statistical ideas and methods to data sets to answer business problems.
● Mine and analyze data to drive optimization and improvement of product development, marketing techniques and business strategies.
● Working on ML Model Containerisation.
● Creating ML Model Inference Pipelines.
● Testing and Monitoring ML Models.
Tech Stack:
● Strong Python programming knowledge: data science libraries plus advanced concepts such as abstract classes, function overloading and overriding, inheritance, modular programming, and reusability
● Working knowledge of Transformers, Hugging Face, etc.
● Working knowledge of implementing large language models for enterprise applications.
● Cloud experience using Azure services.
● REST APIs using Flask or FastAPI frameworks (a minimal serving sketch appears at the end of this posting).
● Good to have - PySpark: Spark DataFrame operations, SQL functions API, parallel execution
● Good to have - Unit testing using Python pytest or unittest
Preferred Qualifications:
● Bachelors/ Masters/ PhD degree in Math, Computer Science, Information Systems, Machine Learning, Statistics, Applied Mathematics or related technical degree.
● Minimum of 6 years of experience in a related position as a senior data scientist, building predictive analytics, computer vision, NLP, and GenAI solutions for various types of business problems.
● Advanced knowledge of statistical techniques, machine learning algorithms, and deep learning frameworks like TensorFlow, Keras, PyTorch, etc.
● Strong planning and project management skills.
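To ground the Flask/FastAPI item in the Tech Stack above, here is a minimal serving sketch: a FastAPI endpoint wrapping a pickled model. The model artifact, feature schema, and churn framing are invented for illustration and assume a scikit-learn-style `predict_proba` interface.

```python
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

with open("model.pkl", "rb") as f:  # hypothetical pre-trained model artifact
    model = pickle.load(f)


class Features(BaseModel):
    tenure_months: float
    monthly_spend: float


@app.post("/predict")
def predict(features: Features) -> dict:
    """Return a churn probability for one customer record."""
    proba = model.predict_proba([[features.tenure_months, features.monthly_spend]])[0][1]
    return {"churn_probability": round(float(proba), 4)}
```

Run locally with `uvicorn app:app`; in this role the same pattern would typically be containerized and wired into an inference pipeline.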
Backend Engineer
Job Overview
We are seeking an experienced Backend Engineer to architect and deploy scalable backend applications and services. The ideal candidate will have expertise in modern programming languages, distributed systems, and team leadership.
Key Responsibilities
● Design and maintain high-performance backend applications and microservices
● Architect scalable, cloud-native systems and collaborate across engineering teams
● Write high-quality, performant code and conduct thorough code reviews
● Build and operate CI/CD pipelines and production systems
● Work with databases, containerization (Docker/Kubernetes), and cloud platforms
● Lead agile practices and continuously improve service reliability
Required Qualifications
● 4+ years of professional software development experience
● 2+ years contributing to service design and architecture
● Strong expertise in modern languages like Golang, Python
● Deep understanding of scalable, cloud-native architectures and microservices
● Production experience with distributed systems and database technologies
● Experience with Docker, software engineering best practices
● Bachelor's Degree in Computer Science or related technical field
Preferred Qualifications
● Experience with Golang, AWS, and Kubernetes
● CI/CD pipeline experience with GitHub Actions
● Start-up environment experience
We’re looking for a full-stack generalist who can handle the entire product lifecycle from frontend, backend, APIs, AI integrations, deployment, and everything in between. Someone who enjoys owning features from concept to production and working across the entire stack
Required Qualifications
- 4+ years of professional software development experience
- 2+ years contributing to service design and architecture
- Strong expertise in modern languages like Golang, Python
- Deep understanding of scalable, cloud-native architectures and microservices
- Production experience with distributed systems and database technologies
- Experience with Docker, software engineering best practices
- Bachelor's Degree in Computer Science or related technical field
Preferred Qualifications
- Experience with Golang, AWS, and Kubernetes
- CI/CD pipeline experience with GitHub Actions
- Start-up environment experience
Job Summary: Lead/Senior ML Data Engineer (Cloud-Native, Healthcare AI)
Experience Required: 8+ Years
Work Mode: Remote
We are seeking a highly autonomous and experienced Lead/Senior ML Data Engineer to drive the critical data foundation for our AI analytics and Generative AI platforms. This is a specialized hybrid position, focusing on designing, building, and optimizing scalable data pipelines (ETL/ELT) that transform complex, messy clinical and healthcare data into high-quality, production-ready feature stores for Machine Learning and NLP models.
The successful candidate will own technical work streams end-to-end, ensuring data quality, governance, and low-latency delivery in a cloud-native environment.
Key Responsibilities & Focus Areas:
- ML Data Pipeline Ownership (70-80% Focus): Design and implement high-performance, scalable ETL/ELT pipelines using PySpark and a Lakehouse architecture (such as Databricks) to ingest, clean, and transform large-scale healthcare datasets.
- AI Data Preparation: Specialize in Feature Engineering and data preparation for complex ML workloads, including transforming unstructured clinical data (e.g., medical notes) for Generative AI and NLP model training.
- Cloud Architecture & Orchestration: Deploy, manage, and optimize data workflows using Airflow in a production AWS environment.
- Data Governance & Compliance: Mandatorily implement pipelines with robust data masking, pseudonymization, and security controls to ensure continuous adherence to HIPAA and other relevant health data privacy regulations.
- Technical Leadership: Lead and define technical requirements from ambiguous business problems, acting as a key contributor to the data architecture strategy for the core AI platform.
Non-Negotiable Requirements (The "Must-Haves"):
- 5+ years of progressive experience as a Data Engineer, with a clear focus on ML/AI support.
- Deep expertise in PySpark/Python for distributed data processing.
- Mandatory proficiency with Lakehouse platforms (e.g., Databricks) in an AWS production environment.
- Proven experience handling complex clinical/healthcare data (EHR, Claims), including unstructured text.
- Hands-on experience with HIPAA/GDPR compliance in data pipeline design.
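As a rough sketch of the masking/pseudonymization requirement above, the PySpark snippet below salts and hashes a patient identifier and drops direct identifiers before downstream feature engineering. Paths, column names, and the static-salt strategy are illustrative assumptions, not a compliance recipe.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clinical-etl").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/raw/claims/")  # hypothetical path

pseudonymized = (
    raw
    # One-way salted hash keeps records joinable without exposing the raw ID.
    .withColumn(
        "patient_key",
        F.sha2(F.concat(F.lit("static-salt"), F.col("patient_id")), 256),
    )
    # Drop direct identifiers outright.
    .drop("patient_id", "patient_name", "ssn")
)

pseudonymized.write.mode("overwrite").parquet("s3://example-bucket/curated/claims/")
```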
About the Role
We are looking for a passionate GenAI Developer to join our dynamic team at Hardwin Software Solutions. In this role, you will design and develop scalable backend systems, leverage AWS services for data processing, and work on cutting-edge Generative AI solutions. If you enjoy solving complex problems and building impactful applications, we’d love to hear from you.
What You Will Do
- Develop robust and scalable backend services and APIs using Python, integrating with various AWS services.
- Design, implement, and maintain data processing pipelines leveraging AWS (e.g., S3, Lambda).
- Collaborate with cross-functional teams to translate requirements into efficient technical solutions.
- Write clean, maintainable code while following agile engineering practices (CI/CD, version control, release cycles).
- Optimize application performance and scalability by fine-tuning AWS resources and leveraging advanced Python techniques.
- Contribute to the development and integration of Generative AI techniques into business applications.
What You Should Have
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 3+ years of professional experience in software development.
- Strong programming skills in Python and good understanding of data structures & algorithms.
- Hands-on experience with AWS services: S3, Lambda, DynamoDB, OpenSearch.
- Experience with Relational Databases, Source Control, and CI/CD pipelines.
- Practical knowledge of Generative AI techniques (mandatory).
- Strong analytical and mathematical problem-solving abilities.
- Excellent communication skills in English.
- Ability to work both independently and collaboratively, with a proactive and self-motivated attitude.
- Strong organizational skills with the ability to prioritize tasks and meet deadlines.
Job Summary
We are seeking an experienced Databricks Developer with strong skills in PySpark, SQL, and Python, and hands-on experience deploying data solutions on AWS (preferred) or Azure. The role involves designing, developing, and optimizing scalable data pipelines and analytics workflows on the Databricks platform.
Key Responsibilities
- Develop and optimize ETL/ELT pipelines using Databricks and PySpark.
- Build scalable data workflows on AWS (EC2, S3, Glue, Lambda, IAM) or Azure (ADF, ADLS, Synapse).
- Implement and manage Delta Lake (ACID transactions, schema evolution, time travel); an illustrative upsert sketch follows this list.
- Write efficient, complex SQL for transformation and analytics.
- Build and support batch and streaming ingestion (Kafka, Kinesis, EventHub).
- Optimize Databricks clusters, jobs, notebooks, and PySpark performance.
- Collaborate with cross-functional teams to deliver reliable data solutions.
- Ensure data governance, security, and compliance.
- Troubleshoot pipelines and support CI/CD deployments.
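For the Delta Lake item referenced above, here is a brief illustrative upsert (MERGE) sketch, assuming a Databricks/Delta runtime; the table path, source path, and join key are placeholders.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical incremental batch of source records.
updates = spark.read.parquet("s3://example-bucket/incoming/orders/")

target = DeltaTable.forPath(spark, "s3://example-bucket/delta/orders/")
(
    target.alias("t")
    .merge(updates.alias("u"), "t.order_id = u.order_id")
    .whenMatchedUpdateAll()      # update rows that already exist
    .whenNotMatchedInsertAll()   # insert brand-new rows
    .execute()
)
```

MERGE runs as an ACID transaction on the Delta table, which is what makes incremental loads safe to retry.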
Required Skills & Experience
- 4–8 years in Data Engineering / Big Data development.
- Strong hands-on experience with Databricks (clusters, jobs, workflows).
- Advanced PySpark and strong Python skills.
- Expert-level SQL (complex queries, window functions).
- Practical experience with AWS (preferred) or Azure cloud services.
- Experience with Delta Lake, Parquet, and data lake architectures.
- Familiarity with CI/CD tools (GitHub Actions, Azure DevOps, Jenkins).
- Good understanding of data modeling, optimization, and distributed systems.
Summary:
We are seeking a highly skilled Python Backend Developer with proven expertise in FastAPI to join our team as a full-time contractor for 12 months. The ideal candidate will have 5+ years of experience in backend development, a strong understanding of API design, and the ability to deliver scalable, secure solutions. Knowledge of front-end technologies is an added advantage. Immediate joiners are preferred. This role requires full-time commitment—please apply only if you are not engaged in other projects.
Job Type:
Full-Time Contractor (12 months)
Location:
Remote / On-site (Jaipur preferred, as per project needs)
Experience:
5+ years in backend development
Key Responsibilities:
- Design, develop, and maintain robust backend services using Python and FastAPI.
- Implement and manage Prisma ORM for database operations.
- Build scalable APIs and integrate with SQL databases and third-party services.
- Deploy and manage backend services using Azure Function Apps and Microsoft Azure Cloud (see the sketch after this list).
- Collaborate with front-end developers and other team members to deliver high-quality web applications.
- Ensure application performance, security, and reliability.
- Participate in code reviews, testing, and deployment processes.
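A minimal sketch of a Python HTTP-triggered Azure Function using the v2 programming model, in the spirit of the Azure Function Apps bullet above; the route and payload are placeholders rather than real project code.

```python
import json

import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)


@app.route(route="health", methods=["GET"])
def health(req: func.HttpRequest) -> func.HttpResponse:
    """Simple liveness-probe endpoint."""
    return func.HttpResponse(
        json.dumps({"status": "ok"}),
        mimetype="application/json",
        status_code=200,
    )
```

In a FastAPI-centric project, a Function App can alternatively host the ASGI application, but the trigger-per-route style above is the simpler starting point.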
Required Skills:
- Expertise in Python backend development with strong experience in FastAPI.
- Solid understanding of RESTful API design and implementation.
- Proficiency in SQL databases and ORM tools (preferably Prisma).
- Hands-on experience with Microsoft Azure Cloud and Azure Function Apps.
- Familiarity with CI/CD pipelines and containerization (Docker).
- Knowledge of cloud architecture best practices.
Added Advantage:
- Front-end development knowledge (React, Angular, or similar frameworks).
- Exposure to AWS/GCP cloud platforms.
- Experience with NoSQL databases.
Eligibility:
- Minimum 5 years of professional experience in backend development.
- Available for full-time engagement.
- We require dedicated availability; please apply only if you are not currently engaged in other projects.
Role Overview
As a Senior SQL Developer, you’ll be responsible for data extraction and for updating and maintaining reports as requested by stakeholders. You’ll work closely with finance operations and developers to ensure data requests are appropriately managed.
Key Responsibilities
- Design, develop, and optimize complex SQL queries, stored procedures, functions, and tasks across multiple databases/schemas.
- Transform cost-intensive models from full refreshes to incremental loads based on upstream data.
- Help design proactive monitoring of data to catch data issues/data delays.
Qualifications
- 5+ years of experience as a SQL developer, preferably in a B2C or tech environment.
- Ability to translate requirements into datasets.
- Understanding of dbt framework for transformations.
- Basic usage of Git: branching and PR generation.
- Detail-oriented with strong organizational and time management skills.
- Ability to work cross-functionally and manage multiple projects simultaneously.
Bonus Points
- Experience with Snowflake and AWS data technologies.
- Experience with Python and containers (Docker).
About Grey chain (https://greychaindesign.com/)
A Generative AI-as-a-service, Mobile & Digital Transformation firm helping organizations reimagine user experiences with Disruptive & Transformational thinking and partnership.
Trusted by: UNICEF, BOSE, KFINTECH, WHO and many Fortune 500 Companies
Home of over 110 Product Engineering nerds building the next generation of Digital Products.
Primary Industries: Banking & Financial Services, Non-Profits, Retail & eCommerce, Consumer Goods and Consulting.
Location: Remote
Job Summary:
We're seeking an experienced QA Automation Engineer to join our team. The ideal candidate will have strong technical skills, excellent communication, and a proactive approach to issue resolution. The QA Automation Engineer will design, develop, and maintain automated testing solutions to ensure the quality and reliability of our software applications.
Key Responsibilities:
- Design, develop, and maintain automated testing frameworks using Python (pytest); see the sketch after this list
- Create and execute automated test scripts to ensure thorough coverage of application functionality
- Collaborate with cross-functional teams to identify and prioritize testing needs
- Proactively identify and report issues, risks, and opportunities for process improvement
- Work independently with minimal supervision, taking ownership of tasks and deliverables
- Develop and maintain pipelines, integrating automated testing into CI/CD workflows
- Apply OOP concepts, code reusability principles, and fuzzy logic to write efficient, modular code
- Analyze application flows and functionalities to identify potential issues and improve test coverage
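For a flavor of the pytest patterns this role leans on (fixtures and parametrized cases), here is a small illustrative sketch; `slugify` is a stand-in for real application logic.

```python
import pytest


def slugify(title: str) -> str:
    """Stand-in application function: normalize a title into a URL slug."""
    return "-".join(title.lower().split())


@pytest.mark.parametrize(
    ("title", "expected"),
    [("Hello World", "hello-world"), ("  Spaced   Out  ", "spaced-out")],
)
def test_slugify(title: str, expected: str) -> None:
    assert slugify(title) == expected


@pytest.fixture
def sample_titles() -> list:
    return ["Hello World", "MiXeD Case"]


def test_slugify_is_lowercase(sample_titles) -> None:
    for title in sample_titles:
        assert slugify(title) == slugify(title).lower()
```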
Requirements:
- 3-4 years of experience in automation testing
- Proficient in Python (2-3 years) with expertise in Pytest
- Strong understanding of pipelines and ability to make required changes or learn quickly
- Excellent communication and collaboration skills
- Ability to work independently with minimal supervision
- Proactive approach to issue identification and risk management
- Strong grasp of OOP concepts, code reusability, and fuzzy logic
Inflection.io is a venture-backed B2B marketing automation company that enables SaaS businesses to communicate with their customers and prospects from one platform. We're used by leading SaaS companies like Sauce Labs, Sigma Computing, BILL, Mural, and Elastic, many of which pay more than $100K/yr (about ₹1 crore).
And... it’s working! We have world-class stats: our largest deal is over ₹3 crore, we have a 5-star rating on G2 and over 100% NRR, and we constantly break sales and customer records. We’ve raised $14M in total since 2021, with $7.6M of fresh funding in 2024, giving us many years of runway.
However, we’re still in startup mode with approximately 30 employees, and we’re looking for the next SDE3 to help propel Inflection forward. Do you want to join a fast-growing startup that is aiming to build a very large company?
Key Responsibilities:
- Lead the design, development, and deployment of complex software systems and applications.
- Collaborate with engineers and product managers to define and implement innovative solutions.
- Provide technical leadership and mentorship to junior engineers, promoting best practices and fostering a culture of continuous improvement.
- Write clean, maintainable and efficient code, ensuring high performance and scalability of the software.
- Conduct code reviews and provide constructive feedback to ensure code quality and adherence to coding standards.
- Troubleshoot and resolve complex technical issues, optimizing system performance and reliability.
- Stay updated with the latest industry trends and technologies, evaluating their potential for adoption in our projects.
- Participate in the full software development lifecycle, from requirements gathering to deployment and monitoring.
Qualifications:
- 5+ years of professional software development experience, with a strong focus on backend development.
- Proficiency in one or more programming languages such as Java, Python, Golang or C#
- Strong understanding of database systems, both relational (e.g., MySQL, PostgreSQL) and NoSQL (e.g., MongoDB, Cassandra).
- Hands-on experience with message brokers such as Kafka, RabbitMQ, or Amazon SQS.
- Experience with cloud platforms (AWS or Azure or Google Cloud) and containerization technologies (Docker, Kubernetes).
- Proven track record of designing and implementing scalable, high-performance systems.
- Excellent problem-solving skills and the ability to think critically and creatively.
- Strong communication and collaboration skills, with the ability to work effectively in a fast-paced, team-oriented environment.
About Us
We are a company where the ‘HOW’ of building software is just as important as the ‘WHAT.’ We partner with large organizations to modernize legacy codebases and collaborate with startups to launch MVPs, scale, or act as extensions of their teams. Guided by Software Craftsmanship values and eXtreme Programming Practices, we deliver high-quality, reliable software solutions tailored to our clients' needs.
We strive to:
- Bring our clients' dreams to life by being their trusted engineering partners, crafting innovative software solutions.
- Challenge offshore development stereotypes by delivering exceptional quality, and proving the value of craftsmanship.
- Empower clients to deliver value quickly and frequently to their end users.
- Ensure long-term success for our clients by building reliable, sustainable, and impactful solutions.
- Raise the bar of software craft by setting a new standard for the community.
Our Core Values
- Quality with Pragmatism: We aim for excellence with a focus on practical solutions.
- Extreme Ownership: We own our work and its outcomes fully.
- Proactive Collaboration: Teamwork elevates us all.
- Pursuit of Mastery: Continuous growth drives us.
- Effective Feedback: Honest, constructive feedback fosters improvement.
- Client Success: Our clients’ success is our success.
Internship Overview
If you’re a Software Craftsperson who takes pride in clean, test-driven code and believes in Extreme Programming principles, we’d love to meet you. At Incubyte, we’re a DevOps organization where developers own the entire release cycle, meaning you’ll get hands-on experience across programming, cloud infrastructure, client communication, and everything in between. Ready to level up your craft and join a team that’s as quality-obsessed as you are? Read on!
Internship Details
Duration: 6 months
Location: 1 month of onsite training at Pune/Ahmedabad, then 5 months remote
Stipend: ₹25k/month
Full-time conversion opportunity based on internship performance
Full-time compensation: ₹6–8 LPA
Eligible candidates must be from the graduating batch of 2026 or have graduated in 2025.
We are accepting registrations via following form- https://forms.gle/pHx6Qz3yruh565eh8
What You'll Do
- Write Tests First: Start by writing tests to ensure code quality
- Clean Code: Produce self-explanatory, clean code with predictable results
- Frequent Releases: Make frequent, small releases
- Pair Programming: Work in pairs for better results
- Peer Reviews: Conduct peer code reviews for continuous improvement
- Product Team: Collaborate in a product team to build and rapidly roll out new features and fixes
- Full Stack Ownership: Handle everything from the front end to the back end, including infrastructure and DevOps pipelines
- Never Stop Learning: Commit to continuous learning and improvement
What We're Looking For
- Basic proficiency in Java, RESTful APIs, React, Ruby on Rails (ROR), and Test-Driven Development (TDD).
- A keen interest in learning technologies like Bootstrap, JavaScript, HTML5, CSS3, Python, and Angular.
- Foundational knowledge of object-oriented programming, data structures, algorithms, and software engineering methodologies.
- Familiarity or willingness to learn Agile and Extreme Programming methodologies in a continuous deployment environment.
- Curiosity to explore tools and technologies like web server ecosystems, relational DBMS, CI tools (e.g., Jenkins, Hudson, Bamboo), web frameworks, front-end technologies, complexity analysis, and performance optimization.
- Basic understanding of source control, bug tracking systems, and how to write user stories or create technical documentation.
Benefits
What We Offer
- Dedicated Learning & Development Budget: Fuel your growth with a budget dedicated solely to learning.
- Conference Talks Sponsorship: Amplify your voice! If you’re speaking at a conference, we’ll fully sponsor and support your talk.
- Cutting-Edge Projects: Work on exciting projects with the latest AI technologies
- Employee-Friendly Leave Policy: Recharge with ample leave options designed for a healthy work-life balance.
- Comprehensive Medical & Term Insurance: Full coverage for you and your family’s peace of mind.
- And More: Extra perks to support your well-being and professional growth.
Work Environment
- Remote-First Culture: At Incubyte, we thrive on a culture of structured flexibility — while you have control over where and how you work, everyone commits to a consistent rhythm that supports their team during core working hours for smooth collaboration and timely project delivery. By striking the perfect balance between freedom and responsibility, we enable ourselves to deliver high-quality standards our customers recognize us by. With asynchronous tools and push for active participation, we foster a vibrant, hands-on environment where each team member’s engagement and contributions drive impactful results.
- Work-In-Person: Twice a year, we come together for two-week sprints to collaborate in person, foster stronger team bonds, and align on goals. Additionally, we host an annual retreat to recharge and connect as a team. All travel expenses are covered.
- Proactive Collaboration: Collaboration is central to our work. Through daily pair programming sessions, we focus on mentorship, continuous learning, and shared problem-solving. This hands-on approach keeps us innovative and aligned as a team.
Incubyte is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
We are looking for a Lead Data Scientist with the ability to lead a data science team and help us gain useful insights from raw data. Lead Data Scientist responsibilities include managing the client and the data science team, planning projects, and building analytics models. You should have a strong problem-solving ability and a knack for statistical analysis. If you’re also able to align our data products with our business goals, we’d like to talk to you.
KEY SKILL – Predictive Analysis, Statistical Modeling, Time Series
Responsibilities
● Identify, develop, and implement appropriate statistical techniques, algorithms, and deep learning/ML models to create new, scalable solutions that address business challenges across industry domains.
● Define, develop, maintain and evolve data models, tools and capabilities.
● Communicate your findings to the appropriate teams through visualizations.
● Provide solutions for (but not limited to): object detection/image recognition, natural language processing, sentiment analysis, topic modeling, concept extraction, recommender systems, text classification, clustering, customer segmentation and targeting, propensity modeling, churn modeling, lifetime value estimation, forecasting, modeling response to incentives, marketing mix optimization, price optimization, GenAI, and LLMs (an illustrative churn-modeling sketch follows this list).
● Ability to build, train and lead a team of data scientists.
● Follow/maintain an agile methodology for delivering on project milestones.
● Experience applying statistical ideas and methods to data sets to answer business problems.
● Mine and analyze data to drive optimization and improvement of product development, marketing techniques and business strategies.
● Working on ML Model Containerisation.
● Creating ML Model Inference Pipelines.
● Testing and Monitoring ML Models.
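As one illustrative example from the solutions list above, here is a tiny scikit-learn churn-modeling sketch; the data is synthetic and the features are invented purely for demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4))  # synthetic customer features
# Synthetic churn labels driven by the first two features plus noise.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = LogisticRegression().fit(X_train, y_train)

print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```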
We're at the forefront of creating advanced AI systems, from fully autonomous agents that provide intelligent customer interaction to data analysis tools that offer insightful business solutions. We are seeking enthusiastic interns who are passionate about AI and ready to tackle real-world problems using the latest technologies.
Duration: 6 months
Perks:
- Hands-on experience with real AI projects.
- Mentoring from industry experts.
- A collaborative, innovative and flexible work environment
After completion of the internship period, there is a chance to convert to a full-time AI/ML Engineer role (up to ₹12 LPA).
Compensation:
- Joining Bonus: A one-time bonus of INR 2,500 will be awarded upon joining.
- Stipend: Base of INR 8,000/month, which can increase up to INR 20,000/month depending on performance metrics.
Key Responsibilities
- Work with Python, LLMs, deep learning, NLP, and related technologies.
- Utilize GitHub for version control, including pushing and pulling code updates.
- Work with Hugging Face and OpenAI platforms for deploying models and exploring open-source AI models.
- Engage in prompt engineering and the fine-tuning process of AI models.
Requirements
- Proficiency in Python programming.
- Experience with GitHub and version control workflows.
- Familiarity with AI platforms such as Hugging Face and OpenAI (see the sketch after this list).
- Understanding of prompt engineering and model fine-tuning.
- Excellent problem-solving abilities and a keen interest in AI technology.
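As a taste of the Hugging Face exploration mentioned above, here is a starter-level sketch using the `transformers` pipeline API; the checkpoint named below is a widely used public model, chosen purely as an example.

```python
from transformers import pipeline

# Downloads the checkpoint on first run; pinned explicitly for reproducibility.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("This internship project is going really well!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```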
To apply: click the link below and submit the assignment.
Salary: ₹3.5 LPA (based on performance)
Experience: 1–3 years (female candidates only)
We are looking for a Technical Trainer skilled in HTML, Java, Python, and AI to conduct technical training sessions. The trainer will create learning materials, deliver sessions, assess student performance, and support learners throughout the training. Strong communication skills and the ability to explain technical concepts clearly are essential.
About Us:
MyOperator and Heyo are India’s leading conversational platforms, empowering 40,000+ businesses with Call and WhatsApp-based engagement. We’re a product-led SaaS company scaling rapidly, and we’re looking for a skilled Software Developer to help build the next generation of scalable backend systems.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI
- Design and maintain a scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features (a hedged sketch follows this list)
- Write clean, tested, and maintainable code with pytest
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
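As a hedged sketch of the LangChain/LLM integration work described above, the snippet below composes a prompt and chat model with LangChain’s LCEL pipe style. It assumes the `langchain-openai` and `langchain-core` packages and an `OPENAI_API_KEY` in the environment; the model name and prompt are illustrative only.

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # illustrative model choice
prompt = ChatPromptTemplate.from_messages([
    ("system", "You summarize customer call transcripts in two sentences."),
    ("human", "{transcript}"),
])

chain = prompt | llm  # LCEL composition: the prompt feeds the model

summary = chain.invoke({"transcript": "Customer asked about WhatsApp API pricing..."})
print(summary.content)
```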
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- Familiarity with AI coding assistant tools (e.g., ChatGPT and Gemini)
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one
About the Company
Hypersonix.ai is disrupting the e-commerce space with AI, ML, and advanced decision-making capabilities to drive real-time business insights. Built from the ground up using modern technologies, Hypersonix simplifies data consumption for customers across various industry verticals. We are seeking a well-rounded, hands-on product leader to help manage key capabilities and features in our platform.
Position Overview
We are seeking a highly skilled Web Scraping Architect to join our team. The successful candidate will be responsible for designing, implementing, and maintaining web scraping processes to gather data from various online sources efficiently and accurately. As a Web Scraping Specialist, you will play a crucial role in collecting data for competitor analysis and other business intelligence purposes.
Responsibilities
- Scalability/Performance: Lead and provide expertise in scraping e-commerce marketplaces at scale.
- Data Source Identification: Identify relevant websites and online sources from which data needs to be scraped. Collaborate with the team to understand data requirements and objectives.
- Web Scraping Design: Develop and implement effective web scraping strategies to extract data from targeted websites. This includes selecting appropriate tools, libraries, or frameworks for the task.
- Data Extraction: Create and maintain web scraping scripts or programs to extract the required data. Ensure the code is optimized, reliable, and able to handle changes in a website's structure (a simplified sketch follows this list).
- Data Cleansing and Validation: Cleanse and validate the collected data to eliminate errors, inconsistencies, and duplicates. Ensure data integrity and accuracy throughout the process.
- Monitoring and Maintenance: Continuously monitor and maintain the web scraping processes. Address any issues that arise due to website changes, data format modifications, or anti-scraping mechanisms.
- Scalability and Performance: Optimize web scraping procedures for efficiency and scalability, especially when dealing with a large volume of data or multiple data sources.
- Compliance and Legal Considerations: Stay up-to-date with legal and ethical considerations related to web scraping, including website terms of service, copyright, and privacy regulations.
- Documentation: Maintain detailed documentation of web scraping processes, data sources, and methodologies. Create clear and concise instructions for others to follow.
- Collaboration: Collaborate with other teams such as data analysts, developers, and business stakeholders to understand data requirements and deliver insights effectively.
- Security: Implement security measures to ensure the confidentiality and protection of sensitive data throughout the scraping process.
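As a deliberately simplified sketch of the basics behind these responsibilities (polite crawl delays, user-agent rotation, selector-based extraction), consider the snippet below; the URL and CSS selectors are placeholders, and production work at scale would add proxy rotation, retries, and robots.txt/terms-of-service checks.

```python
import random
import time

import requests
from bs4 import BeautifulSoup

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
]


def fetch_product_titles(url: str) -> list:
    """Fetch one listing page and return product titles matched by a CSS selector."""
    resp = requests.get(url, headers={"User-Agent": random.choice(USER_AGENTS)}, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return [el.get_text(strip=True) for el in soup.select("h2.product-title")]


if __name__ == "__main__":
    for page in range(1, 3):
        print(fetch_product_titles(f"https://example.com/catalog?page={page}"))
        time.sleep(random.uniform(2, 5))  # polite crawl delay between pages
```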
Requirements
- Proven experience of 6+ years as a Web Scraping Specialist or similar role, with a track record of successful web scraping projects
- Expertise in handling dynamic content, user-agent rotation, bypassing CAPTCHAs, rate limits, and use of proxy services
- Knowledge of browser fingerprinting
- Leadership experience
- Proficiency in Python and in libraries and frameworks commonly used for web scraping, such as BeautifulSoup, Scrapy, or Selenium
- Strong knowledge of HTML, CSS, XPath, and other web technologies relevant to web scraping and coding
- Knowledge and experience in best-in-class data storage and retrieval for large volumes of scraped data
- Understanding of web scraping best practices, including handling dynamic content, user-agent rotation, and IP address management
- Attention to detail and ability to handle and process large volumes of data accurately
- Familiarity with data cleansing techniques and data validation processes
- Good communication skills and ability to collaborate effectively with cross-functional teams
- Knowledge of web scraping ethics, legal considerations, and compliance with website terms of service
- Strong problem-solving skills and adaptability to changing web environments
Preferred Qualifications
- Bachelor’s degree in Computer Science, Data Science, Information Technology, or related fields
- Experience with cloud-based solutions and distributed web scraping systems
- Familiarity with APIs and data extraction from non-public sources
- Knowledge of machine learning techniques for data extraction and natural language processing is desired but not mandatory
- Prior experience in handling large-scale data projects and working with big data frameworks
- Understanding of various data formats such as JSON, XML, CSV, etc.
- Experience with version control systems like Git
About the Company
Hypersonix.ai is disrupting the e-commerce space with AI, ML and advanced decision capabilities to drive real-time business insights. Hypersonix.ai has been built ground up with new age technology to simplify the consumption of data for our customers in various industry verticals. Hypersonix.ai is seeking a well-rounded, hands-on product leader to help lead product management of key capabilities and features.
About the Role
We are looking for talented and driven Data Engineers at various levels to work with customers to build the data warehouse, analytical dashboards and ML capabilities as per customer needs.
Roles and Responsibilities
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional and non-functional business requirements; write complex queries in an optimized way
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies
- Run ad-hoc analysis utilizing the data pipeline to provide actionable insights
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
- Work with analytics and data scientist team members and assist them in building and optimizing our product into an innovative industry leader
Requirements
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong analytic skills related to working with unstructured datasets
- Build processes supporting data transformation, data structures, metadata, dependency and workload management
- A successful history of manipulating, processing and extracting value from large disconnected datasets
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
- Experience supporting and working with cross-functional teams in a dynamic environment
- We are looking for a candidate with 4+ years of experience in a Data Engineer role who has a graduate degree in Computer Science or Information Technology, or has completed an MCA.
About Role
We are looking for a highly driven Full Stack Developer who can build scalable, high-performance applications across both frontend and backend. You will be working closely with our engineering team to develop seamless user experiences, robust APIs, and production-ready systems. This role is perfect for someone who wants to work in a fast-growing AI automation company, take ownership of end-to-end development, and contribute to products used by enterprises, agencies, and SMBs globally.
Key Responsibilities
- Develop responsive and scalable frontend applications using React Native and Next.js.
- Build and maintain backend services using Python and Node.js.
- Develop structured, well-documented REST APIs.
- Work with databases such as MongoDB and PostgreSQL for efficient data storage and retrieval.
- Implement clean authentication workflows (JWT preferred); a compact sketch follows this list.
- Collaborate with UI/UX and product teams to deliver intuitive user experiences.
- Maintain high code quality through modular development, linting, and optimized folder structure.
- Debug, optimize, and enhance existing features and systems.
- Participate in code reviews and ensure best practices.
- Deploy, test, and monitor applications for performance and reliability.
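For the JWT workflow item above, here is a compact illustrative sketch using PyJWT; the secret, expiry, and claims are placeholders, and production code would load the secret from configuration (or use asymmetric keys).

```python
import datetime

import jwt  # PyJWT

SECRET = "change-me"  # hypothetical shared secret
ALGORITHM = "HS256"


def issue_token(user_id: str) -> str:
    """Create a short-lived access token for an authenticated user."""
    payload = {
        "sub": user_id,
        "exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(minutes=30),
    }
    return jwt.encode(payload, SECRET, algorithm=ALGORITHM)


def verify_token(token: str) -> str:
    """Return the user id if the token is valid; raises a jwt exception otherwise."""
    claims = jwt.decode(token, SECRET, algorithms=[ALGORITHM])
    return claims["sub"]


token = issue_token("user-123")
assert verify_token(token) == "user-123"
```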
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related discipline (or equivalent experience).
- Proven experience as a Full Stack Developer with hands-on work in React Native and Next.js.
- Strong backend experience with Python (FastAPI preferred) and Node.js (Express.js preferred).
- Experience working with REST APIs, MongoDB, and PostgreSQL.
- Strong understanding of authentication flows (JWT, OAuth, or similar).
- Ability to write clean, maintainable, and well-documented code.
- Experience with Git/GitHub workflows.
Perks and Benefits
- Opportunity to work at a fast-scaling AI-driven product company.
- Work on advanced growth automation and CRM technologies.
- High ownership and autonomy in product development.
- Flexible remote work for the first 6 months.
- Skill development through real-world, high-impact projects.
- Collaborative culture with mentorship and growth opportunities.
Job Title: Full-Stack Developer
Location: Remote
Job Type: Full-Time
Experience: 3 years
Company: PGAGI Consultancy Pvt. Ltd.
Job Overview:
PGAGI Consultancy Pvt. Ltd. is seeking a highly skilled Full-Stack Project Manager to lead and manage AI projects. The ideal candidate will be responsible for overseeing the entire project lifecycle, from planning and architecture to development, deployment, and maintenance. This role requires strong leadership abilities, technical expertise in both front-end and back-end development, and experience in managing cross-functional teams.
Key Responsibilities:
Project Management:
• Lead and manage multiple software development projects, ensuring timely delivery within scope and budget.
• Define project requirements, milestones, and deliverables in collaboration with stakeholders.
• Create and maintain project roadmaps, sprint plans, and technical documentation.
• Oversee project risks, dependencies, and resource allocation to optimize workflow.
• Conduct regular status meetings, report progress to senior management, and ensure alignment with business goals.
• Implement and enforce Agile, Scrum, or Kanban methodologies for efficient project execution.
Technical Leadership & Full-Stack Development:
• Lead a team of frontend and backend developers, providing technical guidance and mentorship.
• Design, develop, and maintain scalable, high-performance web applications.
• Write clean, efficient, and maintainable code for both front-end and back-end systems.
• Develop and optimize RESTful APIs, database schemas, and server-side logic.
• Integrate third-party APIs, cloud services, and microservices architecture.
• Ensure application performance, security, and scalability best practices.
• Troubleshoot and resolve technical issues, ensuring minimal downtime and optimal functionality.
Technical Skills Required:
Front-End Technologies:
• React.js, Next.js, Vue.js, Angular
• HTML5, CSS3, TypeScript, JavaScript
Back-End Technologies:
• Python, Node.js, Express.js, Django, Flask, FastAPI
Database Management:
• MongoDB, PostgreSQL, MySQL, Firebase
DevOps & Cloud Technologies:
• AWS, Docker, Kubernetes, CI/CD pipelines
Version Control & Collaboration Tools:
• Git, GitHub/GitLab, Bitbucket
• Jira, Trello, Slack
Preferred Skills
• Experience leading AI/ML projects.
• Knowledge of microservices architecture.
• Previous experience working in a startup environment.
• Strong problem-solving and decision-making skills.
Qualifications & Experience:
• Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.
• Minimum 2+ years of experience in full-stack development and project management.
• Proven experience leading and managing development teams.
• Strong understanding of Agile/Scrum methodologies.
Why Join Us?
• Work in a dynamic and innovative environment.
• Opportunity to lead cutting-edge projects.
• Growth-oriented role with leadership opportunities.
If you are passionate about leading software development projects while remaining hands-on with coding, we encourage you to apply!
Position Overview: The Lead Software Architect - Python & Data Engineering is a senior technical leadership role responsible for designing and owning end-to-end architecture for data-intensive, AI/ML, and analytics platforms, while mentoring developers and ensuring technical excellence across the organization.
Key Responsibilities:
- Design end-to-end software architecture for data-intensive applications, AI/ML pipelines, and analytics platforms
- Evaluate trade-offs between competing technical approaches
- Define data models, API approach, and integration patterns across systems
- Create technical specifications and architecture documentation
- Lead by example through production-grade Python code and mentor developers on engineering fundamentals
- Conduct design and code reviews focused on architectural soundness
- Establish engineering standards, coding practices, and design patterns for the team
- Translate business requirements into technical architecture
- Collaborate with data scientists, analysts, and other teams to design integrated solutions
- Whiteboard and defend system design and architectural choices
- Take responsibility for system performance, reliability, and maintainability
- Identify and resolve architectural bottlenecks proactively
Required Skills:
- 8+ years of experience in software architecture and development
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
- Strong foundations in data structures, algorithms, and computational complexity
- Experience in system design for scale, including caching strategies, load balancing, and asynchronous processing
- 6+ years of Python development experience
- Deep knowledge of Django, Flask, or FastAPI
- Expert understanding of Python internals including GIL and memory management
- Experience with RESTful API design and event-driven architectures (Kafka, RabbitMQ)
- Proficiency in data processing frameworks such as Pandas, Apache Spark, and Airflow
- Strong SQL optimization and database design experience (PostgreSQL, MySQL, MongoDB)
- Experience with AWS, GCP, or Azure cloud platforms
- Knowledge of containerization (Docker) and orchestration (Kubernetes)
- Hands-on experience designing CI/CD pipelines
Preferred (Bonus) Skills:
- Experience deploying ML models to production (MLOps, model serving, monitoring)
- Understanding of ML system design, including feature stores and model versioning
- Familiarity with ML frameworks such as scikit-learn, TensorFlow, and PyTorch
- Open-source contributions or technical blogging demonstrating architectural depth
- Experience with modern front-end frameworks for full-stack perspective
Why Middleware? 💡
Sick of the endless waiting?
Waiting on code reviews, QA feedback, or that "quick call"?
At Middleware, we’re all about freeing up engineers like you to do what you love—build.
We’ve created a cockpit that gives engineering leaders the insights they need to unblock teams, cut bottlenecks, and let engineers focus on impact.
What You’ll Do 🎨
- Own the Product: Shape a product that engineers rely on daily.
- Build Stunning UIs: Craft data-rich, intuitive designs that solve real problems.
- Shape Middleware’s Architecture: Make our systems robust, seamless, and introduce mechanisms that allow high visibility into our automated pipelines.
What We’re Looking For 🔍
- React + Typescript: You know your way around these tools and have launched awesome projects.
- Python + Postgres: You've built complete backend systems, not just basic CRUD apps.
- Passionate Builder: Hungry to grow, build, and make an impact.
Bonus Points ⭐️
- Eye for Design: You have a sense for clean, user-friendly visuals.
- Understanding of distributed systems: Not everything runs on a single machine, and you know how to make things work across a lot of those.
- DSA Know-how: Familiarity with data structures (graphs, linked lists, etc.) because our product (even frontend) actually uses DSA concepts.
Why You'll Love Working with Us ❤️
We’re engineers at heart.
Middleware was founded by ex-Uber and Maersk engineers who know what it’s like to be stuck in meeting loops and endless waiting. If you're here to build, to make things happen, and to change the game for engineering teams everywhere, let’s chat!
Ready to jump in? Explore Middleware (https://www.middlewarehq.com/) or check out our demo (https://demo.middlewarehq.com/).
Role & Responsibilities:
We are seeking a Software Developer with 5-10 years of experience and strong foundations in Python, databases, and AI technologies.
The ideal candidate will support the development of AI-powered solutions, focusing on LLM integration, prompt engineering, and database-driven workflows.
This is a hands-on role with opportunities to learn and grow into advanced AI engineering responsibilities.
Key Responsibilities :
• Develop, test, and maintain Python-based applications and APIs.
• Design and optimize prompts for Large Language Models (LLMs) to improve accuracy and performance.
• Work with JSON-based data structures for request/response handling.
• Integrate and manage PostgreSQL (pgSQL) databases, including writing queries and handling data pipelines.
• Collaborate with the product and AI teams to implement new features.
• Debug, troubleshoot, and optimize performance of applications and workflows.
• Stay updated on advancements in LLMs, AI frameworks, and generative AI tools.
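As a hedged illustration of the prompt-design and JSON-handling responsibilities above, here is a minimal sketch assuming the OpenAI Python SDK; the model id, prompt, and response keys are placeholders rather than this team's actual setup.

```python
# Prompt design with JSON-structured output, assuming the OpenAI
# Python SDK; model id and fields are illustrative only.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",                       # assumed model id
    response_format={"type": "json_object"},   # force valid JSON back
    messages=[
        {"role": "system",
         "content": "Extract fields and reply as JSON with keys "
                    "'intent' and 'entities'."},
        {"role": "user", "content": "Renew my license before March 3."},
    ],
)

result = json.loads(response.choices[0].message.content)
print(result.get("intent"), result.get("entities"))
```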
Role & Responsibilities:
We are seeking a Software Developer with 2-10 years of experience and strong foundations in Python, databases, and AI technologies. The ideal candidate will support the development of AI-powered solutions, focusing on LLM integration, prompt engineering, and database-driven workflows. This is a hands-on role with opportunities to learn and grow into advanced AI engineering responsibilities.
Key Responsibilities
• Develop, test, and maintain Python-based applications and APIs.
• Design and optimize prompts for Large Language Models (LLMs) to improve accuracy and performance.
• Work with JSON-based data structures for request/response handling.
• Integrate and manage PostgreSQL (pgSQL) databases, including writing queries and handling data pipelines.
• Collaborate with the product and AI teams to implement new features.
• Debug, troubleshoot, and optimize performance of applications and workflows.
• Stay updated on advancements in LLMs, AI frameworks, and generative AI tools.
Required Skills & Qualifications
• Strong knowledge of Python (scripting, APIs, data handling).
• Basic understanding of Large Language Models (LLMs) and prompt engineering techniques.
• Experience with JSON data parsing and transformations.
• Familiarity with PostgreSQL or other relational databases.
• Ability to write clean, maintainable, and well-documented code.
• Strong problem-solving skills and eagerness to learn.
• Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
Nice-to-Have (Preferred)
• Exposure to AI/ML frameworks (e.g., LangChain, Hugging Face, OpenAI APIs).
• Experience working in startups or fast-paced environments.
• Familiarity with version control (Git/GitHub) and cloud platforms (AWS, GCP, or Azure).
What We Offer
• The opportunity to define the future of GovTech through AI-powered solutions.
• A strategic leadership role in a fast-scaling startup with direct impact on product direction and market success.
• Collaborative and innovative environment with cross-functional exposure.
• Growth opportunities backed by a strong leadership team.
• Remote flexibility and work-life balance.
Role: Azure AI Tech Lead
Experience: 3.5–7 years
Location: Remote / Noida (NCR)
Notice Period: Immediate to 15 days
Mandatory Skills: Python, Azure AI/ML, PyTorch, TensorFlow, JAX, HuggingFace, LangChain, Kubeflow, MLflow, LLMs, RAG, MLOps, Docker, Kubernetes, Generative AI, Model Deployment, Prometheus, Grafana
JOB DESCRIPTION
As the Azure AI Tech Lead, you will serve as the principal technical expert leading the design, development, and deployment of advanced AI and ML solutions on the Microsoft Azure platform. You will guide a team of engineers, establish robust architectures, and drive end-to-end implementation of AI projects—transforming proof-of-concepts into scalable, production-ready systems.
Key Responsibilities:
- Lead architectural design and development of AI/ML solutions using Azure AI, Azure OpenAI, and Cognitive Services.
- Develop and deploy scalable AI systems with best practices in MLOps across the full model lifecycle.
- Mentor and upskill AI/ML engineers through technical reviews, training, and guidance.
- Implement advanced generative AI techniques including LLM fine-tuning, RAG systems, and diffusion models.
- Collaborate cross-functionally to translate business goals into innovative AI solutions.
- Enforce governance, responsible AI practices, and performance optimization standards.
- Stay ahead of trends in LLMs, agentic AI, and applied research to shape next-gen solutions.
Qualifications:
- Bachelor’s or Master’s in Computer Science or related field.
- 3.5–7 years of experience delivering end-to-end AI/ML solutions.
- Strong expertise in Azure AI ecosystem and production-grade model deployment.
- Deep technical understanding of ML, DL, Generative AI, and MLOps pipelines.
- Excellent analytical and problem-solving abilities; applied research or open-source contributions preferred.
About Ven Analytics
At Ven Analytics, we don’t just crunch numbers — we decode them to uncover insights that drive real business impact. We’re a data-driven analytics company that partners with high-growth startups and enterprises to build powerful data products, business intelligence systems, and scalable reporting solutions. With a focus on innovation, collaboration, and continuous learning, we empower our teams to solve real-world business problems using the power of data.
Role Overview
We’re looking for a Power BI Data Analyst who is not just proficient in tools but passionate about building insightful, scalable, and high-performing dashboards. The ideal candidate should have strong fundamentals in data modeling, a flair for storytelling through data, and the technical skills to implement robust data solutions using Power BI, Python, and SQL.
Key Responsibilities
- Technical Expertise: Develop scalable, accurate, and maintainable data models using Power BI, with a clear understanding of Data Modeling, DAX, Power Query, and visualization principles.
- Programming Proficiency: Use SQL and Python for complex data manipulation, automation, and analysis.
- Business Problem Translation: Collaborate with stakeholders to convert business problems into structured data-centric solutions considering performance, scalability, and commercial goals.
- Hypothesis Development: Break down complex use-cases into testable hypotheses and define relevant datasets required for evaluation.
- Solution Design: Create wireframes, proof-of-concepts (POC), and final dashboards in line with business requirements.
- Dashboard Quality: Ensure dashboards meet high standards of data accuracy, visual clarity, performance, and support SLAs.
- Performance Optimization: Continuously enhance user experience by improving performance, maintainability, and scalability of Power BI solutions.
- Troubleshooting & Support: Quick resolution of access, latency, and data issues as per defined SLAs.
- Power BI Development: Use Power BI Desktop for report building and Power BI Service for distribution.
- Backend Development: Develop optimized SQL queries that are easy to consume, maintain, and debug.
- Version Control: Strict control on versions by tracking CRs and bugfixes, and maintaining separate Prod and Dev dashboards.
- Client Servicing: Engage with clients to understand their data needs, gather requirements, present insights, and ensure timely, clear communication throughout project cycles.
- Team Management: Lead and mentor a small team by assigning tasks, reviewing work quality, guiding technical problem-solving, and ensuring timely delivery of dashboards and reports.
Must-Have Skills
- Strong experience building robust data models in Power BI
- Hands-on expertise with DAX (complex measures and calculated columns)
- Proficiency in M Language (Power Query) beyond drag-and-drop UI
- Clear understanding of data visualization best practices (less fluff, more insight)
- Solid grasp of SQL and Python for data processing
- Strong analytical thinking and ability to craft compelling data stories
- Client Servicing Background.
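As a small, hedged example of the "SQL and Python for data processing" skill above, this pandas sketch shapes raw rows into a monthly table ready for import into a Power BI model; the file and column names are hypothetical.

```python
# Pre-shaping data for a Power BI model with pandas; file and
# column names are hypothetical.
import pandas as pd

sales = pd.read_csv("raw_sales.csv", parse_dates=["order_date"])

monthly = (
    sales
    .assign(month=sales["order_date"].dt.to_period("M").astype(str))
    .groupby(["month", "region"], as_index=False)
    .agg(revenue=("amount", "sum"), orders=("order_id", "count"))
)

monthly.to_csv("monthly_sales.csv", index=False)  # import into Power BI
```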
Good-to-Have (Bonus Points)
- Experience using DAX Studio and Tabular Editor
- Prior work in a high-volume data processing production environment
- Exposure to modern CI/CD practices or version control with BI tools
Why Join Ven Analytics?
- Be part of a fast-growing startup that puts data at the heart of every decision.
- Opportunity to work on high-impact, real-world business challenges.
- Collaborative, transparent, and learning-oriented work environment.
- Flexible work culture and focus on career development.
Candidates must know the M365 collaboration environment: SharePoint Online, MS Teams, Exchange Online, Entra, and Purview. We need a developer with a strong understanding of data structures, problem-solving, SQL, PowerShell, MS Teams app development, Python, Visual Basic, C#, JavaScript, Java, HTML, PHP, and C.
A strong understanding of the development lifecycle is required, along with debugging skills, time management, business acumen, a positive attitude, and openness to continual growth.
The ability to code appropriate solutions will be tested in the interview.
Knowledge of a wide variety of Generative AI models
Conceptual understanding of how large language models work
Proficiency in coding languages for data manipulation (e.g., SQL) and machine learning & AI development (e.g., Python)
Experience with dashboarding tools such as Power BI and Tableau (beneficial but not essential)
About Forbes Advisor
Forbes Digital Marketing Inc. is a high-growth digital media and technology company dedicated to helping consumers make confident, informed decisions about their money, health, careers, and everyday life.
We do this by combining data-driven content, rigorous product comparisons, and user-first design — all built on top of a modern, scalable platform. Our global teams bring deep expertise across journalism, product, performance marketing, data, and analytics.
The Role
We’re hiring a Data Scientist to help us unlock growth through advanced analytics and machine learning. This role sits at the intersection of marketing performance, product optimization, and decision science.
You’ll partner closely with Paid Media, Product, and Engineering to build models, generate insight, and influence how we acquire, retain, and monetize users. From campaign ROI to user segmentation and funnel optimization, your work will directly shape how we grow. This role is ideal for someone who thrives on business impact, communicates clearly, and wants to build reusable, production-ready insights — not just run one-off analyses.
What You’ll Do
Marketing & Revenue Modelling
• Own end-to-end modelling of LTV, user segmentation, retention, and marketing efficiency to inform media optimization and value attribution.
• Collaborate with Paid Media and RevOps to optimize SEM performance, predict high-value cohorts, and power strategic bidding and targeting.
Product & Growth Analytics
• Work closely with Product Insights and General Managers (GMs) to define core metrics, KPIs, and success frameworks for new launches and features.
• Conduct deep-dive analysis of user behaviour, funnel performance, and product engagement to uncover actionable insights.
• Monitor and explain changes in key product metrics, identifying root causes and business impact.
• Work closely with Data Engineering to design and maintain scalable data pipelines that support machine learning workflows, model retraining, and real-time inference.
Predictive Modelling & Machine Learning
• Build predictive models for conversion, churn, revenue, and engagement using regression, classification, or time-series approaches.
• Identify opportunities for prescriptive analytics and automation in key product and marketing workflows.
• Support development of reusable ML pipelines for production-scale use cases in product recommendation, lead scoring, and SEM planning.
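For illustration, a minimal scikit-learn sketch of the kind of churn/conversion classifier described above, trained on synthetic data (real features would come from the warehouse):

```python
# A churn-style classifier sketch with scikit-learn on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))   # e.g., visits, tenure, spend, clicks
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=5000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```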
Collaboration & Communication
• Present insights and recommendations to a variety of stakeholders — from ICs to executives — in a clear and compelling manner.
• Translate business needs into data problems, and complex findings into strategic action plans.
• Work cross-functionally with Engineering, Product, BI, and Marketing to deliver and deploy your work.
What You’ll Bring
Minimum Qualifications
• Bachelor’s degree in a quantitative field (Mathematics, Statistics, CS, Engineering, etc.).
• 4+ years in data science, growth analytics, or decision science roles.
• Strong SQL and Python skills (Pandas, Scikit-learn, NumPy).
• Hands-on experience with Tableau, Looker, or similar BI tools.
• Familiarity with LTV modelling, retention curves, cohort analysis, and media attribution.
• Experience with GA4, Google Ads, Meta, or other performance marketing platforms.
• Clear communication skills and a track record of turning data into decisions.
Nice to Have
• Experience with BigQuery and Google Cloud Platform (or equivalent).
• Familiarity with affiliate or lead-gen business models.
• Exposure to NLP, LLMs, embeddings, or agent-based analytics.
• Ability to contribute to model deployment workflows (e.g., using Vertex AI, Airflow, or Composer).
Why Join Us?
• Remote-first and flexible — work from anywhere in India with global exposure.
• Monthly long weekends (every third Friday off).
• Generous wellness stipends and parental leave.
• A collaborative team where your voice is heard and your work drives real impact.
• Opportunity to help shape the future of data science at one of the world’s most trusted brands.
We are looking for an AI/ML Engineer with 4-5 years of experience who can design, develop, and deploy scalable machine learning models and AI-driven solutions. The ideal candidate should have strong expertise in data processing, model building, and production deployment, along with solid programming and problem-solving skills.
Key Responsibilities
- Develop and deploy machine learning, deep learning, and NLP models for various business use cases.
- Build end-to-end ML pipelines including data preprocessing, feature engineering, training, evaluation, and production deployment.
- Optimize model performance and ensure scalability in production environments.
- Work closely with data scientists, product teams, and engineers to translate business requirements into AI solutions.
- Conduct data analysis to identify trends and insights.
- Implement MLOps practices for versioning, monitoring, and automating ML workflows.
- Research and evaluate new AI/ML techniques, tools, and frameworks.
- Document system architecture, model design, and development processes.
Required Skills
- Strong programming skills in Python (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch, Keras).
- Hands-on experience in building and deploying ML/DL models in production.
- Good understanding of machine learning algorithms, neural networks, NLP, and computer vision.
- Experience with REST APIs, Docker, Kubernetes, and cloud platforms (AWS/GCP/Azure).
- Working knowledge of MLOps tools such as MLflow, Airflow, DVC, or Kubeflow.
- Familiarity with data pipelines and big data technologies (Spark, Hadoop) is a plus.
- Strong analytical skills and ability to work with large datasets.
- Excellent communication and problem-solving abilities.
Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Science, AI, Machine Learning, or related fields.
- Experience in deploying models using cloud services (AWS Sagemaker, GCP Vertex AI, etc.).
- Experience in LLM fine-tuning or generative AI is an added advantage.
We’re looking for a skilled Senior Machine Learning Engineer to help us transform the Insurtech space. You’ll build intelligent agents and models that read, reason, and act.
Insurance ops are broken. Underwriters drown in PDFs. Risk clearance is chaos. Emails go in circles. We’ve lived it – and we’re fixing it. Bound AI is building agentic AI workflows that go beyond chat. We orchestrate intelligent agents to handle policy operations end-to-end:
• Risk clearance.
• SOV ingestion.
• Loss run summarization.
• Policy issuance.
• Risk triage.
No hallucinations. No handwaving. Just real-world AI that executes – in production, at scale.
Join us to help shape the future of insurance through advanced technology!
We’re Looking For:
- Deep experience in GenAI, LLM fine-tuning, and multi-agent orchestration (LangChain, DSPy, or similar).
- 5+ years of proven experience in the field
- Strong ML/AI engineering background in both foundational modeling (NLP, transformers, RAG) and traditional ML.
- Solid Python engineering chops – you write production-ready code, not just notebooks.
- A startup mindset – curiosity, speed, and obsession with shipping things that matter.
- Bonus – Experience with insurance or document intelligence (SOVs, Loss Runs, ACORDs).
What You’ll Be Doing:
- Develop foundation-model-based pipelines to read and understand insurance documents.
- Develop GenAI agents that handle real-time decision-making and workflow orchestration, and modular, composable agent architectures that interact with humans, APIs, and other agents.
- Work on auto-adaptive workflows that optimize around data quality, context, and risk signals.
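As a rough, framework-free sketch of the "read, reason, act" loop behind such agentic workflows (plan() stands in for a real LLM call; the tool and document names are hypothetical):

```python
# A minimal agent loop: route model "actions" to tools until a final
# answer is produced. plan() stands in for a real LLM call.
def plan(state: dict) -> dict:
    if "sov_rows" not in state:
        return {"action": "ingest_sov", "arg": state["document"]}
    return {"action": "final",
            "arg": f"{len(state['sov_rows'])} locations parsed"}

TOOLS = {"ingest_sov": lambda doc: [{"address": "..."}, {"address": "..."}]}

state = {"document": "schedule_of_values.xlsx"}
while True:
    step = plan(state)                       # reasoning step
    if step["action"] == "final":
        print("answer:", step["arg"])        # -> "2 locations parsed"
        break
    state["sov_rows"] = TOOLS[step["action"]](step["arg"])  # acting step
```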
Position: QA Engineer – Machine Learning Systems (5 - 7 years)
Location: Remote (Company in Mumbai)
Company: Big Rattle Technologies Private Limited
Immediate Joiners only.
Summary:
The QA Engineer will own quality assurance across the ML lifecycle—from raw data validation through feature engineering checks, model training/evaluation verification, batch prediction/optimization validation, and end-to-end (E2E) workflow testing. The role is hands-on with Python automation, data profiling, and pipeline test harnesses in Azure ML and Azure DevOps. Success means provably correct data, models, and outputs at production scale and cadence.
Key Responsibilities:
Test Strategy & Governance
- Define an ML-specific Test Strategy covering data quality KPIs, feature consistency checks, model acceptance gates (metrics + guardrails), and E2E run acceptance (timeliness, completeness, integrity).
- Establish versioned test datasets & golden baselines for repeatable regression of features, models, and optimizers.
Data Quality & Transformation
- Validate raw data extracts and landed data lake data: schema/contract checks, null/outlier thresholds, time-window completeness, duplicate detection, site/material coverage.
- Validate transformed/feature datasets: deterministic feature generation, leakage detection, drift vs. historical distributions, feature parity across runs (hash or statistical similarity tests).
- Implement automated data quality checks (e.g., Great Expectations/pytest + Pandas/SQL) executed in CI and AML pipelines.
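A minimal sketch of the automated data-quality checks described above, using pytest + pandas; the column names, thresholds, and file path are hypothetical:

```python
# Automated data-quality checks (pytest + pandas); names, thresholds,
# and the file path are hypothetical.
import pandas as pd

REQUIRED = {"site_id", "material_id", "price", "timestamp"}

def quality_issues(df: pd.DataFrame) -> list:
    issues = []
    missing = REQUIRED - set(df.columns)           # schema/contract check
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
        return issues
    if df["price"].isna().mean() > 0.01:           # null-rate threshold
        issues.append("price null rate above 1%")
    if df.duplicated(["site_id", "material_id", "timestamp"]).any():
        issues.append("duplicate rows detected")   # duplicate detection
    return issues

def test_raw_extract_quality():
    df = pd.read_parquet("raw_extract.parquet")    # hypothetical path
    assert quality_issues(df) == []
```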
Model Training & Evaluation
- Verify training inputs (splits, windowing, target leakage prevention) and hyperparameter configs per site/cluster.
- Automate metric verification (e.g., MAPE/MAE/RMSE, uplift vs. last model, stability tests) with acceptance thresholds and champion/challenger logic.
- Validate feature importance stability and sensitivity/elasticity sanity checks (price/volume monotonicity where applicable).
- Gate model registration/promotion in AML based on signed test artifacts and reproducible metrics.
Predictions, Optimization & Guardrails
- Validate batch predictions: result shapes, coverage, latency, and failure handling.
- Test model optimization outputs and enforced guardrails: detect violations and prove idempotent writes to DB.
- Verify API push to third party system (idempotency keys, retry/backoff, delivery receipts).
Pipelines & E2E
- Build pipeline test harnesses for AML pipelines (data-gen nightly, training weekly, prediction/optimization) including orchestrated synthetic runs and fault injection (missing slice, late competitor data, SB backlog).
- Run E2E tests from raw data store -> ADLS -> AML -> RDBMS -> APIM/Frontend, assert freshness SLOs and audit event completeness (Event Hubs -> ADLS immutable).
Automation & Tooling
- Develop Python-based automated tests (pytest) for data checks, model metrics, and API contracts; integrate with Azure DevOps (pipelines, badges, gates).
- Implement data-driven test runners (parameterized by site/material/model-version) and store signed test artifacts alongside models in AML Registry.
- Create synthetic test data generators and golden fixtures to cover edge cases (price gaps, competitor shocks, cold starts).
Reporting & Quality Ops
- Publish weekly test reports and go/no-go recommendations for promotions; maintain a defect taxonomy (data vs. model vs. serving vs. optimization).
- Contribute to SLI/SLO dashboards (prediction timeliness, queue/DLQ, push success, data drift) used for release gates.
Required Skills (hands-on experience in the following):
- Python automation (pytest, pandas, NumPy), SQL (PostgreSQL/Snowflake), and CI/CD (Azure DevOps) for fully automated ML QA.
- Strong grasp of ML validation: leakage checks, proper splits, metric selection (MAE/MAPE/RMSE), drift detection, sensitivity/elasticity sanity checks.
- Experience testing AML pipelines (pipelines/jobs/components), and message-driven integrations (Service Bus/Event Hubs).
- API test skills (FastAPI/OpenAPI, contract tests, Postman/pytest-httpx) + idempotency and retry patterns.
- Familiar with feature stores/feature engineering concepts and reproducibility.
- Solid understanding of observability (App Insights/Log Analytics) and auditability requirements.
Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.
- 5–7+ years in QA with 3+ years focused on ML/Data systems (data pipelines + model validation).
- Certification in Azure Data or ML Engineer Associate is a plus.
Why should you join Big Rattle?
Big Rattle Technologies specializes in AI/ML Products and Solutions as well as Mobile and Web Application Development. Our clients include Fortune 500 companies. Over the past 13 years, we have delivered multiple projects for international and Indian clients from various industries like FMCG, Banking and Finance, Automobiles, Ecommerce, etc. We also specialise in Product Development for our clients.
Big Rattle Technologies Private Limited is ISO 27001:2022 certified and CyberGRX certified.
What We Offer:
- Opportunity to work on diverse projects for Fortune 500 clients.
- Competitive salary and performance-based growth.
- Dynamic, collaborative, and growth-oriented work environment.
- Direct impact on product quality and client satisfaction.
- 5-day hybrid work week.
- Certification reimbursement.
- Healthcare coverage.
How to Apply:
Interested candidates are invited to submit their resume detailing their experience. Please describe your work experience and the kinds of projects you have worked on, highlighting your contributions and accomplishments.
Job Title: Python Developer
Experience Level: 4+ years
Job Summary:
We are seeking a skilled Python Developer with strong experience in developing and maintaining APIs. Familiarity with 2D and 3D geometry concepts is a strong plus. The ideal candidate will be passionate about clean code, scalable systems, and solving complex geometric and computational problems.
Key Responsibilities:
· Design, develop, and maintain robust and scalable APIs using Python.
· Work with geometric data structures and algorithms (2D/3D).
· Collaborate with cross-functional teams including front-end developers, designers, and product managers.
· Optimize code for performance and scalability.
· Write unit and integration tests to ensure code quality.
· Participate in code reviews and contribute to best practices.
Required Skills:
· Strong proficiency in Python.
· Experience with RESTful API development (e.g., Flask, FastAPI, Django REST Framework).
· Good understanding of 2D/3D geometry, computational geometry, or CAD-related concepts.
· Familiarity with libraries such as NumPy, SciPy, Shapely, Open3D, or PyMesh.
· Experience with version control systems (e.g., Git).
· Strong problem-solving and analytical skills.
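For candidates newer to the geometry stack, a small Shapely sketch of the kind of 2D work this role touches (the coordinates are arbitrary examples):

```python
# Typical 2D geometry operations with Shapely; coordinates are
# arbitrary examples.
from shapely.geometry import Point, Polygon

floor_plan = Polygon([(0, 0), (10, 0), (10, 8), (0, 8)])
column = Point(4, 3).buffer(0.5)           # circular column, radius 0.5

print(floor_plan.area)                     # 80.0
print(floor_plan.contains(column))         # True: column sits inside the plan
print(floor_plan.difference(column).area)  # usable area once it's subtracted
```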
Good to Have:
· Experience with 3D visualization tools or libraries (e.g., VTK, Blender API, Three.js via Python bindings).
· Knowledge of mathematical modeling or simulation.
· Exposure to cloud platforms (AWS, Azure, GCP).
· Familiarity with CI/CD pipelines.
Education:
· Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field.
About the Role
We’re looking for a Data Engineer who can turn messy, unstructured information into clean, usable insights. You’ll be building crawlers, integrating APIs, and setting up data flows that power our analytics and AI layers. If you love data plumbing as much as data puzzles — this role is for you.
Responsibilities
- Build and maintain Python-based data pipelines, crawlers, and integrations with 3rd-party APIs.
- Perform brute-force analytics and exploratory data work on crawled datasets to surface trends and anomalies.
- Develop and maintain ETL workflows — from raw ingestion to clean, structured outputs.
- Collaborate with product and ML teams to make data discoverable, queryable, and actionable.
- Optimize data collection for performance, reliability, and scalability.
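A hedged sketch of the crawl-then-clean flow described above, using requests; the endpoint, pagination cursor, and field names are hypothetical:

```python
# A crawl-then-clean sketch using requests; endpoint, cursor, and
# field names are hypothetical.
import requests

def crawl(endpoint: str) -> list:
    rows, url = [], endpoint
    while url:                              # follow pagination to the end
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        payload = resp.json()
        rows.extend(payload["items"])       # raw ingestion
        url = payload.get("next")           # hypothetical cursor field
    return rows

clean = [
    {"sku": r["sku"], "price": float(r["price"])}   # structured output
    for r in crawl("https://api.example.com/products")
    if r.get("price") is not None                   # drop bad rows
]
```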
Requirements
- Strong proficiency in Python and Jupyter notebooks.
- Experience building web crawlers / scrapers and integrating with REST / GraphQL APIs.
- Solid understanding of data structures and algorithms (DSA).
- Comfort with quick, hands-on analytics — slicing and validating data directly.
Good to Have
- Experience with schema design and database modeling.
- Exposure to both SQL and NoSQL databases; familiarity with vector databases is a plus.
- Knowledge of data orchestration tools (Dagster preferred).
- Understanding of data lifecycle management — from raw to enriched layers.
Why Join Us
We’re not offering employment — we’re offering ownership.
If you’re here for a job, this isn’t your place.
We’re building the data spine of a new-age supply chain intelligence platform — and we need people who can crush constraints, move fast, and make impossible things work.
You’ll have room to think, build, break, and reinvent — not follow.
If you thrive in chaos and create clarity, you’ll fit right in.
Screening Challenge
Before we schedule a call, we have an exciting challenge for you. Please go through the link below and submit your solution to us with the Subject line: SUB: [Role] [Full Name]
About Synorus
Synorus is building a next-generation ecosystem of AI-first products. Our flagship legal-AI platform LexVault is redefining legal research, drafting, knowledge retrieval, and case intelligence using domain-tuned LLMs, private RAG pipelines, and secure reasoning systems.
If you are passionate about AI, legaltech, and training high-performance models — this internship will put you on the front line of innovation.
Role Overview
We are seeking passionate AI/LLM Engineering Interns who can:
- Fine-tune LLMs for legal domain use-cases
- Train and experiment with open-source foundation models
- Work with large datasets efficiently
- Build RAG pipelines and text-processing frameworks
- Run model training workflows on Google Colab / Kaggle / Cloud GPUs
This is a hands-on engineering and research internship — you will work directly with senior founders & technical leadership.
Key Responsibilities
- Fine-tune transformer-based models (Llama, Mistral, Gemma, etc.)
- Build and preprocess legal datasets at scale
- Develop efficient inference & training pipelines
- Evaluate models for accuracy, hallucinations, and trustworthiness
- Implement RAG architectures (vector DBs + embeddings)
- Work with GPU environments (Colab/Kaggle/Cloud)
- Contribute to model improvements, prompt engineering & safety tuning
Must-Have Skills
- Strong knowledge of Python & PyTorch
- Understanding of LLMs, Transformers, Tokenization
- Hands-on experience with HuggingFace Transformers
- Familiarity with LoRA/QLoRA, PEFT training
- Data wrangling: Pandas, NumPy, tokenizers
- Ability to handle multi-GB datasets efficiently
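A hedged sketch of a LoRA fine-tuning setup with HuggingFace Transformers + PEFT; the base model id and target modules are assumptions that vary by architecture:

```python
# LoRA setup with Transformers + PEFT; the model id and target_modules
# are assumptions that differ across architectures.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"         # assumed base model id
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections (assumed)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)      # wrap base model with adapters
model.print_trainable_parameters()         # only a small fraction trains
```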
Bonus Skills
(Not mandatory — but a strong plus)
- Experience with RAG / vector DBs (Chroma, Qdrant, LanceDB)
- Familiarity with vLLM, llama.cpp, GGUF
- Worked on summarization, Q&A or document-AI projects
- Knowledge of legal texts (Indian laws/case-law/statutes)
- Open-source contributions or research work
What You Will Gain
- Real-world training on LLM fine-tuning & legal AI
- Exposure to production-grade AI pipelines
- Direct mentorship from engineering leadership
- Research + industry project portfolio
- Letter of experience + potential full-time offer
Ideal Candidate
- You experiment with models on weekends
- You love pushing GPUs to their limits
- You prefer research + implementation over theory alone
- You want to build AI that matters — not just demos
Location - Remote
Stipend - 5K - 10K
At Palcode.ai, we’re on a mission to fix the massive inefficiencies in pre-construction. Think about it - in a $10 trillion industry, estimators still spend weeks analyzing bids, project managers struggle with scattered data, and costly mistakes slip through complex contracts. We’re fixing this with purpose-built AI agents that work. Our platform works “magic” on pre-construction workflows, cutting them from weeks to hours. It’s not just about AI – it’s about bringing real, measurable impact to an industry ready for change. We are backed by names like AWS for Startups, Upekkha Accelerator, and Microsoft for Startups.
Why Palcode.ai
Tackle Complex Problems: Build AI that reads between the lines of construction bids, spots hidden risks in contracts, and makes sense of fragmented project data
High-Impact Code: Your code won't sit in a backlog – it goes straight to estimators and project managers who need it yesterday
Tech Challenges That Matter: Design systems that process thousands of construction documents, handle real-time pricing data, and make intelligent decisions
Build & Own: Shape our entire tech stack, from data processing pipelines to AI model deployment
Quick Impact: Small team, huge responsibility. Your solutions directly impact project decisions worth millions
Learn & Grow: Master the intersection of AI, cloud architecture, and construction tech while working with founders who've built and scaled construction software
Your Role:
- Design and build our core AI services and APIs using Python
- Create reliable, scalable backend systems that handle complex data
- Help set up cloud infrastructure and deployment pipelines
- Collaborate with our AI team to integrate machine learning models
- Write clean, tested, production-ready code
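As a minimal sketch of the Python API work described above, here is an illustrative FastAPI service; the route and schema are hypothetical, not Palcode's actual API:

```python
# An illustrative FastAPI AI-service endpoint; route and schema are
# hypothetical. Run with: uvicorn app:app
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class BidDocument(BaseModel):
    project_id: str
    text: str

@app.post("/analyze")
def analyze(doc: BidDocument) -> dict:
    # placeholder for a real model call; returns a dummy risk flag
    risky = "penalty" in doc.text.lower()
    return {"project_id": doc.project_id, "risk_flag": risky}
```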
You'll fit right in if:
- You have 1 year of hands-on Python development experience
- You're comfortable with full-stack development and cloud services
- You write clean, maintainable code and follow good engineering practices
- You're curious about AI/ML and eager to learn new technologies
- You enjoy fast-paced startup environments and take ownership of your work
How we will set you up for success
- You will work closely with the Founding team to understand what we are building.
- You will be given comprehensive training about the tech stack, with an opportunity to avail virtual training as well.
- You will be involved in a monthly one-on-one with the founders to discuss feedback
- A unique opportunity to learn from the best - we are Gold partners of the AWS, Razorpay, and Microsoft startup programs, which gives you access to a rich talent network to discuss and brainstorm ideas with.
- You’ll have a lot of creative freedom to execute new ideas. As long as you can convince us, and you’re confident in your skills, we’re here to back you in your execution.
Location: Bangalore, Remote
Compensation: Competitive salary + Meaningful equity
If you get excited about solving hard problems that have real-world impact, we should talk.
All the best!!
About the Role
We are looking for a passionate AI Engineer Intern (B.Tech, M.Tech / M.S. or equivalent) with strong foundations in Artificial Intelligence, Computer Vision, and Deep Learning to join our R&D team.
You will help us build and train realistic face-swap and deepfake video models, powering the next generation of AI-driven video synthesis technology.
This is a remote, individual-contributor role offering exposure to cutting-edge AI model development in a startup-like environment.
Key Responsibilities
- Research, implement, and fine-tune face-swap / deepfake architectures (e.g., FaceSwap, SimSwap, DeepFaceLab, LatentSync, Wav2Lip).
- Train and optimize models for realistic facial reenactment and temporal consistency.
- Work with GANs, VAEs, and diffusion models for video synthesis.
- Handle dataset creation, cleaning, and augmentation for face-video tasks.
- Collaborate with the AI core team to deploy trained models in production environments.
- Maintain clean, modular, and reproducible pipelines using Git and experiment-tracking tools.
Required Qualifications
- B.Tech, M.Tech / M.S. (or equivalent) in AI / ML / Computer Vision / Deep Learning.
- Certifications in AI or Deep Learning (DeepLearning.AI, NVIDIA DLI, Coursera, etc.).
- Proficiency in PyTorch or TensorFlow, OpenCV, FFmpeg.
- Understanding of CNNs, Autoencoders, GANs, Diffusion Models.
- Familiarity with datasets like CelebA, VoxCeleb, FFHQ, DFDC, etc.
- Good grasp of data preprocessing, model evaluation, and performance tuning.
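For orientation, a minimal PyTorch autoencoder, the building block behind classic face-swap pipelines (a shared encoder with per-identity decoders); the dimensions are illustrative:

```python
# A minimal autoencoder with a reconstruction loss; dimensions are
# illustrative, not a production face-swap model.
import torch
from torch import nn

class AutoEncoder(nn.Module):
    def __init__(self, dim=64 * 64 * 3, latent=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, latent), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(latent, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
batch = torch.rand(8, 64 * 64 * 3)        # 8 flattened 64x64 RGB faces
loss = nn.functional.mse_loss(model(batch), batch)  # reconstruction loss
loss.backward()
```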
Preferred Skills
- Prior hands-on experience with face-swap or lip-sync frameworks.
- Exposure to 3D morphable models, NeRF, motion transfer, or facial landmark tracking.
- Knowledge of multi-GPU training and model optimization.
- Familiarity with Rust / Python backend integration for inference pipelines.
What We Offer
- Work directly on production-grade AI video synthesis systems.
- Remote-first, flexible working hours.
- Mentorship from senior AI researchers and engineers.
- Opportunity to transition into a full-time role upon outstanding performance.
Location: Remote | Stipend: ₹10,000/month | Duration: 3–6 months
We are building an AI-powered chatbot platform and looking for an AI/ML Engineer with strong backend skills as our first technical hire. You will be responsible for developing the core chatbot engine using LLMs, creating backend APIs, and building scalable RAG pipelines.
You should be comfortable working independently, shipping fast, and turning ideas into real product features. This role is ideal for someone who loves building with modern AI tools and wants to be part of a fast-growing product from day one.
Responsibilities
• Build the core AI chatbot engine using LLMs (OpenAI, Claude, Gemini, Llama etc.)
• Develop backend services and APIs using Python (FastAPI/Flask)
• Create RAG pipelines using vector databases (Pinecone, FAISS, Chroma)
• Implement embeddings, prompt flows, and conversation logic
• Integrate chatbot with web apps, WhatsApp, CRMs and 3rd-party APIs
• Ensure system reliability, performance, and scalability
• Work directly with the founder in shaping the product and roadmap
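A hedged sketch of the retrieval half of such a RAG pipeline using FAISS; the embedding function is stubbed with random vectors purely for illustration:

```python
# The retrieval half of a RAG pipeline with FAISS; embed() is stubbed
# with deterministic random vectors purely for illustration.
import numpy as np
import faiss

DIM = 384

def embed(texts):
    # stand-in for a real embedding model (e.g., sentence-transformers)
    rng = np.random.default_rng(42)
    return rng.random((len(texts), DIM), dtype=np.float32)

docs = ["refund policy ...", "shipping times ...", "warranty terms ..."]
index = faiss.IndexFlatL2(DIM)             # exact L2 search over vectors
index.add(embed(docs))

distances, ids = index.search(embed(["how do refunds work?"]), 2)
context = [docs[i] for i in ids[0]]        # chunks to hand to the LLM
print(context)
```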
Requirements
• Strong experience with LLMs & Generative AI
• Excellent Python skills with FastAPI/Flask
• Hands-on experience with LangChain or RAG architectures
• Vector database experience (Pinecone/FAISS/Chroma)
• Strong understanding of REST APIs and backend development
• Ability to work independently, experiment fast, and deliver clean code
Nice to Have
• Experience with cloud (AWS/GCP)
• Node.js knowledge
• LangGraph, LlamaIndex
• MLOps or deployment experience
Mission
Own architecture across web + backend, ship reliably, and establish patterns the team can scale on.
Responsibilities
- Lead system architecture for Next.js (web) and FastAPI (backend); own code quality, reviews, and release cadence.
- Build and maintain the web app (marketing, auth, dashboard) and a shared TS SDK (@revilo/contracts, @revilo/sdk).
- Integrate Stripe, Maps, analytics; enforce accessibility and performance baselines.
- Define CI/CD (GitHub Actions), containerization (Docker), env/promotions (staging → prod).
- Partner with Mobile and AI engineers on API/tool schemas and developer experience.
Requirements
- 6–10+ years; expert TypeScript, strong Python.
- Next.js (App Router), TanStack Query, shadcn/ui; FastAPI, Postgres, pydantic/SQLModel.
- Auth (OTP/JWT/OAuth), payments, caching, pagination, API versioning.
- Practical CI/CD and observability (logs/metrics/traces).
Nice-to-haves
- OpenAPI typegen (Zod), feature flags, background jobs/queues, Vercel/EAS.
Key Outcomes (ongoing)
- Stable architecture with typed contracts; <2% crash/error on web, p95 API latency in budget, reliable weekly releases.
Responsibilities:
- Develop and maintain RPA workflows using Selenium, AWS Lambda, and message queues (SQS/RabbitMQ/Kafka).
- Build and evolve internal automation frameworks, reusable libraries, and CI-integrated test suites to accelerate developer productivity.
- Develop comprehensive test strategies (unit, integration, end-to-end), optimize performance, handle exceptions, and ensure high reliability of automation scripts.
- Monitor automation health and maintain dashboards/logging via cloud tools (CloudWatch, ELK, etc.).
- Champion automation standards workshops, write documentation, and coach other engineers on test-driven development and behavior-driven automation.
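A minimal sketch of the kind of Selenium step these RPA workflows automate; the URL and element locators are hypothetical:

```python
# One Selenium automation step; URL and locators are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://portal.example.com/login")
    driver.find_element(By.ID, "username").send_keys("bot-user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    assert "Dashboard" in driver.title     # basic success check
finally:
    driver.quit()                          # always release the browser
```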
Requirements:
- 4-5 years of experience in automation engineering with deep/hands-on experience in Python and modern browser automation frameworks (Selenium/PythonRPA).
- Solid background with desktop-automation solutions (UiPath, PythonRPA) and scripting legacy applications.
- Strong debugging skills, with an eye for edge cases and race conditions in distributed, asynchronous systems.
- Hands-on experience with AWS services like Lambda, S3 and API Gateway.
- Familiarity with REST APIs, webhooks, and queue-based async processing.
- Experience integrating with third-party platforms or enterprise systems.
- Ability to translate business workflows into technical automation logic.
- Able to evangelize automation best practices, present complex ideas clearly, and drive cross-team alignment.
Nice to Have:
- Experience with RPA frameworks (UiPath, BluePrism, etc.).
- Familiarity with building LLM-based workflows (LangChain, LlamaIndex) or custom agent loops to automate cognitive tasks.
- Exposure to automotive dealer management or VMS platforms.
- Understanding of cloud security and IAM practices.
Role Overview
We are seeking a Junior Developer with 1-3 years of experience and strong foundations in Python, databases, and AI technologies. The ideal candidate will support the development of AI-powered solutions, focusing on LLM integration, prompt engineering, and database-driven workflows. This is a hands-on role with opportunities to learn and grow into advanced AI engineering responsibilities.
Key Responsibilities
- Develop, test, and maintain Python-based applications and APIs.
- Design and optimize prompts for Large Language Models (LLMs) to improve accuracy and performance.
- Work with JSON-based data structures for request/response handling.
- Integrate and manage PostgreSQL (pgSQL) databases, including writing queries and handling data pipelines.
- Collaborate with the product and AI teams to implement new features.
- Debug, troubleshoot, and optimize performance of applications and workflows.
- Stay updated on advancements in LLMs, AI frameworks, and generative AI tools.
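A small, hedged sketch of the JSON-to-PostgreSQL workflow described above, using psycopg2; the DSN, table, and payload are hypothetical:

```python
# Storing a JSON record in PostgreSQL with psycopg2; the DSN, table,
# and payload are hypothetical.
import json
import psycopg2

payload = {"prompt": "summarize ...", "tokens": 412}   # example record

conn = psycopg2.connect("dbname=app user=app")         # assumed DSN
with conn, conn.cursor() as cur:                       # commits on success
    cur.execute(
        "INSERT INTO llm_requests (payload) VALUES (%s)",
        (json.dumps(payload),),                        # store the document
    )
```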
Required Skills & Qualifications
- Strong knowledge of Python (scripting, APIs, data handling).
- Basic understanding of Large Language Models (LLMs) and prompt engineering techniques.
- Experience with JSON data parsing and transformations.
- Familiarity with PostgreSQL or other relational databases.
- Ability to write clean, maintainable, and well-documented code.
- Strong problem-solving skills and eagerness to learn.
- Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
Nice-to-Have (Preferred)
- Exposure to AI/ML frameworks (e.g., LangChain, Hugging Face, OpenAI APIs).
- Experience working in startups or fast-paced environments.
- Familiarity with version control (Git/GitHub) and cloud platforms (AWS, GCP, or Azure).
What We Offer
- Opportunity to work on cutting-edge AI applications in permitting & compliance.
- Collaborative, growth-focused, and innovation-driven work culture.
- Mentorship and learning opportunities in AI/LLM development.
- Competitive compensation with performance-based growth.
About koolio.ai
Website: www.koolio.ai
koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond—easily. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.
About the Full-Time Position
We are looking for a Junior QA Engineer (Fresher) to join our team on a full-time, hybrid basis. This is an exciting opportunity for a motivated fresher who is eager to learn and grow in the field of backend testing and quality assurance. You will work closely with senior engineers to ensure the reliability, performance, and scalability of koolio.ai’s backend services. This role is perfect for recent graduates who want to kickstart their career in a dynamic, innovative environment.
Key Responsibilities:
- Assist in the design and execution of test cases for backend services, APIs, and databases
- Perform manual and automated testing to validate the functionality and performance of backend systems
- Help identify, log, and track bugs, working closely with developers for issue resolution
- Contribute to developing automated test scripts to ensure continuous integration and deployment
- Document test cases, results, and issues in a clear and organized manner
- Continuously learn and apply testing methodologies and tools under the guidance of senior engineers
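For a fresher, the day-one work often looks like this hedged pytest + requests sketch of a backend API test; the endpoint and expected contract are hypothetical:

```python
# A starter backend API test with pytest + requests; endpoint and
# response contract are hypothetical.
import requests

BASE_URL = "https://api.example.com"       # assumed service under test

def test_create_project_returns_id():
    resp = requests.post(
        f"{BASE_URL}/projects", json={"name": "demo"}, timeout=10
    )
    assert resp.status_code == 201          # resource created
    assert "id" in resp.json()              # contract check on the body
```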
Requirements and Skills:
- Education: Degree in Computer Science or a related field
- Work Experience: No prior work experience required; internships or academic projects related to software testing or backend development are a plus
- Technical Skills:
- Basic understanding of backend systems and APIs
- Familiarity with SQL for basic database testing
- Exposure to any programming or scripting language (e.g., Python, JavaScript, Java)
- Interest in learning test automation tools and frameworks such as Selenium, JUnit, or Pytest
- Familiarity with basic version control systems (e.g., Git)
- Soft Skills:
- Eagerness to learn and apply new technologies in a fast-paced environment
- Strong analytical and problem-solving skills
- Excellent attention to detail and a proactive mindset
- Ability to communicate effectively and work in a collaborative, remote team
- Other Skills:
- Familiarity with API testing tools (e.g., Postman) or automation tools is a bonus but not mandatory
- Basic knowledge of testing methodologies and the software development life cycle is helpful
Compensation and Benefits:
- Total Yearly Compensation: ₹4.5-6 LPA based on skills and experience
- Health Insurance: Comprehensive health coverage provided by the company
Why Join Us?
- Be a part of a passionate and visionary team at the forefront of audio content creation
- Work on an exciting, evolving product that is reshaping the way audio content is created and consumed
- Thrive in a fast-moving, self-learning startup environment that values innovation, adaptability, and continuous improvement
- Enjoy the flexibility of a full-time hybrid position with opportunities to grow professionally and expand your skills
- Collaborate with talented professionals from around the world, contributing to a product that has a real-world impact
We are building cutting-edge AI products in the Construction Tech space – transforming how General Contractors, Estimators, and Project Managers manage bids, RFIs, and scope gaps. Our platform integrates AI Agents, voice automation, and vision systems to reduce hours of manual work and unlock new efficiencies for construction teams.
Joining us means you will be part of a lean, high-impact team working on production-ready AI workflows that touch real projects in the field.
Role Overview
We are seeking a part-time consultant (10–15 hours/week) with strong backend development skills in Python (backend APIs) and ReactJS (frontend UI). You will work closely with the founding team to design, develop, and deploy features across the stack, directly contributing to AI-driven modules.
Key Responsibilities
- Build and maintain modular Python APIs (FastAPI/Flask) with clean architecture.
- You must have at least 24 hours monthly of hands-on backend Python work (excluding training and internships)
- We are ONLY looking for Backend Developers; Python-based Data Science and Analyst roles are not a match.
- Integrate AI services (OpenAI, LangChain, OCR/vision libraries) into production flows.
- Work with AWS services (Lambda, S3, RDS/Postgres, CloudWatch) for deployment.
- Collaborate with founders to convert fuzzy product ideas into technical deliverables.
- Ensure production readiness: logging, CI/CD pipelines, error handling, and test coverage.
Part-Time Eligibility Check -
- This is a fixed monthly paid role - NOT hourly
- We are a funded startup and, for compliance, payments are generally prorated to your current monthly drawings (no negotiations on this)
- You should have 2-3 hours per day to code
- You should be a pro at AI-based coding. We ship code really fast.
- You need to know tools like ChatGPT to generate solutions (not code) and use Cursor to build those solutions. Job ID: 319083
- You will be assigned an independent task every week - we run 2 weeks of sprints
- Confirm that you have read the requirements and are okay to proceed (this helps us remove spam applications).
What You’ll Be Doing:
● Own the architecture and roadmap for scalable, secure, and high-quality data pipelines and platforms.
● Lead and mentor a team of data engineers while establishing engineering best practices, coding standards, and governance models.
● Design and implement high-performance ETL/ELT pipelines using modern Big Data technologies for diverse internal and external data sources.
● Drive modernization initiatives including re-architecting legacy systems to support next-generation data products, ML workloads, and analytics use cases.
● Partner with Product, Engineering, and Business teams to translate requirements into robust technical solutions that align with organizational priorities.
● Champion data quality, monitoring, metadata management, and observability across the ecosystem.
● Lead initiatives to improve cost efficiency, data delivery SLAs, automation, and infrastructure scalability.
● Provide technical leadership on data modeling, orchestration, CI/CD for data workflows, and cloud-based architecture improvements.
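A small PySpark sketch of the ETL/ELT work described above; the paths and column names are illustrative:

```python
# An ETL/ELT aggregation sketch in PySpark; paths and column names
# are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

orders = spark.read.parquet("s3://lake/raw/orders/")   # hypothetical path
daily = (
    orders
    .filter(F.col("status") == "complete")
    .groupBy(F.to_date("created_at").alias("day"))
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)
daily.write.mode("overwrite").parquet("s3://lake/curated/daily_revenue/")
```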
Qualifications:
● Bachelor's degree in Engineering, Computer Science, or relevant field.
● 8+ years of relevant and recent experience in a Data Engineer role.
● 5+ years recent experience with Apache Spark and solid understanding of the fundamentals.
● Deep understanding of Big Data concepts and distributed systems.
● Demonstrated ability to design, review, and optimize scalable data architectures across ingestion.
● Strong coding skills with Scala, Python and the ability to quickly switch between them with ease.
● Advanced working SQL knowledge and experience working with a variety of relational databases such as Postgres and/or MySQL.
● Cloud experience with Databricks.
● Strong understanding of Delta Lake architecture and working with Parquet, JSON, CSV, and similar formats.
● Experience establishing and enforcing data engineering best practices, including CI/CD for data, orchestration and automation, and metadata management.
● Comfortable working in an Agile environment.
● Machine Learning knowledge is a plus.
● Demonstrated ability to operate independently, take ownership of deliverables, and lead technical decisions.
● Excellent written and verbal communication skills in English.
● Experience supporting and working with cross-functional teams in a dynamic environment.
REPORTING: This position will report to the Sr. Technical Manager or Director of Engineering, as assigned by Management.
EMPLOYMENT TYPE: Full-Time, Permanent
SHIFT TIMINGS: 10:00 AM - 07:00 PM IST
At Pipaltree, we’re building an AI-enabled platform that helps brands understand how they’re truly perceived — not through surveys or static dashboards, but through real conversations happening across the world.
We’re a small team solving deep technical and product challenges: orchestrating large-scale conversation data, applying reasoning and summarization models, and turning this into insights that businesses can trust.
Requirements:
- Deep understanding of distributed systems and asynchronous programming in Python
- Experience with building scalable applications using LLMs or traditional ML techniques
- Experience with Databases, Cache, and Micro services
- Experience with DevOps is a huge plus
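As a minimal sketch of the asynchronous-programming pattern named above, fanning out many conversation summaries concurrently (fetch_summary is a stand-in for a real model call):

```python
# Asynchronous fan-out with asyncio; fetch_summary simulates a real
# network or model call.
import asyncio

async def fetch_summary(conversation_id: int) -> str:
    await asyncio.sleep(0.1)              # simulates network/model latency
    return f"summary for conversation {conversation_id}"

async def main() -> None:
    # process many conversations concurrently instead of one at a time
    summaries = await asyncio.gather(
        *(fetch_summary(i) for i in range(100))
    )
    print(len(summaries), "summaries")

asyncio.run(main())
```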
About Us
We are a company where the ‘HOW’ of building software is just as important as the ‘WHAT.’ We partner with large organizations to modernize legacy codebases and collaborate with startups to launch MVPs, scale, or act as extensions of their teams. Guided by Software Craftsmanship values and eXtreme Programming Practices, we deliver high-quality, reliable software solutions tailored to our clients' needs.
We strive to:
- Bring our clients' dreams to life by being their trusted engineering partners, crafting innovative software solutions.
- Challenge offshore development stereotypes by delivering exceptional quality, and proving the value of craftsmanship.
- Empower clients to deliver value quickly and frequently to their end users.
- Ensure long-term success for our clients by building reliable, sustainable, and impactful solutions.
- Raise the bar of software craft by setting a new standard for the community.
Job Description
This is a remote position.
Our Core Values
- Quality with Pragmatism: We aim for excellence with a focus on practical solutions.
- Extreme Ownership: We own our work and its outcomes fully.
- Proactive Collaboration: Teamwork elevates us all.
- Pursuit of Mastery: Continuous growth drives us.
- Effective Feedback: Honest, constructive feedback fosters improvement.
- Client Success: Our clients’ success is our success.
Experience Level
This role is ideal for engineers with 6+ years of hands-on software development experience, particularly in Python and ReactJs at scale.
Role Overview
If you’re a Software Craftsperson who takes pride in clean, test-driven code and believes in Extreme Programming principles, we’d love to meet you. At Incubyte, we’re a DevOps organization where developers own the entire release cycle, meaning you’ll get hands-on experience across programming, cloud infrastructure, client communication, and everything in between. Ready to level up your craft and join a team that’s as quality-obsessed as you are? Read on!
What You'll Do
- Write Tests First: Start by writing tests to ensure code quality
- Clean Code: Produce self-explanatory, clean code with predictable results
- Frequent Releases: Make frequent, small releases
- Pair Programming: Work in pairs for better results
- Peer Reviews: Conduct peer code reviews for continuous improvement
- Product Team: Collaborate in a product team to build and rapidly roll out new features and fixes
- Full Stack Ownership: Handle everything from the front end to the back end, including infrastructure and DevOps pipelines
- Never Stop Learning: Commit to continuous learning and improvement
- AI-First Development Focus
- Leverage AI tools like GitHub Copilot, Cursor, Augment, Claude Code, etc., to accelerate development and automate repetitive tasks.
- Use AI to detect potential bugs, code smells, and performance bottlenecks early in the development process.
- Apply prompt engineering techniques to get the best results from AI coding assistants.
- Evaluate AI generated code/tools for correctness, performance, and security before merging.
- Continuously explore, stay ahead by experimenting and integrating new AI powered tools and workflows as they emerge.
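A tiny sketch of the write-tests-first practice above: the test is written before, and specifies, the implementation (slugify is a hypothetical example):

```python
# Test-first in miniature: the test below was written before slugify()
# and pins down the behaviour the implementation must satisfy.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

def slugify(title: str) -> str:
    words = "".join(c if c.isalnum() else " " for c in title).split()
    return "-".join(w.lower() for w in words)
```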
Requirements
What We're Looking For
- Proficiency in some or all of the following: ReactJS, JavaScript, Object Oriented Programming in JS
- 5+ years of Object-Oriented Programming with Python or equivalent
- 5+ years of experience working with relational (SQL) databases
- 5+ years of experience using Git to contribute code as part of a team of Software Craftspeople
- AI Skills & Mindset
- Power user of AI assisted coding tools (e.g., GitHub Copilot, Cursor, Augment, Claude Code).
- Strong prompt engineering skills to effectively guide AI in crafting relevant, high-quality code.
- Ability to critically evaluate AI generated code for logic, maintainability, performance, and security.
- Curiosity and adaptability to quickly learn and apply new AI tools and workflows.
- AI evaluation mindset balancing AI speed with human judgment for robust solutions.
Benefits
What We Offer
- Dedicated Learning & Development Budget: Fuel your growth with a budget dedicated solely to learning.
- Conference Talks Sponsorship: Amplify your voice! If you’re speaking at a conference, we’ll fully sponsor and support your talk.
- Cutting-Edge Projects: Work on exciting projects with the latest AI technologies
- Employee-Friendly Leave Policy: Recharge with ample leave options designed for a healthy work-life balance.
- Comprehensive Medical & Term Insurance: Full coverage for you and your family’s peace of mind.
- And More: Extra perks to support your well-being and professional growth.
Work Environment
- Remote-First Culture: At Incubyte, we thrive on a culture of structured flexibility — while you have control over where and how you work, everyone commits to a consistent rhythm that supports their team during core working hours for smooth collaboration and timely project delivery. By striking the perfect balance between freedom and responsibility, we enable ourselves to deliver high-quality standards our customers recognize us by. With asynchronous tools and push for active participation, we foster a vibrant, hands-on environment where each team member’s engagement and contributions drive impactful results.
- Work-In-Person: Twice a year, we come together for two-week sprints to collaborate in person, foster stronger team bonds, and align on goals. Additionally, we host an annual retreat to recharge and connect as a team. All travel expenses are covered.
- Proactive Collaboration: Collaboration is central to our work. Through daily pair programming sessions, we focus on mentorship, continuous learning, and shared problem-solving. This hands-on approach keeps us innovative and aligned as a team.
Incubyte is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Detailed JD (Roles and Responsibilities)
Full-stack (backend-focused) ownership. Programming: Python, React (good to have: C#, Node). Agile. Flexible to learn new things.