50+ Remote Python Jobs in India
At Palcode.ai, we're on a mission to fix the massive inefficiencies in pre-construction. Think about it: in a $10 trillion industry, estimators still spend weeks analyzing bids, project managers struggle with scattered data, and costly mistakes slip through complex contracts. We're fixing this with purpose-built AI agents that work. Our platform cuts pre-construction workflows from weeks to hours. It's not just about AI – it's about bringing real, measurable impact to an industry ready for change. We are backed by names like AWS for Startups, Upekkha Accelerator, and Microsoft for Startups.
Why Palcode.ai
Tackle Complex Problems: Build AI that reads between the lines of construction bids, spots hidden risks in contracts, and makes sense of fragmented project data
High-Impact Code: Your code won't sit in a backlog – it goes straight to estimators and project managers who need it yesterday
Tech Challenges That Matter: Design systems that process thousands of construction documents, handle real-time pricing data, and make intelligent decisions
Build & Own: Shape our entire tech stack, from data processing pipelines to AI model deployment
Quick Impact: Small team, huge responsibility. Your solutions directly impact project decisions worth millions
Learn & Grow: Master the intersection of AI, cloud architecture, and construction tech while working with founders who've built and scaled construction software
Your Role:
- Design and build our core AI services and APIs using Python
- Create reliable, scalable backend systems that handle complex data
- Help set up cloud infrastructure and deployment pipelines
- Collaborate with our AI team to integrate machine learning models
- Write clean, tested, production-ready code
You'll fit right in if:
- You have 1 year of hands-on Python development experience
- You're comfortable with full-stack development and cloud services
- You write clean, maintainable code and follow good engineering practices
- You're curious about AI/ML and eager to learn new technologies
- You enjoy fast-paced startup environments and take ownership of your work
How we will set you up for success
- You will work closely with the Founding team to understand what we are building.
- You will be given comprehensive training on the tech stack, with the option of virtual training as well.
- You will have a monthly one-on-one with the founders to discuss feedback.
- A unique opportunity to learn from the best - we are Gold partners of the AWS, Razorpay, and Microsoft startup programs, giving you access to experienced people to discuss and brainstorm ideas with.
- You’ll have a lot of creative freedom to execute new ideas. As long as you can convince us, and you’re confident in your skills, we’re here to back you in your execution.
Location: Bangalore, Remote
Compensation: Competitive salary + Meaningful equity
If you get excited about solving hard problems that have real-world impact, we should talk.
All the best!!
Job Description: DevOps Engineer
Location: Bangalore / Hybrid / Remote
Company: LodgIQ
Industry: Hospitality / SaaS / Machine Learning
About LodgIQ
Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the travel industry. By leveraging machine learning and artificial intelligence, we enable precise forecasting and optimized pricing for hotel revenue management. Backed by Highgate Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a global presence.
Role Summary:
We are seeking a Senior DevOps Engineer with 5+ years of strong hands-on experience in AWS, Kubernetes, CI/CD, infrastructure as code, and cloud-native technologies. This role involves designing and implementing scalable infrastructure, improving system reliability, and driving automation across our cloud ecosystem.
Key Responsibilities:
• Architect, implement, and manage scalable, secure, and resilient cloud infrastructure on AWS
• Lead DevOps initiatives including CI/CD pipelines, infrastructure automation, and monitoring
• Deploy and manage Kubernetes clusters and containerized microservices
• Define and implement infrastructure as code using Terraform/CloudFormation
• Monitor production and staging environments using tools like CloudWatch, Prometheus, and Grafana
• Support MongoDB and MySQL database administration and optimization
• Ensure high availability, performance tuning, and cost optimization
• Guide and mentor junior engineers, and enforce DevOps best practices
• Drive system security, compliance, and audit readiness in cloud environments
• Collaborate with engineering, product, and QA teams to streamline release processes
Required Qualifications:
• 5+ years of DevOps/Infrastructure experience in production-grade environments
• Strong expertise in AWS services: EC2, EKS, IAM, S3, RDS, Lambda, VPC, etc.
• Proven experience with Kubernetes and Docker in production
• Proficient with Terraform, CloudFormation, or similar IaC tools
• Hands-on experience with CI/CD pipelines using Jenkins, GitHub Actions, or similar
• Advanced scripting in Python, Bash, or Go
• Solid understanding of networking, firewalls, DNS, and security protocols
• Exposure to monitoring and logging stacks (e.g., ELK, Prometheus, Grafana)
• Experience with MongoDB and MySQL in cloud environments
Preferred Qualifications:
• AWS Certified DevOps Engineer or Solutions Architect
• Experience with service mesh (Istio, Linkerd), Helm, or ArgoCD
• Familiarity with Zero Downtime Deployments, Canary Releases, and Blue/Green Deployments
• Background in high-availability systems and incident response
• Prior experience in a SaaS, ML, or hospitality-tech environment
Tools and Technologies You’ll Use:
• Cloud: AWS
• Containers: Docker, Kubernetes, Helm
• CI/CD: Jenkins, GitHub Actions
• IaC: Terraform, CloudFormation
• Monitoring: Prometheus, Grafana, CloudWatch
• Databases: MongoDB, MySQL
• Scripting: Bash, Python
• Collaboration: Git, Jira, Confluence, Slack
Why Join Us?
• Competitive salary and performance bonuses.
• Remote-friendly work culture.
• Opportunity to work on cutting-edge tech in AI and ML.
• Collaborative, high-growth startup environment.
• For more information, visit http://www.lodgiq.com
Job Description
We are looking for motivated IT professionals with at least one year of industry experience. The ideal candidate should have hands-on experience in AWS, Azure, AI, or Cloud technologies, or should be enthusiastic and ready to upskill and shift to new and emerging technologies. This role is primarily remote; however, candidates may be required to visit the office occasionally for meetings or project needs.
Key Requirements
- Minimum 1 year of experience in the IT industry
- Exposure to AWS / Azure / AI / Cloud platforms (any one or more)
- Willingness to learn and adapt to new technologies
- Strong problem-solving and communication skills
- Ability to work independently in a remote setup
- Must have a proper work-from-home environment (laptop, stable internet, quiet workspace)
Education Qualification
- B.Tech / BE / MCA / M.Sc (IT) / equivalent
We are looking for a highly skilled Data Engineer. The ideal candidate will have hands-on expertise in Big Data technologies, with a strong foundation in distributed data processing, real-time pipelines, and large-scale data systems. You will design, build, and optimize data solutions that power insights, reporting, and decision-making for our advertising ecosystem.
Key Responsibilities
- Design, develop, and maintain large-scale data pipelines to support Ads reporting, attribution, and analytics use cases.
- Work extensively with Hive, Spark, SQL, Scala, and Kafka to process and manage petabyte-scale datasets.
- Optimize data workflows for performance, scalability, and cost efficiency.
- Partner with data scientists, product managers, and platform engineers to deliver high-quality, reliable datasets and APIs.
- Ensure data quality, integrity, and consistency across multiple data sources.
- Troubleshoot and resolve issues in real-time streaming pipelines and batch data jobs.
- Continuously evaluate new technologies to enhance the Ads Data platform.
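To make the streaming side of this role concrete, here is a minimal, hedged sketch of a Spark Structured Streaming job that reads ad events from Kafka and writes hourly aggregates. The broker address, topic name, JSON field, and output paths are illustrative assumptions, not this team's actual configuration:

```python
# Hedged sketch: Spark Structured Streaming reading ad events from Kafka
# and writing hourly per-campaign counts. All names/paths are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ads-hourly-counts-demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker address
    .option("subscribe", "ad-impressions")             # assumed topic name
    .load()
)

# Kafka delivers raw bytes; pull one field out of the JSON payload.
parsed = events.select(
    F.get_json_object(F.col("value").cast("string"), "$.campaign_id").alias("campaign_id"),
    F.col("timestamp"),
)

counts = (
    parsed.withWatermark("timestamp", "10 minutes")  # bound state for late events
    .groupBy(F.window("timestamp", "1 hour"), "campaign_id")
    .count()
)

query = (
    counts.writeStream.outputMode("append").format("parquet")
    .option("path", "/data/ads/hourly_counts")
    .option("checkpointLocation", "/data/ads/_checkpoints")
    .start()
)
query.awaitTermination()
```

The watermark is the detail that matters at scale: it bounds the aggregation state Spark keeps for late-arriving events, which is what keeps a job like this stable over petabyte-scale streams.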
Requirements
- Strong programming experience in Scala (preferred), Java, or Python.
- Hands-on experience with Apache Spark (batch & streaming) for large-scale data processing.
- Proficiency in Hive, SQL, and data modeling for analytical workloads.
- Experience working with Kafka for real-time event streaming.
- Solid understanding of Big Data ecosystems (S4, Hive, Presto, Delta etc.).
- Strong debugging, performance tuning, and problem-solving skills.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
Nice to Have
- Experience in AdTech, Attribution, or Campaign Analytics.
- Familiarity with cloud-based big data solutions (AWS EMR, GCP BigQuery, Databricks, etc.).
- Familiarity with workflow schedulers like Airflow
- Knowledge of data governance, security, and compliance best practices.
Benefits
- Best in class salary: We hire only the best, and we pay accordingly.
- Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field.
- Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day.
What We’re Looking For
• Hands-on experience in keyword research, competitor analysis and content gap identification
• Strong understanding of on-page SEO: meta tags, schema, internal linking, URL structure and content optimization
• Experience managing technical SEO: site audits, crawling issues, indexing, page speed and mobile optimization
• Ability to plan and execute backlink strategies using safe, high-quality methods
• Familiarity with tools like Google Search Console, Google Analytics, Ahrefs, SEMrush, Screaming Frog
• Experience working with blogs, landing pages and long-form content
• Ability to coordinate with writers, developers and designers for SEO requirements
• Proven experience in improving rankings for competitive keywords
• Understanding of local SEO and structured data markup
• Comfortable working in a fast-moving, bootstrapped startup environment
• Bonus: Experience with Django or basic HTML/CSS is useful but not mandatory
What You Will Work On
• Improving rankings for keywords like “Top boarding schools in India”, “Best boarding schools”, etc.
• Conducting monthly audits and pushing technical SEO fixes
• Growing EduPowerPro’s organic traffic through structured content planning
• Managing backlink acquisition and partnerships
• Tracking performance and presenting monthly insights
You will architect, implement, and operationalize the core systems that power our platform. Specifically, you will:
- Design and implement a correctness-critical ledger for tracking value movements, state transitions, and automated operations.
- Build and own event-driven pipelines that process transactions, orchestrate workflows, and ensure reliable execution across internal and external systems.
- Develop orchestration services powering automated, agent-driven decisioning and multi-step execution flows.
- Integrate external providers for identity, verification, and digital value movement with an emphasis on security and predictable behavior.
- Define schemas, invariants, error models, and execution guarantees to uphold system integrity.
- Establish operational tooling including monitoring dashboards, alerting, logs, and distributed tracing.
- Lead engineering standards and processes, including code review practices, architectural rigor, and release discipline.
- Mentor engineers and support the hiring process as we expand the team.
- Ensure delivery by breaking down complex requirements into clear milestones and driving them to completion.
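The role itself calls for Go or Rust; purely as an illustration of what "correctness-critical ledger" means, here is a minimal Python sketch of the double-entry invariant. All names are hypothetical, not this platform's schema:

```python
# Hedged sketch: every transfer posts balanced debit/credit legs, so the sum
# across all accounts is always zero -- the core double-entry invariant.
from dataclasses import dataclass, field
from decimal import Decimal

@dataclass
class Ledger:
    balances: dict = field(default_factory=dict)

    def transfer(self, debit_acct: str, credit_acct: str, amount: Decimal) -> None:
        if amount <= 0:
            raise ValueError("transfer amount must be positive")
        # Post both legs together; in production this runs in one DB transaction.
        self.balances[debit_acct] = self.balances.get(debit_acct, Decimal(0)) - amount
        self.balances[credit_acct] = self.balances.get(credit_acct, Decimal(0)) + amount
        assert sum(self.balances.values()) == 0, "ledger invariant violated"

ledger = Ledger()
ledger.transfer("alice", "bob", Decimal("25.00"))
```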
BASIC QUALIFICATIONS
- 7+ years of backend or distributed systems engineering experience.
- Extensive experience designing correctness-critical systems (ledgers, orchestration engines, transactional backends, etc.).
- Expert-level proficiency in Go, Rust, or similar systems languages.
- Deep experience with schema design, data modeling, consistency models, and fault-tolerant systems.
- Proven experience integrating multiple high-reliability external systems.
- Demonstrated ability to lead technically, mentor engineers, and influence engineering practices.
- Ability to ship complex systems end-to-end with minimal oversight.
PREFERRED QUALIFICATIONS
- Experience with verification, trust-minimized execution, or blockchain-integrated systems.
- Experience leading or setting foundational processes for an engineering team.
- Strong intuition around agentic or automated decisioning flows.
- Prior work building consumer-scale systems or financial-grade infrastructure.
WHAT WE OFFER
- Competitive compensation.
- High ownership and the opportunity to shape product direction.
- Direct impact on foundational cryptographic and blockchain infrastructure.
- A collaborative team that values clarity, autonomy, and velocity.
Note: This role can be remote; however, Bengaluru or Mumbai candidates will be prioritized.
🚀 About Us
At Remedo, we're building the future of digital healthcare marketing. We help doctors grow their online presence, connect with patients, and drive real-world outcomes like higher appointment bookings and better Google reviews — all while improving their SEO.
We’re also the creators of Convertlens, our generative AI-powered engagement engine that transforms how clinics interact with patients across the web. Think hyper-personalized messaging, automated conversion funnels, and insights that actually move the needle.
We’re a lean, fast-moving team with startup DNA. If you like ownership, impact, and tech that solves real problems — you’ll fit right in.
🛠️ What You’ll Do
- Build and maintain scalable Python back-end systems that power Convertlens and internal applications.
- Develop Agentic AI applications and workflows to drive automation and insights.
- Design and implement connectors to third-party systems (APIs, CRMs, marketing tools) to source and unify data.
- Ensure system reliability with strong practices in observability, monitoring, and troubleshooting.
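As a flavor of the connector work above, here is a minimal, hedged sketch of a third-party API integration with retry and backoff. The endpoint, token handling, and field names are assumptions for illustration, not Remedo's actual integrations:

```python
# Hedged sketch: a CRM connector with retries on transient failures.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(total=3, backoff_factor=0.5, status_forcelist=[429, 500, 502, 503])
session.mount("https://", HTTPAdapter(max_retries=retries))

def fetch_contacts(api_base: str, token: str) -> list[dict]:
    """Pull contacts from a hypothetical CRM API and normalize the payload."""
    resp = session.get(
        f"{api_base}/v1/contacts",  # illustrative route
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return [{"id": c["id"], "email": c.get("email")} for c in resp.json()["contacts"]]
```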
⚙️ What You Bring
- 2+ years of hands-on experience in Python back-end development.
- Strong understanding of REST API design and integration.
- Proficiency with relational databases (MySQL/PostgreSQL).
- Familiarity with observability tools (logging, monitoring, tracing — e.g., OpenTelemetry, Prometheus, Grafana, ELK).
- Experience maintaining production systems with a focus on reliability and scalability.
- Bonus: Exposure to Node.js and modern front-end frameworks like React.
- Strong problem-solving skills and comfort working in a startup/product environment.
- A builder mindset — scrappy, curious, and ready to ship.
💼 Perks & Culture
- Flexible work setup — remote-first for most, hybrid if you’re in Delhi NCR.
- A high-growth, high-impact environment where your code goes live fast.
- Opportunities to work with Agentic AI and cutting-edge tech.
- Small team, big vision — your work truly matters here.
Responsibilities
Develop and maintain web and backend components using Python, Node.js, and Zoho tools
Design and implement custom workflows and automations in Zoho
Perform code reviews to maintain quality standards and best practices
Debug and resolve technical issues promptly
Collaborate with teams to gather and analyze requirements for effective solutions
Write clean, maintainable, and well-documented code
Manage and optimize databases to support changing business needs
Contribute individually while mentoring and supporting team members
Adapt quickly to a fast-paced environment and meet expectations within the first month
Leadership Opportunities
Lead and mentor junior developers in the team
Drive projects independently while collaborating with the broader team
Act as a technical liaison between the team and stakeholders to deliver effective solutions
Selection Process
1. HR Screening: Review of qualifications and experience
2. Online Technical Assessment: Test coding and problem-solving skills
3. Technical Interview: Assess expertise in web development, Python, Node.js, APIs, and Zoho
4. Leadership Evaluation: Evaluate team collaboration and leadership abilities
5. Management Interview: Discuss cultural fit and career opportunities
6. Offer Discussion: Finalize compensation and role specifics
Experience Required
2-6 years of relevant experience as a Software Developer
Proven ability to work as a self-starter and contribute individually
Strong technical and interpersonal skills to support team members effectively
Role: Technical Co-Founder
Experience: 3+ years (Mandatory)
Compensation: Equity Only (No Salary)
Requirements:
Strong full-stack development skills
Experience building web applications from scratch
Able to manage complete tech independently
Startup mindset & ownership attitude
The Senior Software Developer is responsible for development of CFRA’s report generation framework using a modern technology stack: Python on AWS cloud infrastructure, SQL, and Web technologies. This is an opportunity to make an impact on both the team and the organization by being part of the design and development of a new customer-facing report generation framework that will serve as the foundation for all future report development at CFRA.
The ideal candidate has a passion for solving business problems with technology and can effectively communicate business and technical needs to stakeholders. We are looking for candidates that value collaboration with colleagues and having an immediate, tangible impact for a leading global independent financial insights and data company.
Key Responsibilities
- Analyst Workflows: Design and development of CFRA’s integrated content publishing platform using a proprietary 3rd party editorial and publishing platform for integrated digital publishing.
- Designing and Developing APIs: Design and development of robust, scalable, and secure APIs on AWS, considering factors like performance, reliability, and cost-efficiency.
- AWS Service Integration: Integrate APIs with various AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, AWS Glue, and others, to build comprehensive and efficient solutions.
- Performance Optimization: Identify and implement optimizations to improve performance, scalability, and efficiency, leveraging AWS services and tools.
- Security and Compliance: Ensure APIs are developed following best security practices, including authentication, authorization, encryption, and compliance with relevant standards and regulations.
- Monitoring and Logging: Implement monitoring and logging solutions for APIs using AWS CloudWatch, AWS X-Ray, or similar tools, to ensure availability, performance, and reliability.
- Continuous Integration and Deployment (CI/CD): Establish and maintain CI/CD pipelines for API development, automating testing, deployment, and monitoring processes on AWS.
- Documentation and Training: Create and maintain comprehensive documentation for internal and external users, and provide training and support to developers and stakeholders.
- Team Collaboration: Collaborate effectively with cross-functional teams, including product managers, designers, and other developers, to deliver high-quality solutions that meet business requirements.
- Problem Solving: Troubleshoot complex issues, identifying root causes and implementing solutions to ensure system stability and performance.
- Stay Updated: Stay updated with the latest trends, tools, and technologies related to development on AWS, and continuously improve your skills and knowledge.
Desired Skills and Experience
- Development: 5+ years of extensive experience in designing, developing, and deploying using modern technologies, with a focus on scalability, performance, and security.
- AWS Services: proficiency in using AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, Amazon SES, Amazon RDS, Amazon DynamoDB, and others, to build and deploy API solutions.
- Programming Languages: Proficiency in programming languages commonly used for development, such as Python or Node.js, as well as experience with serverless frameworks on AWS.
- Architecture Design: Ability to design scalable and resilient API architectures using microservices, serverless, or other modern architectural patterns, considering factors like performance, reliability, and cost-efficiency.
- Security: Strong understanding of security principles and best practices, including authentication, authorization, encryption, and compliance with standards like OAuth, OpenID Connect, and AWS IAM.
- DevOps Practices: Familiarity with DevOps practices and tools, including CI/CD pipelines, infrastructure as code (IaC), and automated testing, to ensure efficient and reliable deployment on AWS.
- Problem-solving Skills: Excellent problem-solving skills, with the ability to troubleshoot complex issues, identify root causes, and implement effective solutions to ensure system stability and performance.
- Communication Skills: Strong communication skills, with the ability to effectively communicate technical concepts to both technical and non-technical stakeholders, and collaborate with cross-functional teams.
- Agile Methodologies: Experience working in Agile development environments, following practices like Scrum or Kanban, and ability to adapt to changing requirements and priorities.
- Continuous Learning: A commitment to continuous learning and staying updated with the latest trends, tools, and technologies related to development and AWS services.
- Bachelor's Degree: A bachelor's degree in Computer Science, Software Engineering, or a related field is often preferred, although equivalent experience and certifications can also be valuable.
The Lead Software Developer is responsible for development of CFRA’s report generation framework using a modern technology stack: Python on AWS cloud infrastructure, SQL, and Web technologies. This is an opportunity to make an impact on both the team and the organization by being part of the design and development of a new customer-facing report generation framework that will serve as the foundation for all future report development at CFRA.
The ideal candidate has a passion for solving business problems with technology and can effectively communicate business and technical needs to stakeholders. We are looking for candidates that value collaboration with colleagues and having an immediate, tangible impact for a leading global independent financial insights and data company.
Key Responsibilities
- Analyst Workflows: Lead the design and development of CFRA’s integrated content publishing platform using a proprietary 3rd party editorial and publishing platform for integrated digital publishing.
- Designing and Developing APIs: Lead the design and development of robust, scalable, and secure APIs on AWS, considering factors like performance, reliability, and cost-efficiency.
- Architecture Planning: Collaborate with architects and stakeholders to define architecture, including API gateway, microservices, and serverless components, ensuring alignment with business goals and AWS best practices.
- Technical Leadership: Provide technical guidance and leadership to the development team, ensuring adherence to coding standards, best practices, and AWS guidelines.
- AWS Service Integration: Integrate APIs with various AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, AWS Glue, and others, to build comprehensive and efficient solutions.
- Performance Optimization: Identify and implement optimizations to improve performance, scalability, and efficiency, leveraging AWS services and tools.
- Security and Compliance: Ensure APIs are developed following best security practices, including authentication, authorization, encryption, and compliance with relevant standards and regulations.
- Monitoring and Logging: Implement monitoring and logging solutions for APIs using AWS CloudWatch, AWS X-Ray, or similar tools, to ensure availability, performance, and reliability.
- Continuous Integration and Deployment (CI/CD): Establish and maintain CI/CD pipelines for API development, automating testing, deployment, and monitoring processes on AWS.
- Documentation and Training: Create and maintain comprehensive documentation for internal and external users, and provide training and support to developers and stakeholders.
- Team Collaboration: Collaborate effectively with cross-functional teams, including product managers, designers, and other developers, to deliver high-quality solutions that meet business requirements.
- Problem Solving: Lead troubleshooting efforts, identifying root causes and implementing solutions to ensure system stability and performance.
- Stay Updated: Stay updated with the latest trends, tools, and technologies related to development on AWS, and continuously improve your skills and knowledge.
Desired Skills and Experience
- Development: 10+ years of extensive experience in designing, developing, and deploying using modern technologies, with a focus on scalability, performance, and security.
- AWS Services: Strong proficiency in using AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, Amazon SES, Amazon RDS, Amazon DynamoDB, and others, to build and deploy API solutions.
- Programming Languages: Proficiency in programming languages commonly used for development, such as Python or Node.js, as well as experience with serverless frameworks on AWS.
- Architecture Design: Ability to design scalable and resilient API architectures using microservices, serverless, or other modern architectural patterns, considering factors like performance, reliability, and cost-efficiency.
- Security: Strong understanding of security principles and best practices, including authentication, authorization, encryption, and compliance with standards like OAuth, OpenID Connect, and AWS IAM.
- DevOps Practices: Familiarity with DevOps practices and tools, including CI/CD pipelines, infrastructure as code (IaC), and automated testing, to ensure efficient and reliable deployment on AWS.
- Problem-solving Skills: Excellent problem-solving skills, with the ability to troubleshoot complex issues, identify root causes, and implement effective solutions to ensure system stability and performance.
- Team Leadership: Experience leading and mentoring a team of developers, providing technical guidance, code reviews, and fostering a collaborative and innovative environment.
- Communication Skills: Strong communication skills, with the ability to effectively communicate technical concepts to both technical and non-technical stakeholders, and collaborate with cross-functional teams.
- Agile Methodologies: Experience working in Agile development environments, following practices like Scrum or Kanban, and ability to adapt to changing requirements and priorities.
- Continuous Learning: A commitment to continuous learning and staying updated with the latest trends, tools, and technologies related to development and AWS services.
- Bachelor's Degree: A bachelor's degree in Computer Science, Software Engineering, or a related field is often preferred, although equivalent experience and certifications can also be valuable.
Job Summary:
Full-time | 6+ years of experience
We are looking for a Lead Data Scientist who can lead a data science team and help us gain useful insight from raw data. Lead Data Scientist responsibilities include managing the client and the data science team, planning projects, and building analytics models. You should have strong problem-solving ability and a knack for statistical analysis. If you're also able to align our data products with our business goals, we'd like to talk to you.
Responsibilities
● Identify, develop, and implement appropriate statistical techniques, algorithms, and deep learning/ML models to create new, scalable solutions that address business challenges across industry domains.
● Define, develop, maintain, and evolve data models, tools, and capabilities.
● Communicate your findings to the appropriate teams through visualizations.
● Provide solutions for, but not limited to: object detection/image recognition, natural language processing, sentiment analysis, topic modeling, concept extraction, recommender systems, text classification, clustering, customer segmentation and targeting, propensity modeling, churn modeling, lifetime value estimation, forecasting, modeling response to incentives, marketing mix optimization, price optimization, and GenAI/LLMs.
● Build, train, and lead a team of data scientists.
● Follow and maintain an agile methodology for delivering on project milestones.
● Apply statistical ideas and methods to data sets to answer business problems.
● Mine and analyze data to drive optimization and improvement of product development, marketing techniques, and business strategies.
● Work on ML model containerisation.
● Create ML model inference pipelines.
● Test and monitor ML models.
Tech Stack
● Strong Python programming knowledge, covering data science plus advanced concepts like abstract classes, function overloading and overriding, inheritance, modular programming, and reusability.
● Working knowledge of Transformers, Hugging Face, etc.
● Working knowledge of implementing large language models for enterprise applications.
● Cloud experience using Azure services.
● REST APIs using the Flask or FastAPI frameworks.
● Good to have: PySpark (Spark DataFrame operations, SQL functions API, parallel execution).
● Good to have: unit testing using Python pytest or unittest.
Preferred Qualifications:
● Bachelor's/Master's/PhD degree in Math, Computer Science, Information Systems, Machine Learning, Statistics, Applied Mathematics, or a related technical field.
● Minimum of 6 years of experience in a related position as a senior data scientist building predictive analytics, computer vision, NLP, and GenAI solutions for various types of business problems.
● Advanced knowledge of statistical techniques, machine learning algorithms, and deep learning frameworks like TensorFlow, Keras, PyTorch, etc.
● Strong planning and project management skills.
Key Responsibilities
● Design and maintain high-performance backend applications and microservices
● Architect scalable, cloud-native systems and collaborate across engineering teams
● Write high-quality, performant code and conduct thorough code reviews
● Build and operate CI/CD pipelines and production systems
● Work with databases, containerization (Docker/Kubernetes), and cloud platforms
● Lead agile practices and continuously improve service reliability
Required Qualifications
● 4+ years of professional software development experience
● 2+ years contributing to service design and architecture
● Strong expertise in modern languages like Golang, Python
● Deep understanding of scalable, cloud-native architectures and microservices
● Production experience with distributed systems and database technologies
● Experience with Docker, software engineering best practices
● Bachelor's Degree in Computer Science or related technical field
Preferred Qualifications
● Experience with Golang, AWS, and Kubernetes
● CI/CD pipeline experience with GitHub Actions
● Start-up environment experience
We’re looking for a full-stack generalist who can handle the entire product lifecycle: frontend, backend, APIs, AI integrations, deployment, and everything in between. You should enjoy owning features from concept to production and working across the entire stack.
Job Summary: Lead/Senior ML Data Engineer (Cloud-Native, Healthcare AI)
Experience Required: 8+ Years
Work Mode: Remote
We are seeking a highly autonomous and experienced Lead/Senior ML Data Engineer to drive the critical data foundation for our AI analytics and Generative AI platforms. This is a specialized hybrid position, focusing on designing, building, and optimizing scalable data pipelines (ETL/ELT) that transform complex, messy clinical and healthcare data into high-quality, production-ready feature stores for Machine Learning and NLP models.
The successful candidate will own technical work streams end-to-end, ensuring data quality, governance, and low-latency delivery in a cloud-native environment.
Key Responsibilities & Focus Areas:
- ML Data Pipeline Ownership (70-80% Focus): Design and implement high-performance, scalable ETL/ELT pipelines using PySpark and a Lakehouse architecture (such as Databricks) to ingest, clean, and transform large-scale healthcare datasets.
- AI Data Preparation: Specialize in Feature Engineering and data preparation for complex ML workloads, including transforming unstructured clinical data (e.g., medical notes) for Generative AI and NLP model training.
- Cloud Architecture & Orchestration: Deploy, manage, and optimize data workflows using Airflow in a production AWS environment.
- Data Governance & Compliance: Implement pipelines with robust data masking, pseudonymization, and security controls to ensure continuous adherence to HIPAA and other relevant health data privacy regulations.
- Technical Leadership: Lead and define technical requirements from ambiguous business problems, acting as a key contributor to the data architecture strategy for the core AI platform.
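As one hedged illustration of the compliance-aware pipeline work above, the following PySpark sketch pseudonymizes a direct identifier and generalizes a quasi-identifier. Column names, paths, and the salt handling are placeholders, not a production design:

```python
# Hedged sketch: de-identifying patient records in a PySpark pipeline.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
records = spark.read.parquet("/lake/raw/claims")  # assumed input path

deidentified = (
    records
    # Replace the direct identifier with a salted one-way hash; in production
    # the salt would come from a secrets manager, never a literal.
    .withColumn(
        "patient_key",
        F.sha2(F.concat(F.lit("static-salt"), F.col("patient_id").cast("string")), 256),
    )
    .drop("patient_id", "patient_name")
    # Generalize date of birth to year only to reduce re-identification risk.
    .withColumn("birth_year", F.year("date_of_birth"))
    .drop("date_of_birth")
)

deidentified.write.mode("overwrite").parquet("/lake/curated/claims")
```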
Non-Negotiable Requirements (The "Must-Haves"):
- 5+ years of progressive experience as a Data Engineer, with a clear focus on ML/AI support.
- Deep expertise in PySpark/Python for distributed data processing.
- Mandatory proficiency with Lakehouse platforms (e.g., Databricks) in an AWS production environment.
- Proven experience handling complex clinical/healthcare data (EHR, Claims), including unstructured text.
- Hands-on experience with HIPAA/GDPR compliance in data pipeline design.
Job Summary
We are seeking an experienced Databricks Developer with strong skills in PySpark, SQL, and Python, and hands-on experience deploying data solutions on AWS (preferred) or Azure. The role involves designing, developing, and optimizing scalable data pipelines and analytics workflows on the Databricks platform.
Key Responsibilities
- Develop and optimize ETL/ELT pipelines using Databricks and PySpark.
- Build scalable data workflows on AWS (EC2, S3, Glue, Lambda, IAM) or Azure (ADF, ADLS, Synapse).
- Implement and manage Delta Lake (ACID, schema evolution, time travel).
- Write efficient, complex SQL for transformation and analytics.
- Build and support batch and streaming ingestion (Kafka, Kinesis, EventHub).
- Optimize Databricks clusters, jobs, notebooks, and PySpark performance.
- Collaborate with cross-functional teams to deliver reliable data solutions.
- Ensure data governance, security, and compliance.
- Troubleshoot pipelines and support CI/CD deployments.
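For reference, here is a minimal sketch of two Delta Lake features named above: an ACID upsert via MERGE, and a time-travel read. Table paths and the schema are illustrative, and the snippet assumes a Databricks/Delta runtime:

```python
# Hedged sketch: Delta Lake MERGE upsert plus time travel.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # assumes Delta is configured

updates = spark.createDataFrame([(1, "active")], ["id", "status"])

# ACID upsert: match on the key, update or insert atomically.
target = DeltaTable.forPath(spark, "/mnt/lake/customers")
(target.alias("t")
 .merge(updates.alias("u"), "t.id = u.id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())

# Time travel: read the table as of an earlier version for audits/backfills.
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/mnt/lake/customers")
```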
Required Skills & Experience
- 4–8 years in Data Engineering / Big Data development.
- Strong hands-on experience with Databricks (clusters, jobs, workflows).
- Advanced PySpark and strong Python skills.
- Expert-level SQL (complex queries, window functions).
- Practical experience with AWS (preferred) or Azure cloud services.
- Experience with Delta Lake, Parquet, and data lake architectures.
- Familiarity with CI/CD tools (GitHub Actions, Azure DevOps, Jenkins).
- Good understanding of data modeling, optimization, and distributed systems.
Summary:
We are seeking a highly skilled Python Backend Developer with proven expertise in FastAPI to join our team as a full-time contractor for 12 months. The ideal candidate will have 5+ years of experience in backend development, a strong understanding of API design, and the ability to deliver scalable, secure solutions. Knowledge of front-end technologies is an added advantage. Immediate joiners are preferred. This role requires full-time commitment—please apply only if you are not engaged in other projects.
Job Type:
Full-Time Contractor (12 months)
Location:
Remote / On-site (Jaipur preferred, as per project needs)
Experience:
5+ years in backend development
Key Responsibilities:
- Design, develop, and maintain robust backend services using Python and FastAPI.
- Implement and manage Prisma ORM for database operations.
- Build scalable APIs and integrate with SQL databases and third-party services.
- Deploy and manage backend services using Azure Function Apps and Microsoft Azure Cloud.
- Collaborate with front-end developers and other team members to deliver high-quality web applications.
- Ensure application performance, security, and reliability.
- Participate in code reviews, testing, and deployment processes.
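As a minimal illustration of the FastAPI work described above (not the project's actual API), a self-contained service might look like this; the routes and models are hypothetical, and it assumes Pydantic v2:

```python
# Hedged sketch: a tiny FastAPI service with an in-memory store.
# Run with: uvicorn main:app --reload
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

ITEMS: dict[int, Item] = {}  # stand-in for a real database

@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item) -> dict:
    ITEMS[item_id] = item
    return {"id": item_id, **item.model_dump()}

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    return ITEMS[item_id]
```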
Required Skills:
- Expertise in Python backend development with strong experience in FastAPI.
- Solid understanding of RESTful API design and implementation.
- Proficiency in SQL databases and ORM tools (preferably Prisma).
- Hands-on experience with Microsoft Azure Cloud and Azure Function Apps.
- Familiarity with CI/CD pipelines and containerization (Docker).
- Knowledge of cloud architecture best practices.
Added Advantage:
- Front-end development knowledge (React, Angular, or similar frameworks).
- Exposure to AWS/GCP cloud platforms.
- Experience with NoSQL databases.
Eligibility:
- Minimum 5 years of professional experience in backend development.
- Available for full-time engagement.
- We require dedicated availability, so please apply only if you are not currently engaged in other projects.
About Grey chain (https://greychaindesign.com/)
A Generative AI-as-a-service, Mobile & Digital Transformation firm helping organizations reimagine user experiences with Disruptive & Transformational thinking and partnership.
Trusted by: UNICEF, BOSE, KFINTECH, WHO and many Fortune 500 Companies
Home of over 110 Product Engineering nerds building the next generation of Digital Products.
Primary Industries: Banking & Financial Services, Non-Profits, Retail & eCommerce, Consumer Goods, and Consulting.
Location: Remote
Job Summary:
We're seeking an experienced QA Automation Engineer to join our team. The ideal candidate will have strong technical skills, excellent communication, and a proactive approach to issue resolution. The QA Automation Engineer will design, develop, and maintain automated testing solutions to ensure the quality and reliability of our software applications.
Key Responsibilities:
- Design, develop, and maintain automated testing frameworks using Python (Pytest)
- Create and execute automated test scripts to ensure thorough coverage of application functionality
- Collaborate with cross-functional teams to identify and prioritize testing needs
- Proactively identify and report issues, risks, and opportunities for process improvement
- Work independently with minimal supervision, taking ownership of tasks and deliverables
- Develop and maintain pipelines, integrating automated testing into CI/CD workflows
- Apply OOPS concepts, code reusability principles, and fuzzy logic to write efficient, modular code
- Analyze application flows and functionalities to identify potential issues and improve test coverage
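As a small, hedged illustration of the Pytest style this framework work involves, here is a reusable fixture plus a parametrized test; the client is hypothetical and stands in for the application under test:

```python
# Hedged sketch: a Pytest fixture and a parametrized test case.
import pytest

@pytest.fixture
def api_client():
    # Hypothetical client; a real suite would wrap the app under test here.
    class Client:
        def add(self, a, b):
            return a + b
    return Client()

@pytest.mark.parametrize("a,b,expected", [(1, 2, 3), (0, 0, 0), (-1, 1, 0)])
def test_add(api_client, a, b, expected):
    assert api_client.add(a, b) == expected
```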
Requirements:
- 3-4 years of experience in automation testing
- Proficient in Python (2-3 years) with expertise in Pytest
- Strong understanding of pipelines and ability to make required changes or learn quickly
- Excellent communication and collaboration skills
- Ability to work independently with minimal supervision
- Proactive approach to issue identification and risk management
- Strong grasp of OOPS concepts, code reusability, and fuzzy logic
Inflection.io is a venture-backed B2B marketing automation company, enabling businesses to communicate with their customers and prospects from one platform. We're used by leading SaaS companies like Sauce Labs, Sigma Computing, BILL, Mural, and Elastic, many of which pay more than $100K/yr (about ₹1 crore).
And... it's working! We have world-class stats: our largest deal is over ₹3 crore, we have a 5-star rating on G2 and over 100% NRR, and we constantly break sales and customer records. We've raised $14M in total since 2021, with $7.6M of fresh funding in 2024, giving us many years of runway.
However, we’re still in startup mode with approximately 30 employees and looking for the next SDE3 to help propel Inflection forward. Do you want to join a fast growing startup that is aiming to build a very large company?
Key Responsibilities:
- Lead the design, development, and deployment of complex software systems and applications.
- Collaborate with engineers and product managers to define and implement innovative solutions
- Provide technical leadership and mentorship to junior engineers, promoting best practices and fostering a culture of continuous improvement.
- Write clean, maintainable and efficient code, ensuring high performance and scalability of the software.
- Conduct code reviews and provide constructive feedback to ensure code quality and adherence to coding standards.
- Troubleshoot and resolve complex technical issues, optimizing system performance and reliability.
- Stay updated with the latest industry trends and technologies, evaluating their potential for adoption in our projects.
- Participate in the full software development lifecycle, from requirements gathering to deployment and monitoring.
Qualifications:
- 5+ years of professional software development experience, with a strong focus on backend development.
- Proficiency in one or more programming languages such as Java, Python, Golang or C#
- Strong understanding of database systems, both relational (e.g., MySQL, PostgreSQL) and NoSQL (e.g., MongoDB, Cassandra).
- Hands-on experience with message brokers such as Kafka, RabbitMQ, or Amazon SQS.
- Experience with cloud platforms (AWS or Azure or Google Cloud) and containerization technologies (Docker, Kubernetes).
- Proven track record of designing and implementing scalable, high-performance systems.
- Excellent problem-solving skills and the ability to think critically and creatively.
- Strong communication and collaboration skills, with the ability to work effectively in a fast-paced, team-oriented environment.
We're at the forefront of creating advanced AI systems, from fully autonomous agents that provide intelligent customer interaction to data analysis tools that offer insightful business solutions. We are seeking enthusiastic interns who are passionate about AI and ready to tackle real-world problems using the latest technologies.
Duration: 6 months
Perks:
- Hands-on experience with real AI projects.
- Mentoring from industry experts.
- A collaborative, innovative and flexible work environment
After completion of the internship period, there is a chance of a full-time opportunity as an AI/ML Engineer (up to ₹12 LPA).
Compensation:
- Joining Bonus: A one-time bonus of INR 2,500 will be awarded upon joining.
- Stipend: Base stipend is INR 8,000, which can increase up to INR 20,000 depending on performance.
Key Responsibilities
- Work with Python, LLMs, deep learning, NLP, and related technologies.
- Utilize GitHub for version control, including pushing and pulling code updates.
- Work with Hugging Face and OpenAI platforms for deploying models and exploring open-source AI models.
- Engage in prompt engineering and the fine-tuning process of AI models.
Requirements
- Proficiency in Python programming.
- Experience with GitHub and version control workflows.
- Familiarity with AI platforms such as Hugging Face and OpenAI.
- Understanding of prompt engineering and model fine-tuning.
- Excellent problem-solving abilities and a keen interest in AI technology.
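As a taste of the Hugging Face tooling mentioned above, here is a minimal, hedged sketch using the transformers pipeline API; the task is illustrative, and the default model is whatever the library ships for it:

```python
# Hedged sketch: a one-call Hugging Face pipeline for sentiment analysis.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model
result = classifier("The quarterly results exceeded expectations.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```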
To apply, click the link below and submit the assignment.
Salary: ₹3.5 LPA (based on performance)
Experience: 1–3 years (female candidates only)
We are looking for a Technical Trainer skilled in HTML, Java, Python, and AI to conduct technical training sessions. The trainer will create learning materials, deliver sessions, assess student performance, and support learners throughout the training. Strong communication skills and the ability to explain technical concepts clearly are essential.
About Us:
MyOperator and Heyo are India’s leading conversational platforms, empowering 40,000+ businesses with Call and WhatsApp-based engagement. We’re a product-led SaaS company scaling rapidly, and we’re looking for a skilled Software Developer to help build the next generation of scalable backend systems.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI
- Design and maintain a scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
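As one hedged sketch of the LLM integration work above, a thin wrapper around the OpenAI Python client might look like the following; the model name and prompt are illustrative assumptions, and in practice LangChain would add orchestration on top of a call like this:

```python
# Hedged sketch: a minimal LLM call behind an AI-powered feature.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_call(transcript: str) -> str:
    """Ask an LLM for a one-line summary of a customer call transcript."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Summarize the call in one sentence."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```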
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT and Gemini)
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one
About the Company
Hypersonix.ai is disrupting the e-commerce space with AI, ML and advanced decision capabilities to drive real-time business insights. Hypersonix.ai has been built ground up with new age technology to simplify the consumption of data for our customers in various industry verticals. Hypersonix.ai is seeking a well-rounded, hands-on product leader to help lead product management of key capabilities and features.
About the Role
We are looking for talented and driven Data Engineers at various levels to work with customers to build the data warehouse, analytical dashboards and ML capabilities as per customer needs.
Roles and Responsibilities
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional and non-functional business requirements; write complex queries in an optimized way
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies
- Run ad-hoc analysis utilizing the data pipeline to provide actionable insights
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
- Work with analytics and data scientist team members and assist them in building and optimizing our product into an innovative industry leader
Requirements
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong analytic skills related to working with unstructured datasets
- Build processes supporting data transformation, data structures, metadata, dependency and workload management
- A successful history of manipulating, processing and extracting value from large disconnected datasets
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
- Experience supporting and working with cross-functional teams in a dynamic environment
- We are looking for a candidate with 4+ years of experience in a Data Engineer role who holds a graduate degree in Computer Science or Information Technology, or has completed an MCA.
About Role
We are looking for a highly driven Full Stack Developer who can build scalable, high-performance applications across both frontend and backend. You will be working closely with our engineering team to develop seamless user experiences, robust APIs, and production-ready systems. This role is perfect for someone who wants to work in a fast-growing AI automation company, take ownership of end-to-end development, and contribute to products used by enterprises, agencies, and SMBs globally.
Key Responsibilities
- Develop responsive and scalable frontend applications using React Native and Next.js.
- Build and maintain backend services using Python and Node.js.
- Develop structured, well-documented REST APIs.
- Work with databases such as MongoDB and PostgreSQL for efficient data storage and retrieval.
- Implement clean authentication workflows (JWT preferred).
- Collaborate with UI/UX and product teams to deliver intuitive user experiences.
- Maintain high code quality through modular development, linting, and optimized folder structure.
- Debug, optimize, and enhance existing features and systems.
- Participate in code reviews and ensure best practices.
- Deploy, test, and monitor applications for performance and reliability.
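As a minimal, hedged sketch of the JWT authentication flow mentioned above, using the PyJWT library; the secret handling and claims are placeholders, not this product's auth design:

```python
# Hedged sketch: issuing and verifying a short-lived JWT with PyJWT.
import datetime
import jwt  # pip install PyJWT

SECRET = "change-me"  # in production, load from a secrets manager

def issue_token(user_id: str) -> str:
    payload = {
        "sub": user_id,
        "exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=1),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> str:
    # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on bad tokens.
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    return claims["sub"]
```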
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related discipline (or equivalent experience).
- Proven experience as a Full Stack Developer with hands-on work in React Native and Next.js.
- Strong backend experience with Python (FastAPI preferred) and Node.js (Express.js preferred).
- Experience working with REST APIs, MongoDB, and PostgreSQL.
- Strong understanding of authentication flows (JWT, OAuth, or similar).
- Ability to write clean, maintainable, and well-documented code.
- Experience with Git/GitHub workflows.
Perks and Benefits
- Opportunity to work at a fast-scaling AI-driven product company.
- Work on advanced growth automation and CRM technologies.
- High ownership and autonomy in product development.
- Flexible remote work for the first 6 months.
- Skill development through real-world, high-impact projects.
- Collaborative culture with mentorship and growth opportunities.
Position Overview: The Lead Software Architect - Python & Data Engineering is a senior technical leadership role responsible for designing and owning end-to-end architecture for data-intensive, AI/ML, and analytics platforms, while mentoring developers and ensuring technical excellence across the organization.
Key Responsibilities:
- Design end-to-end software architecture for data-intensive applications, AI/ML pipelines, and analytics platforms
- Evaluate trade-offs between competing technical approaches
- Define data models, API approach, and integration patterns across systems
- Create technical specifications and architecture documentation
- Lead by example through production-grade Python code and mentor developers on engineering fundamentals
- Conduct design and code reviews focused on architectural soundness
- Establish engineering standards, coding practices, and design patterns for the team
- Translate business requirements into technical architecture
- Collaborate with data scientists, analysts, and other teams to design integrated solutions
- Whiteboard and defend system design and architectural choices
- Take responsibility for system performance, reliability, and maintainability
- Identify and resolve architectural bottlenecks proactively
Required Skills:
- 8+ years of experience in software architecture and development
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
- Strong foundations in data structures, algorithms, and computational complexity
- Experience in system design for scale, including caching strategies, load balancing, and asynchronous processing
- 6+ years of Python development experience
- Deep knowledge of Django, Flask, or FastAPI
- Expert understanding of Python internals including GIL and memory management
- Experience with RESTful API design and event-driven architectures (Kafka, RabbitMQ)
- Proficiency in data processing frameworks such as Pandas, Apache Spark, and Airflow
- Strong SQL optimization and database design experience (PostgreSQL, MySQL, MongoDB)
- Experience with AWS, GCP, or Azure cloud platforms
- Knowledge of containerization (Docker) and orchestration (Kubernetes)
- Hands-on experience designing CI/CD pipelines
Preferred (Bonus) Skills:
- Experience deploying ML models to production (MLOps, model serving, monitoring)
- Understanding of ML system design including feature stores and model versioning
- Familiarity with ML frameworks such as scikit-learn, TensorFlow, and PyTorch
- Open-source contributions or technical blogging demonstrating architectural depth
- Experience with modern front-end frameworks for full-stack perspective
Why Middleware? 💡
Sick of the endless waiting?
Waiting on code reviews, QA feedback, or that "quick call"?
At Middleware, we’re all about freeing up engineers like you to do what you love—build.
We’ve created a cockpit that gives engineering leaders the insights they need to unblock teams, cut bottlenecks, and let engineers focus on impact.
What You’ll Do 🎨
- Own the Product: Shape a product that engineers rely on daily.
- Build Stunning UIs: Craft data-rich, intuitive designs that solve real problems.
- Shape Middleware’s Architecture: Make our systems robust, seamless, and introduce mechanisms that allow high visibility into our automated pipelines.
What We’re Looking For 🔍
- React + Typescript: You know your way around these tools and have launched awesome projects.
- Python + Postgres: You've built complete backend systems, not just basic CRUD apps.
- Passionate Builder: Hungry to grow, build, and make an impact.
Bonus Points ⭐️
- Eye for Design: You have a sense for clean, user-friendly visuals.
- Understanding of distributed systems: Not everything runs on a single machine, and you know how to make things work across a lot of those.
- DSA Know-how: Familiarity with data structures (graphs, linked lists, etc.) because our product (even frontend) actually uses DSA concepts.
Why You'll Love Working with Us ❤️
We’re engineers at heart.
Middleware was founded by ex-Uber and Maersk engineers who know what it’s like to be stuck in meeting loops and endless waiting. If you're here to build, to make things happen, and to change the game for engineering teams everywhere, let’s chat!
Ready to jump in? Explore Middleware (https://www.middlewarehq.com/) or check out our demo (https://demo.middlewarehq.com/).
Role & Responsibilities:
We are seeking a Software Developer with 5-10 years' experience and strong foundations in Python, databases, and AI technologies.
The ideal candidate will support the development of AI-powered solutions, focusing on LLM integration, prompt engineering, and database-driven workflows.
This is a hands-on role with opportunities to learn and grow into advanced AI engineering responsibilities.
Key Responsibilities :
• Develop, test, and maintain Python-based applications and APIs.
• Design and optimize prompts for Large Language Models (LLMs) to improve accuracy and performance.
• Work with JSON-based data structures for request/response handling.
• Integrate and manage PostgreSQL (pgSQL) databases, including writing queries and handling data pipelines.
• Collaborate with the product and AI teams to implement new features.
• Debug, troubleshoot, and optimize performance of applications and workflows.
• Stay updated on advancements in LLMs, AI frameworks, and generative AI tools.
Role & Responsibilities:
We are seeking a Software Developer with 2-10 years' experience and strong foundations in Python, databases, and AI technologies. The ideal candidate will support the development of AI-powered solutions, focusing on LLM integration, prompt engineering, and database-driven workflows. This is a hands-on role with opportunities to learn and grow into advanced AI engineering responsibilities.
Key Responsibilities
• Develop, test, and maintain Python-based applications and APIs.
• Design and optimize prompts for Large Language Models (LLMs) to improve accuracy and performance.
• Work with JSON-based data structures for request/response handling.
• Integrate and manage PostgreSQL (pgSQL) databases, including writing queries and handling data pipelines.
• Collaborate with the product and AI teams to implement new features.
• Debug, troubleshoot, and optimize performance of applications and workflows.
• Stay updated on advancements in LLMs, AI frameworks, and generative AI tools.
Required Skills & Qualifications
• Strong knowledge of Python (scripting, APIs, data handling).
• Basic understanding of Large Language Models (LLMs) and prompt engineering techniques.
• Experience with JSON data parsing and transformations.
• Familiarity with PostgreSQL or other relational databases.
• Ability to write clean, maintainable, and well-documented code.
• Strong problem-solving skills and eagerness to learn.
• Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
Nice-to-Have (Preferred)
• Exposure to AI/ML frameworks (e.g., LangChain, Hugging Face, OpenAI APIs).
• Experience working in startups or fast-paced environments.
• Familiarity with version control (Git/GitHub) and cloud platforms (AWS, GCP, or Azure).
What We Offer
• The opportunity to define the future of GovTech through AI-powered solutions.
• A strategic leadership role in a fast-scaling startup with direct impact on product direction and market success.
• Collaborative and innovative environment with cross-functional exposure.
• Growth opportunities backed by a strong leadership team.
• Remote flexibility and work-life balance.
Role: Azure AI Tech Lead
Experience: 3.5–7 Years
Location: Remote / Noida (NCR)
Notice Period: Immediate to 15 days
Mandatory Skills: Python, Azure AI/ML, PyTorch, TensorFlow, JAX, HuggingFace, LangChain, Kubeflow, MLflow, LLMs, RAG, MLOps, Docker, Kubernetes, Generative AI, Model Deployment, Prometheus, Grafana
JOB DESCRIPTION
As the Azure AI Tech Lead, you will serve as the principal technical expert leading the design, development, and deployment of advanced AI and ML solutions on the Microsoft Azure platform. You will guide a team of engineers, establish robust architectures, and drive end-to-end implementation of AI projects—transforming proof-of-concepts into scalable, production-ready systems.
Key Responsibilities:
- Lead architectural design and development of AI/ML solutions using Azure AI, Azure OpenAI, and Cognitive Services.
- Develop and deploy scalable AI systems with best practices in MLOps across the full model lifecycle (a tracking sketch follows this list).
- Mentor and upskill AI/ML engineers through technical reviews, training, and guidance.
- Implement advanced generative AI techniques including LLM fine-tuning, RAG systems, and diffusion models.
- Collaborate cross-functionally to translate business goals into innovative AI solutions.
- Enforce governance, responsible AI practices, and performance optimization standards.
- Stay ahead of trends in LLMs, agentic AI, and applied research to shape next-gen solutions.
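As a small, hedged illustration of the MLOps lifecycle work referenced above: a minimal experiment-tracking sketch with MLflow, which appears in the mandatory skills. The model, parameter, and metric names are illustrative only.

```python
# Minimal MLflow tracking sketch: log a parameter, a metric, and the model
# artifact for one training run. The Ridge model and MAE metric are toy
# examples, not this team's actual pipeline.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

X, y = make_regression(n_samples=200, n_features=5, random_state=0)

with mlflow.start_run(run_name="ridge-baseline"):
    model = Ridge(alpha=1.0).fit(X, y)
    mlflow.log_param("alpha", 1.0)
    mlflow.log_metric("train_mae", mean_absolute_error(y, model.predict(X)))
    mlflow.sklearn.log_model(model, "model")  # stores the artifact with the run
```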
Qualifications:
- Bachelor’s or Master’s in Computer Science or related field.
- 3.5–7 years of experience delivering end-to-end AI/ML solutions.
- Strong expertise in Azure AI ecosystem and production-grade model deployment.
- Deep technical understanding of ML, DL, Generative AI, and MLOps pipelines.
- Excellent analytical and problem-solving abilities; applied research or open-source contributions preferred.
About Ven Analytics
At Ven Analytics, we don’t just crunch numbers — we decode them to uncover insights that drive real business impact. We’re a data-driven analytics company that partners with high-growth startups and enterprises to build powerful data products, business intelligence systems, and scalable reporting solutions. With a focus on innovation, collaboration, and continuous learning, we empower our teams to solve real-world business problems using the power of data.
Role Overview
We’re looking for a Power BI Data Analyst who is not just proficient in tools but passionate about building insightful, scalable, and high-performing dashboards. The ideal candidate should have strong fundamentals in data modeling, a flair for storytelling through data, and the technical skills to implement robust data solutions using Power BI, Python, and SQL.
Key Responsibilities
- Technical Expertise: Develop scalable, accurate, and maintainable data models using Power BI, with a clear understanding of Data Modeling, DAX, Power Query, and visualization principles.
- Programming Proficiency: Use SQL and Python for complex data manipulation, automation, and analysis.
- Business Problem Translation: Collaborate with stakeholders to convert business problems into structured data-centric solutions considering performance, scalability, and commercial goals.
- Hypothesis Development: Break down complex use-cases into testable hypotheses and define relevant datasets required for evaluation.
- Solution Design: Create wireframes, proof-of-concepts (POC), and final dashboards in line with business requirements.
- Dashboard Quality: Ensure dashboards meet high standards of data accuracy, visual clarity, performance, and support SLAs.
- Performance Optimization: Continuously enhance user experience by improving performance, maintainability, and scalability of Power BI solutions.
- Troubleshooting & Support: Quick resolution of access, latency, and data issues as per defined SLAs.
- Power BI Development: Use Power BI Desktop for report building and Power BI Service for distribution.
- Backend development: Develop optimized SQL queries that are easy to consume, maintain and debug.
- Version Control: Maintain strict control of versions by tracking CRs and bugfixes, and keep Prod and Dev dashboards properly maintained.
- Client Servicing: Engage with clients to understand their data needs, gather requirements, present insights, and ensure timely, clear communication throughout project cycles.
- Team Management: Lead and mentor a small team by assigning tasks, reviewing work quality, guiding technical problem-solving, and ensuring timely delivery of dashboards and reports.
Must-Have Skills
- Strong experience building robust data models in Power BI
- Hands-on expertise with DAX (complex measures and calculated columns)
- Proficiency in M Language (Power Query) beyond drag-and-drop UI
- Clear understanding of data visualization best practices (less fluff, more insight)
- Solid grasp of SQL and Python for data processing
- Strong analytical thinking and ability to craft compelling data stories
- Client Servicing Background.
Good-to-Have (Bonus Points)
- Experience using DAX Studio and Tabular Editor
- Prior work in a high-volume data processing production environment
- Exposure to modern CI/CD practices or version control with BI tools
Why Join Ven Analytics?
- Be part of a fast-growing startup that puts data at the heart of every decision.
- Opportunity to work on high-impact, real-world business challenges.
- Collaborative, transparent, and learning-oriented work environment.
- Flexible work culture and focus on career development.
Candidates must know the M365 collaboration environment: SharePoint Online, MS Teams, Exchange Online, Entra, and Purview. We need a developer with a strong understanding of data structures and problem-solving, plus SQL, PowerShell, MS Teams app development, Python, Visual Basic, C#, JavaScript, Java, HTML, PHP, and C.
A strong understanding of the development lifecycle is required, along with debugging skills, time management, business acumen, a positive attitude, and openness to continual growth.
The ability to code appropriate solutions will be tested in the interview.
- Knowledge of a wide variety of Generative AI models
- Conceptual understanding of how large language models work
- Proficiency in coding languages for data manipulation (e.g., SQL) and machine learning & AI development (e.g., Python)
- Experience with dashboarding tools such as Power BI and Tableau (beneficial but not essential)
About Forbes Advisor
Forbes Digital Marketing Inc. is a high-growth digital media and technology company dedicated to helping consumers make confident, informed decisions about their money, health, careers, and everyday life.
We do this by combining data-driven content, rigorous product comparisons, and user-first design — all built on top of a modern, scalable platform. Our global teams bring deep expertise across journalism, product, performance marketing, data, and analytics.
The Role
We’re hiring a Data Scientist to help us unlock growth through advanced analytics and machine learning. This role sits at the intersection of marketing performance, product optimization, and decision science.
You’ll partner closely with Paid Media, Product, and Engineering to build models, generate insight, and influence how we acquire, retain, and monetize users. From campaign ROI to user segmentation and funnel optimization, your work will directly shape how we grow. This role is ideal for someone who thrives on business impact, communicates clearly, and wants to build reusable, production-ready insights, not just run one-off analyses.
What You’ll Do
Marketing & Revenue Modelling
• Own end-to-end modelling of LTV, user segmentation, retention, and marketing efficiency to inform media optimization and value attribution.
• Collaborate with Paid Media and RevOps to optimize SEM performance, predict high-value cohorts, and power strategic bidding and targeting.
Product & Growth Analytics
• Work closely with Product Insights and General Managers (GMs) to define core metrics, KPIs, and success frameworks for new launches and features.
• Conduct deep-dive analysis of user behaviour, funnel performance, and product engagement to uncover actionable insights.
• Monitor and explain changes in key product metrics, identifying root causes and business impact.
• Work closely with Data Engineering to design and maintain scalable data pipelines that support machine learning workflows, model retraining, and real-time inference.
Predictive Modelling & Machine Learning
• Build predictive models for conversion, churn, revenue, and engagement using regression, classification, or time-series approaches.
• Identify opportunities for prescriptive analytics and automation in key product and marketing workflows.
• Support development of reusable ML pipelines for production-scale use cases in product recommendation, lead scoring, and SEM planning.
Collaboration & Communication
• Present insights and recommendations to a variety of stakeholders — from ICs to executives — in a clear and compelling manner.
• Translate business needs into data problems, and complex findings into strategic action plans.
• Work cross-functionally with Engineering, Product, BI, and Marketing to deliver and deploy your work.
What You’ll Bring
Minimum Qualifications
• Bachelor’s degree in a quantitative field (Mathematics, Statistics, CS, Engineering, etc.).
• 4+ years in data science, growth analytics, or decision science roles.
• Strong SQL and Python skills (Pandas, Scikit-learn, NumPy).
• Hands-on experience with Tableau, Looker, or similar BI tools.
• Familiarity with LTV modelling, retention curves, cohort analysis, and media attribution.
• Experience with GA4, Google Ads, Meta, or other performance marketing platforms.
• Clear communication skills and a track record of turning data into decisions.
Nice to Have
• Experience with BigQuery and Google Cloud Platform (or equivalent).
• Familiarity with affiliate or lead-gen business models.
• Exposure to NLP, LLMs, embeddings, or agent-based analytics.
• Ability to contribute to model deployment workflows (e.g., using Vertex AI, Airflow, or Composer).
Why Join Us?
• Remote-first and flexible — work from anywhere in India with global exposure.
• Monthly long weekends (every third Friday off).
• Generous wellness stipends and parental leave.
• A collaborative team where your voice is heard and your work drives real impact.
• Opportunity to help shape the future of data science at one of the world’s most trusted brands.
We are looking for an AI/ML Engineer with 4–5 years of experience who can design, develop, and deploy scalable machine learning models and AI-driven solutions. The ideal candidate should have strong expertise in data processing, model building, and production deployment, along with solid programming and problem-solving skills.
Key Responsibilities
- Develop and deploy machine learning, deep learning, and NLP models for various business use cases.
- Build end-to-end ML pipelines including data preprocessing, feature engineering, training, evaluation, and production deployment.
- Optimize model performance and ensure scalability in production environments.
- Work closely with data scientists, product teams, and engineers to translate business requirements into AI solutions.
- Conduct data analysis to identify trends and insights.
- Implement MLOps practices for versioning, monitoring, and automating ML workflows.
- Research and evaluate new AI/ML techniques, tools, and frameworks.
- Document system architecture, model design, and development processes.
Required Skills
- Strong programming skills in Python (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch, Keras).
- Hands-on experience in building and deploying ML/DL models in production.
- Good understanding of machine learning algorithms, neural networks, NLP, and computer vision.
- Experience with REST APIs, Docker, Kubernetes, and cloud platforms (AWS/GCP/Azure).
- Working knowledge of MLOps tools such as MLflow, Airflow, DVC, or Kubeflow.
- Familiarity with data pipelines and big data technologies (Spark, Hadoop) is a plus.
- Strong analytical skills and ability to work with large datasets.
- Excellent communication and problem-solving abilities.
Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Science, AI, Machine Learning, or related fields.
- Experience in deploying models using cloud services (AWS Sagemaker, GCP Vertex AI, etc.).
- Experience in LLM fine-tuning or generative AI is an added advantage.
Position: QA Engineer – Machine Learning Systems (5 - 7 years)
Location: Remote (Company in Mumbai)
Company: Big Rattle Technologies Private Limited
Immediate Joiners only.
Summary:
The QA Engineer will own quality assurance across the ML lifecycle—from raw data validation through feature engineering checks, model training/evaluation verification, batch prediction/optimization validation, and end-to-end (E2E) workflow testing. The role is hands-on with Python automation, data profiling, and pipeline test harnesses in Azure ML and Azure DevOps. Success means provably correct data, models, and outputs at production scale and cadence.
Key Responsibilities:
Test Strategy & Governance
- Define an ML-specific Test Strategy covering data quality KPIs, feature consistency checks, model acceptance gates (metrics + guardrails), and E2E run acceptance (timeliness, completeness, integrity).
- Establish versioned test datasets & golden baselines for repeatable regression of features, models, and optimizers.
Data Quality & Transformation
- Validate raw data extracts and landed data lake data: schema/contract checks, null/outlier thresholds, time-window completeness, duplicate detection, site/material coverage.
- Validate transformed/feature datasets: deterministic feature generation, leakage detection, drift vs. historical distributions, feature parity across runs (hash or statistical similarity tests).
- Implement automated data quality checks (e.g., Great Expectations/pytest + Pandas/SQL) executed in CI and AML pipelines; a minimal sketch follows this list.
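A minimal sketch of what such automated checks might look like with pytest + pandas; the column names, stand-in data, and thresholds below are hypothetical, not the project's real contract.

```python
# Hedged sketch of automated data-quality checks (schema contract, null
# thresholds, duplicate detection) run under pytest.
import pandas as pd
import pytest

@pytest.fixture
def raw_extract() -> pd.DataFrame:
    # Stand-in for the real landed extract, e.g. pd.read_parquet("landed/...")
    return pd.DataFrame({
        "site_id": [1, 1, 2],
        "material": ["A", "B", "A"],
        "price": [9.5, 12.0, 9.7],
        "ts": pd.to_datetime(["2024-01-01", "2024-01-01", "2024-01-01"]),
    })

def test_schema_contract(raw_extract):
    # Required columns must be present (hypothetical contract).
    assert {"site_id", "material", "price", "ts"} <= set(raw_extract.columns)

def test_null_threshold(raw_extract):
    # Allow at most 1% nulls in price (invented threshold).
    assert raw_extract["price"].isna().mean() < 0.01

def test_no_duplicate_keys(raw_extract):
    assert not raw_extract.duplicated(["site_id", "material", "ts"]).any()
```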
Model Training & Evaluation
- Verify training inputs (splits, windowing, target leakage prevention) and hyperparameter configs per site/cluster.
- Automate metric verification (e.g., MAPE/MAE/RMSE, uplift vs. last model, stability tests) with acceptance thresholds and champion/challenger logic (see the sketch after this list).
- Validate feature importance stability and sensitivity/elasticity sanity checks (price/volume monotonicity where applicable).
- Gate model registration/promotion in AML based on signed test artifacts and reproducible metrics.
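A hedged sketch of an automated metric gate with champion/challenger logic, using scikit-learn's metric functions named above; the guardrail thresholds are invented for illustration.

```python
# Accept a challenger model only if it beats the champion's MAPE and stays
# under an absolute MAE guardrail. Thresholds are placeholder values.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_absolute_percentage_error

def passes_gates(y_true: np.ndarray, y_pred: np.ndarray,
                 champion_mape: float, max_mae: float = 5.0) -> bool:
    mape = mean_absolute_percentage_error(y_true, y_pred)
    mae = mean_absolute_error(y_true, y_pred)
    return mape < champion_mape and mae < max_mae

y_true = np.array([100.0, 110.0, 95.0])
y_pred = np.array([102.0, 108.0, 97.0])
print(passes_gates(y_true, y_pred, champion_mape=0.05))  # True
```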
Predictions, Optimization & Guardrails
- Validate batch predictions: result shapes, coverage, latency, and failure handling.
- Test model optimization outputs and enforced guardrails: detect violations and prove idempotent writes to DB.
- Verify API push to third party system (idempotency keys, retry/backoff, delivery receipts).
Pipelines & E2E
- Build pipeline test harnesses for AML pipelines (data-gen nightly, training weekly, prediction/optimization), including orchestrated synthetic runs and fault injection (missing slice, late competitor data, SB backlog).
- Run E2E tests from raw data store -> ADLS -> AML -> RDBMS -> APIM/Frontend, asserting freshness SLOs and audit event completeness (Event Hubs -> ADLS immutable).
Automation & Tooling
- Develop Python-based automated tests (pytest) for data checks, model metrics, and API contracts; integrate with Azure DevOps (pipelines, badges, gates).
- Implement data-driven test runners (parameterized by site/material/model-version) and store signed test artifacts alongside models in AML Registry.
- Create synthetic test data generators and golden fixtures to cover edge cases (price gaps, competitor shocks, cold starts).
Reporting & Quality Ops
- Publish weekly test reports and go/no-go recommendations for promotions; maintain a defect taxonomy (data vs. model vs. serving vs. optimization).
- Contribute to SLI/SLO dashboards (prediction timeliness, queue/DLQ, push success, data drift) used for release gates.
Required Skills (hands-on experience in the following):
- Python automation (pytest, pandas, NumPy), SQL (PostgreSQL/Snowflake), and CI/CD (Azure DevOps) for fully automated ML QA.
- Strong grasp of ML validation: leakage checks, proper splits, metric selection (MAE/MAPE/RMSE), drift detection, sensitivity/elasticity sanity checks.
- Experience testing AML pipelines (pipelines/jobs/components) and message-driven integrations (Service Bus/Event Hubs).
- API test skills (FastAPI/OpenAPI, contract tests, Postman/pytest-httpx) plus idempotency and retry patterns.
- Familiar with feature stores/feature engineering concepts and reproducibility.
- Solid understanding of observability (App Insights/Log Analytics) and auditability requirements.
Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.
- 5–7+ years in QA with 3+ years focused on ML/Data systems (data pipelines + model validation).
- Certification in Azure Data or ML Engineer Associate is a plus.
Why should you join Big Rattle?
Big Rattle Technologies specializes in AI/ ML Products and Solutions as well as Mobile and Web Application Development. Our clients include Fortune 500 companies. Over the past 13 years, we have delivered multiple projects for international and Indian clients from various industries like FMCG, Banking and Finance, Automobiles, Ecommerce, etc. We also specialise in Product Development for our clients.
Big Rattle Technologies Private Limited is ISO 27001:2022 certified and CyberGRX certified.
What We Offer:
- Opportunity to work on diverse projects for Fortune 500 clients.
- Competitive salary and performance-based growth.
- Dynamic, collaborative, and growth-oriented work environment.
- Direct impact on product quality and client satisfaction.
- 5-day hybrid work week.
- Certification reimbursement.
- Healthcare coverage.
How to Apply:
Interested candidates are invited to submit a resume detailing their experience. Please describe your work experience and the kinds of projects you have worked on, highlighting your contributions and accomplishments.
Job Title: Python Developer
Experience Level: 4+ years
Job Summary:
We are seeking a skilled Python Developer with strong experience in developing and maintaining APIs. Familiarity with 2D and 3D geometry concepts is a strong plus. The ideal candidate will be passionate about clean code, scalable systems, and solving complex geometric and computational problems.
Key Responsibilities:
· Design, develop, and maintain robust and scalable APIs using Python.
· Work with geometric data structures and algorithms (2D/3D).
· Collaborate with cross-functional teams including front-end developers, designers, and product managers.
· Optimize code for performance and scalability.
· Write unit and integration tests to ensure code quality.
· Participate in code reviews and contribute to best practices.
Required Skills:
· Strong proficiency in Python.
· Experience with RESTful API development (e.g., Flask, FastAPI, Django REST Framework).
· Good understanding of 2D/3D geometry, computational geometry, or CAD-related concepts.
· Familiarity with libraries such as NumPy, SciPy, Shapely, Open3D, or PyMesh (see the Shapely sketch after this list).
· Experience with version control systems (e.g., Git).
· Strong problem-solving and analytical skills.
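For flavor, a tiny example of the kind of 2D geometry work this role involves, using Shapely (named in the list above); the footprints are toy data.

```python
# Intersect two rectangular footprints and report overlap and union areas.
from shapely.geometry import Polygon

a = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])   # 4 x 3 rectangle
b = Polygon([(2, 1), (6, 1), (6, 4), (2, 4)])   # 4 x 3 rectangle, offset

overlap = a.intersection(b)
print(overlap.area)      # 4.0  (2 x 2 overlapping region)
print(a.union(b).area)   # 20.0 (12 + 12 - 4)
```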
Good to Have:
· Experience with 3D visualization tools or libraries (e.g., VTK, Blender API, Three.js via Python bindings).
· Knowledge of mathematical modeling or simulation.
· Exposure to cloud platforms (AWS, Azure, GCP).
· Familiarity with CI/CD pipelines.
Education:
· Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field.
About the Role
We’re looking for a Data Engineer who can turn messy, unstructured information into clean, usable insights. You’ll be building crawlers, integrating APIs, and setting up data flows that power our analytics and AI layers. If you love data plumbing as much as data puzzles — this role is for you.
Responsibilities
- Build and maintain Python-based data pipelines, crawlers, and integrations with 3rd-party APIs.
- Perform brute-force analytics and exploratory data work on crawled datasets to surface trends and anomalies.
- Develop and maintain ETL workflows — from raw ingestion to clean, structured outputs.
- Collaborate with product and ML teams to make data discoverable, queryable, and actionable.
- Optimize data collection for performance, reliability, and scalability.
Requirements
- Strong proficiency in Python and Jupyter notebooks.
- Experience building web crawlers/scrapers and integrating with REST/GraphQL APIs (a starter sketch follows this list).
- Solid understanding of data structures and algorithms (DSA).
- Comfort with quick, hands-on analytics — slicing and validating data directly.
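A starter sketch of a crawler of the kind described above, using requests + BeautifulSoup; the URL and CSS selector are placeholders for illustration.

```python
# Fetch a page and extract link titles. Real crawlers would add politeness
# (robots.txt, rate limits) and retries; this is a minimal illustration.
import requests
from bs4 import BeautifulSoup

def fetch_titles(url: str) -> list[str]:
    resp = requests.get(url, timeout=10, headers={"User-Agent": "demo-bot"})
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    # "h2 a" is a placeholder selector; adapt to the target site's markup.
    return [h.get_text(strip=True) for h in soup.select("h2 a")]

if __name__ == "__main__":
    for title in fetch_titles("https://example.com/articles"):  # placeholder URL
        print(title)
```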
Good to Have
- Experience with schema design and database modeling.
- Exposure to both SQL and NoSQL databases; familiarity with vector databases is a plus.
- Knowledge of data orchestration tools (Dagster preferred).
- Understanding of data lifecycle management — from raw to enriched layers.
Why Join Us
We’re not offering employment — we’re offering ownership.
If you’re here for a job, this isn’t your place.
We’re building the data spine of a new-age supply chain intelligence platform — and we need people who can crush constraints, move fast, and make impossible things work.
You’ll have room to think, build, break, and reinvent — not follow.
If you thrive in chaos and create clarity, you’ll fit right in.
Screening Challenge
Before we schedule a call, we have an exciting challenge for you. Please go through the link below and submit your solution to us with the Subject line: SUB: [Role] [Full Name]
About Synorus
Synorus is building a next-generation ecosystem of AI-first products. Our flagship legal-AI platform LexVault is redefining legal research, drafting, knowledge retrieval, and case intelligence using domain-tuned LLMs, private RAG pipelines, and secure reasoning systems.
If you are passionate about AI, legaltech, and training high-performance models — this internship will put you on the front line of innovation.
Role Overview
We are seeking passionate AI/LLM Engineering Interns who can:
- Fine-tune LLMs for legal domain use-cases
- Train and experiment with open-source foundation models
- Work with large datasets efficiently
- Build RAG pipelines and text-processing frameworks
- Run model training workflows on Google Colab / Kaggle / Cloud GPUs
This is a hands-on engineering and research internship — you will work directly with senior founders & technical leadership.
Key Responsibilities
- Fine-tune transformer-based models (Llama, Mistral, Gemma, etc.)
- Build and preprocess legal datasets at scale
- Develop efficient inference & training pipelines
- Evaluate models for accuracy, hallucinations, and trustworthiness
- Implement RAG architectures (vector DBs + embeddings)
- Work with GPU environments (Colab/Kaggle/Cloud)
- Contribute to model improvements, prompt engineering & safety tuning
Must-Have Skills
- Strong knowledge of Python & PyTorch
- Understanding of LLMs, Transformers, Tokenization
- Hands-on experience with HuggingFace Transformers
- Familiarity with LoRA/QLoRA, PEFT training (a configuration sketch follows this list)
- Data wrangling: Pandas, NumPy, tokenizers
- Ability to handle multi-GB datasets efficiently
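A hedged configuration sketch of LoRA fine-tuning setup with HuggingFace Transformers + PEFT, per the skills above; the base model and hyperparameters are illustrative defaults, not the team's actual setup.

```python
# Wrap a small causal LM with LoRA adapters so only a tiny fraction of
# parameters is trained. Model name and LoRA hyperparameters are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # small model for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=8,                     # adapter rank
    lora_alpha=16,           # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of total params
```

Training then proceeds with a standard Trainer loop over the prepared dataset; only the adapter weights need to be saved and shipped.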
Bonus Skills
(Not mandatory — but a strong plus)
- Experience with RAG / vector DBs (Chroma, Qdrant, LanceDB)
- Familiarity with vLLM, llama.cpp, GGUF
- Worked on summarization, Q&A or document-AI projects
- Knowledge of legal texts (Indian laws/case-law/statutes)
- Open-source contributions or research work
What You Will Gain
- Real-world training on LLM fine-tuning & legal AI
- Exposure to production-grade AI pipelines
- Direct mentorship from engineering leadership
- Research + industry project portfolio
- Letter of experience + potential full-time offer
Ideal Candidate
- You experiment with models on weekends
- You love pushing GPUs to their limits
- You prefer research + implementation over theory alone
- You want to build AI that matters — not just demos
Location: Remote
Stipend: ₹5K–₹10K
About the Role
We are looking for a passionate AI Engineer Intern (B.Tech, M.Tech / M.S. or equivalent) with strong foundations in Artificial Intelligence, Computer Vision, and Deep Learning to join our R&D team.
You will help us build and train realistic face-swap and deepfake video models, powering the next generation of AI-driven video synthesis technology.
This is a remote, individual-contributor role offering exposure to cutting-edge AI model development in a startup-like environment.
Key Responsibilities
- Research, implement, and fine-tune face-swap / deepfake architectures (e.g., FaceSwap, SimSwap, DeepFaceLab, LatentSync, Wav2Lip).
- Train and optimize models for realistic facial reenactment and temporal consistency.
- Work with GANs, VAEs, and diffusion models for video synthesis.
- Handle dataset creation, cleaning, and augmentation for face-video tasks.
- Collaborate with the AI core team to deploy trained models in production environments.
- Maintain clean, modular, and reproducible pipelines using Git and experiment-tracking tools.
Required Qualifications
- B.Tech, M.Tech / M.S. (or equivalent) in AI / ML / Computer Vision / Deep Learning.
- Certifications in AI or Deep Learning (DeepLearning.AI, NVIDIA DLI, Coursera, etc.).
- Proficiency in PyTorch or TensorFlow, OpenCV, FFmpeg.
- Understanding of CNNs, Autoencoders, GANs, Diffusion Models.
- Familiarity with datasets like CelebA, VoxCeleb, FFHQ, DFDC, etc.
- Good grasp of data preprocessing, model evaluation, and performance tuning.
Preferred Skills
- Prior hands-on experience with face-swap or lip-sync frameworks.
- Exposure to 3D morphable models, NeRF, motion transfer, or facial landmark tracking.
- Knowledge of multi-GPU training and model optimization.
- Familiarity with Rust / Python backend integration for inference pipelines.
What We Offer
- Work directly on production-grade AI video synthesis systems.
- Remote-first, flexible working hours.
- Mentorship from senior AI researchers and engineers.
- Opportunity to transition into a full-time role upon outstanding performance.
Location: Remote | Stipend: ₹10,000/month | Duration: 3–6 months
We are building an AI-powered chatbot platform and looking for an AI/ML Engineer with strong backend skills as our first technical hire. You will be responsible for developing the core chatbot engine using LLMs, creating backend APIs, and building scalable RAG pipelines.
You should be comfortable working independently, shipping fast, and turning ideas into real product features. This role is ideal for someone who loves building with modern AI tools and wants to be part of a fast-growing product from day one.
Responsibilities
• Build the core AI chatbot engine using LLMs (OpenAI, Claude, Gemini, Llama etc.)
• Develop backend services and APIs using Python (FastAPI/Flask)
• Create RAG pipelines using vector databases (Pinecone, FAISS, Chroma); a minimal retrieval sketch follows this list
• Implement embeddings, prompt flows, and conversation logic
• Integrate chatbot with web apps, WhatsApp, CRMs and 3rd-party APIs
• Ensure system reliability, performance, and scalability
• Work directly with the founder in shaping the product and roadmap
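A minimal retrieval sketch for the RAG work above, using FAISS (one of the vector stores named) with sentence-transformers embeddings; the corpus, model name, and k are toy values.

```python
# Embed a small corpus, index it in FAISS, and retrieve the passages most
# relevant to a query; the retrieved context would feed the LLM prompt.
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Refunds are processed in 5 days.",
    "Support is available 24/7 via chat.",
    "Premium plans include priority onboarding.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(embeddings.shape[1])  # cosine sim via inner product
index.add(embeddings)

query = encoder.encode(["How fast are refunds?"], normalize_embeddings=True)
_, ids = index.search(query, 2)
context = [docs[i] for i in ids[0]]  # passages to stuff into the prompt
print(context)
```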
Requirements
• Strong experience with LLMs & Generative AI
• Excellent Python skills with FastAPI/Flask
• Hands-on experience with LangChain or RAG architectures
• Vector database experience (Pinecone/FAISS/Chroma)
• Strong understanding of REST APIs and backend development
• Ability to work independently, experiment fast, and deliver clean code
Nice to Have
• Experience with cloud (AWS/GCP)
• Node.js knowledge
• LangGraph, LlamaIndex
• MLOps or deployment experience
Mission
Own architecture across web + backend, ship reliably, and establish patterns the team can scale on.
Responsibilities
- Lead system architecture for Next.js (web) and FastAPI (backend); own code quality, reviews, and release cadence.
- Build and maintain the web app (marketing, auth, dashboard) and a shared TS SDK (@revilo/contracts, @revilo/sdk).
- Integrate Stripe, Maps, analytics; enforce accessibility and performance baselines.
- Define CI/CD (GitHub Actions), containerization (Docker), env/promotions (staging → prod).
- Partner with Mobile and AI engineers on API/tool schemas and developer experience.
Requirements
- 6–10+ years; expert TypeScript, strong Python.
- Next.js (App Router), TanStack Query, shadcn/ui; FastAPI, Postgres, pydantic/SQLModel.
- Auth (OTP/JWT/OAuth), payments, caching, pagination, API versioning.
- Practical CI/CD and observability (logs/metrics/traces).
Nice-to-haves
- OpenAPI typegen (Zod), feature flags, background jobs/queues, Vercel/EAS.
Key Outcomes (ongoing)
- Stable architecture with typed contracts; <2% crash/error on web, p95 API latency in budget, reliable weekly releases.
Responsibilities:
- Design, build, and maintain backend services and APIs using Python frameworks such as FastAPI or Django.
- Implement RAG-based features and services, including document ingestion pipelines, vector indexing, and retrieval logic using modern LLM tooling (a chunking sketch follows this list).
- Build robust data ingestion, scraping, and automation workflows (web scraping, headless browsers, APIs) to integrate with third-party systems and internal tools.
- Develop and operate ETL/ELT pipelines to move, clean, and transform data across databases, file stores, and external platforms.
- Own reliability, performance, and observability of services: logging, metrics, alerting, and debugging in production.
- Collaborate closely with product and business stakeholders to translate ambiguous workflows into clear technical designs and automation logic.
- Write clean, testable code with solid unit/integration coverage, and contribute to internal libraries, tooling, and best practices documentation.
- Participate in code reviews, architectural discussions, and mentor junior engineers on Python, RAG patterns, and automation best practices.
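As a small illustration of the ingestion pipeline's chunking step (see the RAG bullet above), a sliding-window chunker; the sizes are placeholders, not a recommendation.

```python
# Split text into fixed-size character chunks with a sliding overlap, so
# retrieval can still match passages that straddle chunk boundaries.
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

print(len(chunk_text("lorem ipsum " * 200)))  # 2400 chars -> 6 chunks
```

Production pipelines usually chunk on token or sentence boundaries instead of raw characters, but the overlap idea carries over unchanged.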
Requirements:
- 4–6 years of hands-on experience as a Python engineer building production systems (FastAPI, Django, or similar).
- Strong understanding of backend fundamentals: REST APIs, authentication/authorisation, async patterns, background jobs, and task queues.
- Experience with event-driven architectures (Kafka, SQS, RabbitMQ) and workflow engines (e.g., Temporal, Airflow, Prefect).
- Practical experience with at least one LLM/RAG stack (e.g., LangChain, LlamaIndex, custom vector store integrations) and working with embeddings, chunking, and retrieval.
- Solid experience with web scraping and automation: requests/HTTP clients, Selenium/Playwright or similar, rate-limiting, anti-bot handling, and resilient scraping patterns.
- Experience building data pipelines or ETLs: extracting from APIs/files/DBs, transforming/cleaning, and loading into relational or NoSQL stores.
- Hands-on experience with AWS or similar cloud platforms (e.g., Lambda, S3, API Gateway, ECS/Fargate, or equivalent).
- Strong debugging skills and comfort with distributed, asynchronous systems and eventual consistency.
- Ability to take loosely defined business workflows and design clean, maintainable technical solutions.
- Strong communication skills and a habit of documenting decisions, APIs, and workflows.
Nice to Have:
- Experience with vector databases (e.g., Pinecone, Weaviate, Qdrant, OpenSearch vector, etc.) and search tuning.
- Exposure to building internal tools or low-code-like automation platforms for operations or support teams.
- Prior experience integrating with ERP/CRM/marketplace or other enterprise/legacy systems.
- Understanding of cloud security, IAM, and secret management best practices.
Role Overview
We are seeking a Junior Developer with 1–3 years' experience and strong foundations in Python, databases, and AI technologies. The ideal candidate will support the development of AI-powered solutions, focusing on LLM integration, prompt engineering, and database-driven workflows. This is a hands-on role with opportunities to learn and grow into advanced AI engineering responsibilities.
Key Responsibilities
- Develop, test, and maintain Python-based applications and APIs.
- Design and optimize prompts for Large Language Models (LLMs) to improve accuracy and performance.
- Work with JSON-based data structures for request/response handling.
- Integrate and manage PostgreSQL (pgSQL) databases, including writing queries and handling data pipelines (a small sketch follows this list).
- Collaborate with the product and AI teams to implement new features.
- Debug, troubleshoot, and optimize performance of applications and workflows.
- Stay updated on advancements in LLMs, AI frameworks, and generative AI tools.
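A hedged sketch of the JSON + PostgreSQL work above using psycopg2; the connection string, table, and payload are hypothetical.

```python
# Store a JSON payload in a JSONB column and query a field back out.
import psycopg2
from psycopg2.extras import Json

payload = {"user_id": 42, "intent": "quote_request", "score": 0.93}

# Placeholder DSN; swap in real credentials.
conn = psycopg2.connect("dbname=app user=app password=secret host=localhost")
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS events (
            id   SERIAL PRIMARY KEY,
            body JSONB NOT NULL
        )
    """)
    cur.execute("INSERT INTO events (body) VALUES (%s)", (Json(payload),))
    cur.execute("SELECT body->>'intent' FROM events ORDER BY id DESC LIMIT 1")
    print(cur.fetchone()[0])  # quote_request
conn.close()
```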
Required Skills & Qualifications
- Strong knowledge of Python (scripting, APIs, data handling).
- Basic understanding of Large Language Models (LLMs) and prompt engineering techniques.
- Experience with JSON data parsing and transformations.
- Familiarity with PostgreSQL or other relational databases.
- Ability to write clean, maintainable, and well-documented code.
- Strong problem-solving skills and eagerness to learn.
- Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
Nice-to-Have (Preferred)
- Exposure to AI/ML frameworks (e.g., LangChain, Hugging Face, OpenAI APIs).
- Experience working in startups or fast-paced environments.
- Familiarity with version control (Git/GitHub) and cloud platforms (AWS, GCP, or Azure).
What We Offer
- Opportunity to work on cutting-edge AI applications in permitting & compliance.
- Collaborative, growth-focused, and innovation-driven work culture.
- Mentorship and learning opportunities in AI/LLM development.
- Competitive compensation with performance-based growth.
About koolio.ai
Website: www.koolio.ai
koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond—easily. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.
About the Full-Time Position
We are looking for a Junior QA Engineer (Fresher) to join our team on a full-time, hybrid basis. This is an exciting opportunity for a motivated fresher who is eager to learn and grow in the field of backend testing and quality assurance. You will work closely with senior engineers to ensure the reliability, performance, and scalability of koolio.ai’s backend services. This role is perfect for recent graduates who want to kickstart their career in a dynamic, innovative environment.
Key Responsibilities:
- Assist in the design and execution of test cases for backend services, APIs, and databases (a starter example follows this list)
- Perform manual and automated testing to validate the functionality and performance of backend systems
- Help identify, log, and track bugs, working closely with developers for issue resolution
- Contribute to developing automated test scripts to ensure continuous integration and deployment
- Document test cases, results, and issues in a clear and organized manner
- Continuously learn and apply testing methodologies and tools under the guidance of senior engineers
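A starter example of the kind of automated API test a junior QA engineer here might write, using pytest + requests; the base URL, endpoints, and expected fields are hypothetical.

```python
# Two simple backend API checks: a health probe and an input-validation test.
import requests

BASE_URL = "http://localhost:8000"  # hypothetical local service

def test_health_endpoint():
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"

def test_create_project_rejects_empty_body():
    resp = requests.post(f"{BASE_URL}/projects", json={}, timeout=5)
    assert resp.status_code in (400, 422)  # missing fields must be rejected
```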
Requirements and Skills:
- Education: Degree in Computer Science or a related field
- Work Experience: No prior work experience required; internships or academic projects related to software testing or backend development are a plus
- Technical Skills:
- Basic understanding of backend systems and APIs
- Familiarity with SQL for basic database testing
- Exposure to any programming or scripting language (e.g., Python, JavaScript, Java)
- Interest in learning test automation tools and frameworks such as Selenium, JUnit, or Pytest
- Familiarity with basic version control systems (e.g., Git)
- Soft Skills:
- Eagerness to learn and apply new technologies in a fast-paced environment
- Strong analytical and problem-solving skills
- Excellent attention to detail and a proactive mindset
- Ability to communicate effectively and work in a collaborative, remote team
- Other Skills:
- Familiarity with API testing tools (e.g., Postman) or automation tools is a bonus but not mandatory
- Basic knowledge of testing methodologies and the software development life cycle is helpful
Compensation and Benefits:
- Total Yearly Compensation: ₹4.5-6 LPA based on skills and experience
- Health Insurance: Comprehensive health coverage provided by the company
Why Join Us?
- Be a part of a passionate and visionary team at the forefront of audio content creation
- Work on an exciting, evolving product that is reshaping the way audio content is created and consumed
- Thrive in a fast-moving, self-learning startup environment that values innovation, adaptability, and continuous improvement
- Enjoy the flexibility of a full-time hybrid position with opportunities to grow professionally and expand your skills
- Collaborate with talented professionals from around the world, contributing to a product that has a real-world impact
We are building cutting-edge AI products in the Construction Tech space – transforming how General Contractors, Estimators, and Project Managers manage bids, RFIs, and scope gaps. Our platform integrates AI Agents, voice automation, and vision systems to reduce hours of manual work and unlock new efficiencies for construction teams.
Joining us means you will be part of a lean, high-impact team working on production-ready AI workflows that touch real projects in the field.
Role Overview
We are seeking a part-time consultant (10–15 hours/week) with strong backend development skills in Python (backend APIs) and ReactJS (frontend UI). You will work closely with the founding team to design, develop, and deploy features across the stack, directly contributing to our AI-driven modules.
Key Responsibilities
- Build and maintain modular Python APIs (FastAPI/Flask) with clean architecture.
- You must have at least 2–4 years of hands-on backend Python experience (excluding training and internships)
- We are ONLY looking for Backend Developers; Python-based Data Science and Analyst roles are not a match.
- Integrate AI services (OpenAI, LangChain, OCR/vision libraries) into production flows.
- Work with AWS services (Lambda, S3, RDS/Postgres, CloudWatch) for deployment.
- Collaborate with founders to convert fuzzy product ideas into technical deliverables.
- Ensure production readiness: logging, CI/CD pipelines, error handling, and test coverage.
Part-Time Eligibility Check -
- This is a fixed monthly paid role - NOT hourly
- We are a funded startup and, for compliance reasons, payments are generally prorated to your current monthly drawings (no negotiation on this)
- You should have 2-3 hours per day to code
- You should be a pro in AI-based Coding. We ship code really fast.
- You need to know how to use tools like ChatGPT to generate solutions (not just code) and Cursor to build those solutions
- You will be assigned an independent task every week - we run 2 weeks of sprints
- I have read the requirements and am okay to proceed (used to filter out spam applications)
Job ID: 319083
What You’ll Be Doing:
● Own the architecture and roadmap for scalable, secure, and high-quality data pipelines and platforms.
● Lead and mentor a team of data engineers while establishing engineering best practices, coding standards, and governance models.
● Design and implement high-performance ETL/ELT pipelines using modern Big Data technologies for diverse internal and external data sources (a minimal PySpark sketch follows this list).
● Drive modernization initiatives, including re-architecting legacy systems to support next-generation data products, ML workloads, and analytics use cases.
● Partner with Product, Engineering, and Business teams to translate requirements into robust technical solutions that align with organizational priorities.
● Champion data quality, monitoring, metadata management, and observability across the ecosystem.
● Lead initiatives to improve cost efficiency, data delivery SLAs, automation, and infrastructure scalability.
● Provide technical leadership on data modeling, orchestration, CI/CD for data workflows, and cloud-based architecture improvements.
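A minimal PySpark sketch of an ETL step like those described above (Spark and Databricks appear in the qualifications); the paths and columns are illustrative placeholders.

```python
# Read raw CSVs, clean and deduplicate, aggregate, and write a curated
# Parquet output. Paths, schema, and business rules are invented examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

orders = (
    spark.read.option("header", True).csv("s3://raw/orders/*.csv")  # placeholder
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])
    .filter(F.col("amount") > 0)  # drop refunds/garbage rows
)

daily = orders.groupBy("order_date").agg(F.sum("amount").alias("revenue"))
daily.write.mode("overwrite").parquet("s3://curated/daily_revenue/")  # placeholder
```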
Qualifications:
● Bachelor's degree in Engineering, Computer Science, or a relevant field.
● 8+ years of relevant and recent experience in a Data Engineer role.
● 5+ years of recent experience with Apache Spark and a solid understanding of the fundamentals.
● Deep understanding of Big Data concepts and distributed systems.
● Demonstrated ability to design, review, and optimize scalable data architectures across ingestion.
● Strong coding skills in Scala and Python, with the ability to switch between them with ease.
● Advanced working SQL knowledge and experience with a variety of relational databases such as Postgres and/or MySQL.
● Cloud experience with Databricks.
● Strong understanding of Delta Lake architecture and working with Parquet, JSON, CSV, and similar formats.
● Experience establishing and enforcing data engineering best practices, including CI/CD for data, orchestration and automation, and metadata management.
● Comfortable working in an Agile environment.
● Machine Learning knowledge is a plus.
● Demonstrated ability to operate independently, take ownership of deliverables, and lead technical decisions.
● Excellent written and verbal communication skills in English.
● Experience supporting and working with cross-functional teams in a dynamic environment.
REPORTING: This position will report to Sr. Technical Manager or Director of Engineering as assigned by Management.
EMPLOYMENT TYPE: Full-Time, Permanent
SHIFT TIMINGS: 10:00 AM - 07:00 PM IST
At Pipaltree, we’re building an AI-enabled platform that helps brands understand how they’re truly perceived — not through surveys or static dashboards, but through real conversations happening across the world.
We’re a small team solving deep technical and product challenges: orchestrating large-scale conversation data, applying reasoning and summarization models, and turning this into insights that businesses can trust.
Requirements:
- Deep understanding of distributed systems and asynchronous programming in Python (a minimal asyncio sketch follows this list)
- Experience with building scalable applications using LLMs or traditional ML techniques
- Experience with databases, caches, and microservices
- Experience with DevOps is a huge plus
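A minimal sketch of asynchronous fan-out in Python, per the first requirement above; aiohttp and the target URLs are assumptions for illustration.

```python
# Fetch several URLs concurrently with asyncio + aiohttp instead of serially.
import asyncio
import aiohttp

async def fetch(session: aiohttp.ClientSession, url: str) -> int:
    async with session.get(url) as resp:
        return resp.status

async def main() -> None:
    urls = ["https://example.com"] * 3  # placeholder targets
    async with aiohttp.ClientSession() as session:
        statuses = await asyncio.gather(*(fetch(session, u) for u in urls))
        print(statuses)  # e.g., [200, 200, 200]

asyncio.run(main())
```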
About Us
We are a company where the ‘HOW’ of building software is just as important as the ‘WHAT.’ We partner with large organizations to modernize legacy codebases and collaborate with startups to launch MVPs, scale, or act as extensions of their teams. Guided by Software Craftsmanship values and eXtreme Programming Practices, we deliver high-quality, reliable software solutions tailored to our clients' needs.
We strive to:
- Bring our clients' dreams to life by being their trusted engineering partners, crafting innovative software solutions.
- Challenge offshore development stereotypes by delivering exceptional quality and proving the value of craftsmanship.
- Empower clients to deliver value quickly and frequently to their end users.
- Ensure long-term success for our clients by building reliable, sustainable, and impactful solutions.
- Raise the bar of software craft by setting a new standard for the community.
Job Description
This is a remote position.
Our Core Values
- Quality with Pragmatism: We aim for excellence with a focus on practical solutions.
- Extreme Ownership: We own our work and its outcomes fully.
- Proactive Collaboration: Teamwork elevates us all.
- Pursuit of Mastery: Continuous growth drives us.
- Effective Feedback: Honest, constructive feedback fosters improvement.
- Client Success: Our clients’ success is our success.
Experience Level
This role is ideal for engineers with 6+ years of hands-on software development experience, particularly in Python and ReactJs at scale.
Role Overview
If you’re a Software Craftsperson who takes pride in clean, test-driven code and believes in Extreme Programming principles, we’d love to meet you. At Incubyte, we’re a DevOps organization where developers own the entire release cycle, meaning you’ll get hands-on experience across programming, cloud infrastructure, client communication, and everything in between. Ready to level up your craft and join a team that’s as quality-obsessed as you are? Read on!
What You'll Do
- Write Tests First: Start by writing tests to ensure code quality (a tiny example follows this list)
- Clean Code: Produce self-explanatory, clean code with predictable results
- Frequent Releases: Make frequent, small releases
- Pair Programming: Work in pairs for better results
- Peer Reviews: Conduct peer code reviews for continuous improvement
- Product Team: Collaborate in a product team to build and rapidly roll out new features and fixes
- Full Stack Ownership: Handle everything from the front end to the back end, including infrastructure and DevOps pipelines
- Never Stop Learning: Commit to continuous learning and improvement
- AI-First Development Focus
- Leverage AI tools like GitHub Copilot, Cursor, Augment, Claude Code, etc., to accelerate development and automate repetitive tasks.
- Use AI to detect potential bugs, code smells, and performance bottlenecks early in the development process.
- Apply prompt engineering techniques to get the best results from AI coding assistants.
- Evaluate AI-generated code and tools for correctness, performance, and security before merging.
- Continuously explore: stay ahead by experimenting with and integrating new AI-powered tools and workflows as they emerge.
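A tiny test-first illustration in the spirit of "Write Tests First": the tests below are written before, and specify, the implementation. The slugify function is an invented example, not part of our codebase.

```python
# Test-driven sketch: the tests define the behavior; the implementation
# below is the minimal code that makes them pass (run with pytest).
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_slugify_strips_extra_spaces():
    assert slugify("  Clean   Code ") == "clean-code"

def slugify(title: str) -> str:
    return "-".join(title.lower().split())
```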
Requirements
What We're Looking For
- Proficiency in some or all of the following: ReactJS, JavaScript, Object Oriented Programming in JS
- 5+ years of Object-Oriented Programming with Python or equivalent
- 5+ years of experience working with relational (SQL) databases
- 5+ years of experience using Git to contribute code as part of a team of Software Craftspeople
- AI Skills & Mindset
- Power user of AI assisted coding tools (e.g., GitHub Copilot, Cursor, Augment, Claude Code).
- Strong prompt engineering skills to effectively guide AI in crafting relevant, high-quality code.
- Ability to critically evaluate AI generated code for logic, maintainability, performance, and security.
- Curiosity and adaptability to quickly learn and apply new AI tools and workflows.
- AI evaluation mindset balancing AI speed with human judgment for robust solutions.
Benefits
What We Offer
- Dedicated Learning & Development Budget: Fuel your growth with a budget dedicated solely to learning.
- Conference Talks Sponsorship: Amplify your voice! If you’re speaking at a conference, we’ll fully sponsor and support your talk.
- Cutting-Edge Projects: Work on exciting projects with the latest AI technologies
- Employee-Friendly Leave Policy: Recharge with ample leave options designed for a healthy work-life balance.
- Comprehensive Medical & Term Insurance: Full coverage for you and your family’s peace of mind.
- And More: Extra perks to support your well-being and professional growth.
Work Environment
- Remote-First Culture: At Incubyte, we thrive on a culture of structured flexibility — while you have control over where and how you work, everyone commits to a consistent rhythm that supports their team during core working hours for smooth collaboration and timely project delivery. By striking the perfect balance between freedom and responsibility, we enable ourselves to deliver high-quality standards our customers recognize us by. With asynchronous tools and push for active participation, we foster a vibrant, hands-on environment where each team member’s engagement and contributions drive impactful results.
- Work-In-Person: Twice a year, we come together for two-week sprints to collaborate in person, foster stronger team bonds, and align on goals. Additionally, we host an annual retreat to recharge and connect as a team. All travel expenses are covered.
- Proactive Collaboration: Collaboration is central to our work. Through daily pair programming sessions, we focus on mentorship, continuous learning, and shared problem-solving. This hands-on approach keeps us innovative and aligned as a team.
Incubyte is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Detailed JD (Roles and Responsibilities)
Full-stack (backend-focused) ownership. Programming: Python and React (good to have: C# and Node). Agile. Flexible and willing to learn new things.