50+ Remote Python Jobs in India
Apply to 50+ Remote Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!
At Palcode.ai, we're on a mission to fix the massive inefficiencies in pre-construction. Think about it: in a $10 trillion industry, estimators still spend weeks analyzing bids, project managers struggle with scattered data, and costly mistakes slip through complex contracts. We're fixing this with purpose-built AI agents that work. Our platform cuts pre-construction workflows from weeks to hours. It's not just about AI; it's about bringing real, measurable impact to an industry ready for change. We are backed by names like AWS for Startups, Upekkha Accelerator, and Microsoft for Startups.
Why Palcode.ai
Tackle Complex Problems: Build AI that reads between the lines of construction bids, spots hidden risks in contracts, and makes sense of fragmented project data
High-Impact Code: Your code won't sit in a backlog – it goes straight to estimators and project managers who need it yesterday
Tech Challenges That Matter: Design systems that process thousands of construction documents, handle real-time pricing data, and make intelligent decisions
Build & Own: Shape our entire tech stack, from data processing pipelines to AI model deployment
Quick Impact: Small team, huge responsibility. Your solutions directly impact project decisions worth millions
Learn & Grow: Master the intersection of AI, cloud architecture, and construction tech while working with founders who've built and scaled construction software
Your Role:
- Design and build our core AI services and APIs using Python
- Create reliable, scalable backend systems that handle complex data
- Help set up cloud infrastructure and deployment pipelines
- Collaborate with our AI team to integrate machine learning models
- Write clean, tested, production-ready code
You'll fit right in if:
- You have 1 year of hands-on Python development experience
- You're comfortable with full-stack development and cloud services
- You write clean, maintainable code and follow good engineering practices
- You're curious about AI/ML and eager to learn new technologies
- You enjoy fast-paced startup environments and take ownership of your work
How we will set you up for success
- You will work closely with the Founding team to understand what we are building.
- You will be given comprehensive training on the tech stack, with the option of virtual training as well.
- You will be involved in a monthly one-on-one with the founders to discuss feedback
- A unique opportunity to learn from the best: we are Gold partners in the AWS, Razorpay, and Microsoft startup programs, giving you access to experienced people to discuss and brainstorm ideas with.
- You’ll have a lot of creative freedom to execute new ideas. As long as you can convince us, and you’re confident in your skills, we’re here to back you in your execution.
Location: Bangalore, Remote
Compensation: Competitive salary + Meaningful equity
If you get excited about solving hard problems that have real-world impact, we should talk.
All the best!!
About the Role
We are looking for a passionate AI Engineer Intern (B.Tech, M.Tech / M.S. or equivalent) with strong foundations in Artificial Intelligence, Computer Vision, and Deep Learning to join our R&D team.
You will help us build and train realistic face-swap and deepfake video models, powering the next generation of AI-driven video synthesis technology.
This is a remote, individual-contributor role offering exposure to cutting-edge AI model development in a startup-like environment.
Key Responsibilities
- Research, implement, and fine-tune face-swap / deepfake architectures (e.g., FaceSwap, SimSwap, DeepFaceLab, LatentSync, Wav2Lip).
- Train and optimize models for realistic facial reenactment and temporal consistency.
- Work with GANs, VAEs, and diffusion models for video synthesis.
- Handle dataset creation, cleaning, and augmentation for face-video tasks.
- Collaborate with the AI core team to deploy trained models in production environments.
- Maintain clean, modular, and reproducible pipelines using Git and experiment-tracking tools.
Required Qualifications
- B.Tech, M.Tech / M.S. (or equivalent) in AI / ML / Computer Vision / Deep Learning.
- Certifications in AI or Deep Learning (DeepLearning.AI, NVIDIA DLI, Coursera, etc.).
- Proficiency in PyTorch or TensorFlow, OpenCV, FFmpeg.
- Understanding of CNNs, Autoencoders, GANs, Diffusion Models.
- Familiarity with datasets like CelebA, VoxCeleb, FFHQ, DFDC, etc.
- Good grasp of data preprocessing, model evaluation, and performance tuning.
Preferred Skills
- Prior hands-on experience with face-swap or lip-sync frameworks.
- Exposure to 3D morphable models, NeRF, motion transfer, or facial landmark tracking.
- Knowledge of multi-GPU training and model optimization.
- Familiarity with Rust / Python backend integration for inference pipelines.
What We Offer
- Work directly on production-grade AI video synthesis systems.
- Remote-first, flexible working hours.
- Mentorship from senior AI researchers and engineers.
- Opportunity to transition into a full-time role upon outstanding performance.
Location: Remote | Stipend: ₹10,000/month | Duration: 3–6 months
We are building an AI-powered chatbot platform and looking for an AI/ML Engineer with strong backend skills as our first technical hire. You will be responsible for developing the core chatbot engine using LLMs, creating backend APIs, and building scalable RAG pipelines.
You should be comfortable working independently, shipping fast, and turning ideas into real product features. This role is ideal for someone who loves building with modern AI tools and wants to be part of a fast-growing product from day one.
Responsibilities
• Build the core AI chatbot engine using LLMs (OpenAI, Claude, Gemini, Llama etc.)
• Develop backend services and APIs using Python (FastAPI/Flask)
• Create RAG pipelines using vector databases (Pinecone, FAISS, Chroma)
• Implement embeddings, prompt flows, and conversation logic
• Integrate chatbot with web apps, WhatsApp, CRMs and 3rd-party APIs
• Ensure system reliability, performance, and scalability
• Work directly with the founder in shaping the product and roadmap
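The RAG responsibilities above center on one step: retrieving relevant context before prompting the LLM. Below is a minimal, self-contained sketch of that retrieval step. It substitutes a toy bag-of-words similarity for real model embeddings and a vector database (Pinecone/FAISS/Chroma); all names and the sample documents are illustrative, not part of this role's actual codebase.

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy 'embedding': a bag-of-words Counter (real pipelines use model embeddings)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=2):
    """Return the top-k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Refund policy: refunds are issued within 7 days.",
    "Shipping takes 3 to 5 business days.",
    "Our chatbot supports WhatsApp and web integrations.",
]
context = retrieve("how long do refunds take", docs, k=1)
# The retrieved context is then stuffed into the LLM prompt:
prompt = f"Answer using this context:\n{context[0]}\nQuestion: how long do refunds take"
```

In production, `embed` would call an embedding model and `retrieve` would query a vector index, but the prompt-assembly pattern stays the same.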
Requirements
• Strong experience with LLMs & Generative AI
• Excellent Python skills with FastAPI/Flask
• Hands-on experience with LangChain or RAG architectures
• Vector database experience (Pinecone/FAISS/Chroma)
• Strong understanding of REST APIs and backend development
• Ability to work independently, experiment fast, and deliver clean code
Nice to Have
• Experience with cloud (AWS/GCP)
• Node.js knowledge
• LangGraph, LlamaIndex
• MLOps or deployment experience
Mission
Own architecture across web + backend, ship reliably, and establish patterns the team can scale on.
Responsibilities
- Lead system architecture for Next.js (web) and FastAPI (backend); own code quality, reviews, and release cadence.
- Build and maintain the web app (marketing, auth, dashboard) and a shared TS SDK (@revilo/contracts, @revilo/sdk).
- Integrate Stripe, Maps, analytics; enforce accessibility and performance baselines.
- Define CI/CD (GitHub Actions), containerization (Docker), env/promotions (staging → prod).
- Partner with Mobile and AI engineers on API/tool schemas and developer experience.
Requirements
- 6–10+ years; expert TypeScript, strong Python.
- Next.js (App Router), TanStack Query, shadcn/ui; FastAPI, Postgres, pydantic/SQLModel.
- Auth (OTP/JWT/OAuth), payments, caching, pagination, API versioning.
- Practical CI/CD and observability (logs/metrics/traces).
Nice-to-haves
- OpenAPI typegen (Zod), feature flags, background jobs/queues, Vercel/EAS.
Key Outcomes (ongoing)
- Stable architecture with typed contracts; <2% crash/error on web, p95 API latency in budget, reliable weekly releases.
Responsibilities:
- Develop and maintain RPA workflows using Selenium, AWS Lambda, and message queues (SQS/RabbitMQ/Kafka).
- Build and evolve internal automation frameworks, reusable libraries, and CI-integrated test suites to accelerate developer productivity.
- Develop comprehensive test strategies (unit, integration, end-to-end), optimize performance, handle exceptions, and ensure high reliability of automation scripts.
- Monitor automation health and maintain dashboards/logging via cloud tools (CloudWatch, ELK, etc.).
- Champion automation standards workshops, write documentation, and coach other engineers on test-driven development and behavior-driven automation.
Requirements:
- 4-5 years of experience in automation engineering with deep/hands-on experience in Python and modern browser automation frameworks (Selenium/PythonRPA).
- Solid background with desktop-automation solutions (UiPath, PythonRPA) and scripting legacy applications.
- Strong debugging skills, with an eye for edge cases and race conditions in distributed, asynchronous systems.
- Hands-on experience with AWS services like Lambda, S3 and API Gateway.
- Familiarity with REST APIs, webhooks, and queue-based async processing.
- Experience integrating with third-party platforms or enterprise systems.
- Ability to translate business workflows into technical automation logic.
- Able to evangelize automation best practices, present complex ideas clearly, and drive cross-team alignment.
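Queue-based async processing with reliable exception handling, as described above, usually comes down to a drain-retry-quarantine loop. The sketch below shows that pattern with Python's standard-library `queue`; the task names and handler are made up for illustration, and a real deployment would use SQS/RabbitMQ/Kafka plus real backoff sleeps and a dead-letter queue.

```python
import queue

def backoff_delays(retries, base=1.0, cap=30.0):
    """Exponential backoff schedule: base * 2^n, capped (jitter omitted for clarity)."""
    return [min(cap, base * (2 ** n)) for n in range(retries)]

def process(task_queue, handler, max_retries=3):
    """Drain a queue, retrying failed tasks up to max_retries times."""
    results, failed = [], []
    while not task_queue.empty():
        task = task_queue.get()
        for attempt in range(max_retries + 1):
            try:
                results.append(handler(task))
                break
            except Exception:
                if attempt == max_retries:
                    failed.append(task)  # would go to a dead-letter queue in production
    return results, failed

q = queue.Queue()
for t in ["ok", "flaky", "broken"]:
    q.put(t)

attempts = {"flaky": 0}
def handler(task):
    """Simulated automation step: 'flaky' succeeds on retry, 'broken' never does."""
    if task == "broken":
        raise RuntimeError("permanent failure")
    if task == "flaky":
        attempts["flaky"] += 1
        if attempts["flaky"] < 2:
            raise RuntimeError("transient failure")
    return task.upper()

results, failed = process(q, handler, max_retries=3)
```

Separating transient from permanent failures this way is what makes browser automation resilient to the race conditions and edge cases mentioned above.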
Nice to Have:
- Experience with RPA frameworks (UiPath, BluePrism, etc.).
- Familiarity with building LLM-based workflows (LangChain, LlamaIndex) or custom agent loops to automate cognitive tasks.
- Exposure to automotive dealer management or VMS platforms.
- Understanding of cloud security and IAM practices.
We are looking for a Senior Data Engineer/Developer with 10+ years of experience to be a key contributor to our data-driven initiatives. This role is primarily focused on development: designing and building data models, writing complex SQL, developing ETL processes, and contributing to our data architecture. The secondary focus is applying your deep database knowledge to performance tuning, query optimization, and collaborating on DBA-related support activities in AWS environments (RDS, Redshift, SQL Server, Snowflake). The ideal candidate is a builder who understands how to get the most out of a database platform.
Key Responsibilities
Data Development & Engineering (Primary Focus):
· Design & Development: Architect, design, and implement efficient, scalable, and sustainable data models and database schemas.
· Advanced SQL Programming: Write sophisticated, highly-optimized SQL code for complex business logic, data retrieval, and manipulation within MySQL RDS, SQL Server, and AWS Redshift.
· Data Pipeline & ETL Development: Collaborate with engineering teams to design, build, and maintain robust ETL processes and data pipeline integrations.
· Automation & Scripting: Utilize Python as a primary tool for scripting, automation, data processing, and enhancing platform capabilities.
· CI/CD Ownership: Own and enhance CI/CD pipelines for database deployments, schema migrations, and automated testing, ensuring smooth and reliable releases.
· Solution Collaboration: Collaborate with application engineering teams to deliver scalable, secure, and performant data solutions and APIs.
Database Administration & Optimization (Secondary Focus):
· Performance Tuning: Proactively identify and resolve performance bottlenecks, including slow-running queries, indexing strategies, and resource contention. Use tools like SQL Sentry for deep diagnostics.
· Operational Support: Perform essential DBA activities such as supporting backup/recovery strategies, contributing to high-availability designs, and assisting with patch management plans.
· AWS Data Management: Administer and optimize AWS RDS and Redshift instances, leveraging knowledge of DB Clusters (Read Replicas, Multi-AZ) for development and testing.
· Monitoring & Reliability: Monitor data platform health using Amazon CloudWatch, xMatters, and other tools to ensure high availability and reliability, tackling issues as they arise.
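The performance-tuning work above often starts with one question: is the query using an index or scanning the table? Each engine has its own diagnostics (`EXPLAIN` in MySQL, execution plans in SQL Server, `EXPLAIN` in Redshift); the sketch below uses SQLite from Python's standard library purely to illustrate the before/after-index workflow, with a made-up `orders` table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

# Before indexing: the planner falls back to a full table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchall()

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# After indexing: the same query is served by the index.
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchall()

print(plan_before[-1][-1])  # e.g. a SCAN over orders
print(plan_after[-1][-1])   # e.g. a SEARCH using idx_orders_customer
```

The same scan-versus-seek check, done with the native tooling, is usually the first step in resolving the slow-query bottlenecks this role owns.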
Architecture & Mentorship:
· Contribute to architectural decisions and infrastructure modernization efforts on AWS and Snowflake.
· Provide technical guidance and mentorship to other developers on best practices in database design and SQL.
Required Qualifications & Experience
· 10+ years of experience in a data engineering, database development, or software development role with a heavy focus on data.
· Expert-level SQL programming skills with extensive experience in MySQL (AWS RDS), Microsoft SQL Server, Redshift, and Snowflake.
· Strong development skills in Python for Lambda / Glue development on AWS.
· Hands-on experience designing and optimizing for AWS Redshift
· Proven experience in performance tuning and optimization of complex queries and data models.
· Solid understanding of ETL concepts, processes, and tools.
· Experience with CI/CD tools (e.g., Bitbucket Pipelines, Jenkins) for automating database deployments.
· Experience managing production data environments and troubleshooting platform issues.
· Excellent written and verbal communication skills, with the ability to work effectively in a remote team.
Preferred Skills (Nice-to-Have)
· Understanding of Data Governance
· Experience with Snowflake, particularly around architecture, agents, data sharing, security, and performance.
· Knowledge of infrastructure-as-code (IaC) tools like CloudFormation.
Work Schedule & Conditions
· This is a 100% remote, long-term opportunity.
· The standard work week will be Wednesday through Sunday.
· Your designated days off will be Monday and Tuesday.
· You must be willing to work partially overlapping hours with Eastern Standard Time (EST) to ensure collaboration with the team and support during core business hours
Job Title: Python Backend Developer
Experience: 6+ Years
Location: Remote
Job Summary:
We are looking for an experienced Python Backend Developer to design, build, and maintain scalable, high-performance backend systems. The ideal candidate should have strong expertise in Python frameworks, database design, API development, and cloud-based architectures. You will collaborate closely with front-end developers, DevOps engineers, and product teams to deliver robust, secure, and efficient backend solutions.
Key Responsibilities:
- Design, develop, and maintain scalable and efficient backend services using Python.
- Build RESTful and GraphQL APIs for front-end and third-party integrations.
- Optimize application performance and ensure system reliability and scalability.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Develop and maintain database schemas, stored procedures, and data models.
- Implement security and data protection best practices.
- Write clean, maintainable, and well-documented code following coding standards.
- Conduct code reviews, troubleshoot issues, and provide technical mentorship to junior developers.
- Integrate applications with cloud services (AWS / Azure / GCP) and CI/CD pipelines.
- Monitor system performance and handle production deployments.
Required Skills and Qualifications:
- 6+ years of hands-on experience in backend development using Python.
- Strong knowledge of Python frameworks such as Django, Flask, or FastAPI.
- Experience in RESTful API and microservices architecture.
- Proficiency in SQL and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Redis).
- Strong understanding of OOP concepts, design patterns, and asynchronous programming.
- Hands-on experience with Docker, Kubernetes, and CI/CD tools (Jenkins, GitHub Actions, etc.).
- Experience with cloud platforms (AWS, Azure, or GCP).
- Proficient in version control systems like Git.
- Solid understanding of unit testing, integration testing, and test automation frameworks (PyTest, Unittest).
- Familiarity with message brokers (RabbitMQ, Kafka, Celery) is a plus.
- Knowledge of containerized deployments and serverless architecture is an advantage.
Education:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
Preferred Qualifications:
- Experience working in Agile/Scrum development environments.
- Exposure to DevOps tools and monitoring systems (Prometheus, Grafana, ELK).
- Contribution to open-source projects or community participation.
Soft Skills:
- Excellent problem-solving and analytical skills.
- Strong communication and teamwork abilities.
- Proactive attitude with attention to detail.
- Ability to work independently and mentor junior team members.
About Synorus
Synorus is building a next-generation ecosystem of AI-first products. Our flagship legal-AI platform LexVault is redefining legal research, drafting, knowledge retrieval, and case intelligence using domain-tuned LLMs, private RAG pipelines, and secure reasoning systems.
If you are passionate about AI, legaltech, and training high-performance models — this internship will put you on the front line of innovation.
Role Overview
We are seeking passionate AI/LLM Engineering Interns who can:
- Fine-tune LLMs for legal domain use-cases
- Train and experiment with open-source foundation models
- Work with large datasets efficiently
- Build RAG pipelines and text-processing frameworks
- Run model training workflows on Google Colab / Kaggle / Cloud GPUs
This is a hands-on engineering and research internship — you will work directly with senior founders & technical leadership.
Key Responsibilities
- Fine-tune transformer-based models (Llama, Mistral, Gemma, etc.)
- Build and preprocess legal datasets at scale
- Develop efficient inference & training pipelines
- Evaluate models for accuracy, hallucinations, and trustworthiness
- Implement RAG architectures (vector DBs + embeddings)
- Work with GPU environments (Colab/Kaggle/Cloud)
- Contribute to model improvements, prompt engineering & safety tuning
Must-Have Skills
- Strong knowledge of Python & PyTorch
- Understanding of LLMs, Transformers, Tokenization
- Hands-on experience with HuggingFace Transformers
- Familiarity with LoRA/QLoRA, PEFT training
- Data wrangling: Pandas, NumPy, tokenizers
- Ability to handle multi-GB datasets efficiently
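LoRA, listed in the must-have skills above, sidesteps full fine-tuning by learning a low-rank update ΔW = B·A to each frozen weight matrix. The pure-Python sketch below shows only that core idea at a tiny scale; it is not the Hugging Face PEFT API, and the matrix sizes and values are arbitrary placeholders.

```python
def matmul(X, Y):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

d, r = 8, 2  # hidden size and LoRA rank (kept tiny for illustration)

W = [[0.0] * d for _ in range(d)]   # frozen pretrained weight, d x d
B = [[0.1] * r for _ in range(d)]   # trainable low-rank factor, d x r
A = [[0.1] * d for _ in range(r)]   # trainable low-rank factor, r x d

delta = matmul(B, A)                # low-rank update, d x d
W_adapted = [[W[i][j] + delta[i][j] for j in range(d)] for i in range(d)]

full_params = d * d                 # parameters a full fine-tune would touch
lora_params = d * r + r * d         # parameters LoRA actually trains
```

At realistic sizes the savings dominate: for d = 4096 and r = 8, the full matrix has ~16.8M parameters while the LoRA factors have only ~65K, which is why rank and target modules are the main knobs in PEFT/QLoRA configs.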
Bonus Skills
(Not mandatory — but a strong plus)
- Experience with RAG / vector DBs (Chroma, Qdrant, LanceDB)
- Familiarity with vLLM, llama.cpp, GGUF
- Worked on summarization, Q&A or document-AI projects
- Knowledge of legal texts (Indian laws/case-law/statutes)
- Open-source contributions or research work
What You Will Gain
- Real-world training on LLM fine-tuning & legal AI
- Exposure to production-grade AI pipelines
- Direct mentorship from engineering leadership
- Research + industry project portfolio
- Letter of experience + potential full-time offer
Ideal Candidate
- You experiment with models on weekends
- You love pushing GPUs to their limits
- You prefer research + implementation over theory alone
- You want to build AI that matters — not just demos
Location - Remote
Stipend - ₹5K–₹10K per month
Role Overview
We are seeking a Junior Developer with 1–3 years' experience and strong foundations in Python, databases, and AI technologies. The ideal candidate will support the development of AI-powered solutions, focusing on LLM integration, prompt engineering, and database-driven workflows. This is a hands-on role with opportunities to learn and grow into advanced AI engineering responsibilities.
Key Responsibilities
- Develop, test, and maintain Python-based applications and APIs.
- Design and optimize prompts for Large Language Models (LLMs) to improve accuracy and performance.
- Work with JSON-based data structures for request/response handling.
- Integrate and manage PostgreSQL (pgSQL) databases, including writing queries and handling data pipelines.
- Collaborate with the product and AI teams to implement new features.
- Debug, troubleshoot, and optimize performance of applications and workflows.
- Stay updated on advancements in LLMs, AI frameworks, and generative AI tools.
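The prompt-design and JSON-handling responsibilities above usually meet in one place: asking the LLM for structured JSON and validating the reply before it touches the database. The sketch below shows that pattern with only the standard library; the permit fields and prompt wording are hypothetical examples, not this team's actual schema.

```python
import json

# Hypothetical prompt template constraining the model to a JSON schema.
PROMPT_TEMPLATE = (
    "Extract the permit details from the text below and reply ONLY with JSON "
    'matching {{"permit_id": str, "status": str}}.\n\nText: {text}'
)

REQUIRED_FIELDS = {"permit_id": str, "status": str}

def build_prompt(text):
    return PROMPT_TEMPLATE.format(text=text)

def parse_llm_response(raw):
    """Validate an LLM's JSON reply; raise ValueError on malformed output."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"response was not valid JSON: {exc}") from None
    for field, ftype in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), ftype):
            raise ValueError(f"missing or mistyped field: {field}")
    return data

# A well-formed reply passes validation and is safe to insert into PostgreSQL.
ok = parse_llm_response('{"permit_id": "P-123", "status": "approved"}')
```

Validating at this boundary keeps malformed model output from corrupting downstream queries and data pipelines.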
Required Skills & Qualifications
- Strong knowledge of Python (scripting, APIs, data handling).
- Basic understanding of Large Language Models (LLMs) and prompt engineering techniques.
- Experience with JSON data parsing and transformations.
- Familiarity with PostgreSQL or other relational databases.
- Ability to write clean, maintainable, and well-documented code.
- Strong problem-solving skills and eagerness to learn.
- Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
Nice-to-Have (Preferred)
- Exposure to AI/ML frameworks (e.g., LangChain, Hugging Face, OpenAI APIs).
- Experience working in startups or fast-paced environments.
- Familiarity with version control (Git/GitHub) and cloud platforms (AWS, GCP, or Azure).
What We Offer
- Opportunity to work on cutting-edge AI applications in permitting & compliance.
- Collaborative, growth-focused, and innovation-driven work culture.
- Mentorship and learning opportunities in AI/LLM development.
- Competitive compensation with performance-based growth.
We’re seeking a highly skilled, execution-focused Senior Data Scientist with a minimum of 5 years of experience. This role demands hands-on expertise in building, deploying, and optimizing machine learning models at scale, while working with big data technologies and modern cloud platforms. You will be responsible for driving data-driven solutions from experimentation to production, leveraging advanced tools and frameworks across Python, SQL, Spark, and AWS. The role requires strong technical depth, problem-solving ability, and ownership in delivering business impact through data science.
Responsibilities
- Design, build, and deploy scalable machine learning models into production systems.
- Develop advanced analytics and predictive models using Python, SQL, and popular ML/DL frameworks (Pandas, Scikit-learn, TensorFlow, PyTorch).
- Leverage Databricks, Apache Spark, and Hadoop for large-scale data processing and model training.
- Implement workflows and pipelines using Airflow and AWS EMR for automation and orchestration.
- Collaborate with engineering teams to integrate models into cloud-based applications on AWS.
- Optimize query performance, storage usage, and data pipelines for efficiency.
- Conduct end-to-end experiments, including data preprocessing, feature engineering, model training, validation, and deployment.
- Drive initiatives independently with high ownership and accountability.
- Stay up to date with industry best practices in machine learning, big data, and cloud-native deployments.
Requirements:
- Minimum 5 years of experience in Data Science or Applied Machine Learning.
- Strong proficiency in Python, SQL, and ML libraries (Pandas, Scikit-learn, TensorFlow, PyTorch).
- Proven expertise in deploying ML models into production systems.
- Experience with big data platforms (Hadoop, Spark) and distributed data processing.
- Hands-on experience with Databricks, Airflow, and AWS EMR.
- Strong knowledge of AWS cloud services (S3, Lambda, SageMaker, EC2, etc.).
- Solid understanding of query optimization, storage systems, and data pipelines.
- Excellent problem-solving skills, with the ability to design scalable solutions.
- Strong communication and collaboration skills to work in cross-functional teams.
Benefits:
- Best in class salary: We hire only the best, and we pay accordingly.
- Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field.
- Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day.
About Us:
Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media, and Entertainment companies in the world! We’re headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies.
Today, we are a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting-edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company’s success will be huge. You’ll have the chance to work with experienced leaders who have built and led multiple tech, product, and design teams.
We are building cutting-edge AI products in the Construction Tech space – transforming how General Contractors, Estimators, and Project Managers manage bids, RFIs, and scope gaps. Our platform integrates AI Agents, voice automation, and vision systems to reduce hours of manual work and unlock new efficiencies for construction teams.
Joining us means you will be part of a lean, high-impact team working on production-ready AI workflows that touch real projects in the field.
Role Overview
We are seeking a part-time consultant (10–15 hours/week) with strong backend development skills in Python (backend APIs) and ReactJS (frontend UI). You will work closely with the founding team to design, develop, and deploy features across the stack, directly contributing to our AI-driven modules.
Key Responsibilities
- Build and maintain modular Python APIs (FastAPI/Flask) with clean architecture.
- You must have at least 24 months of hands-on backend Python experience (excluding training and internships).
- We are ONLY looking for backend developers; Python-based data science and analyst roles are not a match.
- Integrate AI services (OpenAI, LangChain, OCR/vision libraries) into production flows.
- Work with AWS services (Lambda, S3, RDS/Postgres, CloudWatch) for deployment.
- Collaborate with founders to convert fuzzy product ideas into technical deliverables.
- Ensure production readiness: logging, CI/CD pipelines, error handling, and test coverage.
Part-Time Eligibility Check -
- This is a fixed monthly paid role - NOT hourly
- We are a funded startup; for compliance reasons, payment is prorated to your current monthly drawings (no negotiations on this).
- You should have 2-3 hours per day to code
- You should be a pro in AI-based Coding. We ship code really fast.
- You should know how to use tools like ChatGPT to generate solutions (not just code), and Cursor to build those solutions. Job ID: 319083.
- You will be assigned an independent task every week - we run 2 weeks of sprints
- Confirm that you have read the requirements and are okay to proceed (this helps us filter out spam applications).
What You’ll Be Doing:
● Own the architecture and roadmap for scalable, secure, and high-quality data pipelines and platforms.
● Lead and mentor a team of data engineers while establishing engineering best practices, coding standards, and governance models.
● Design and implement high-performance ETL/ELT pipelines using modern Big Data technologies for diverse internal and external data sources.
● Drive modernization initiatives, including re-architecting legacy systems to support next-generation data products, ML workloads, and analytics use cases.
● Partner with Product, Engineering, and Business teams to translate requirements into robust technical solutions that align with organizational priorities.
● Champion data quality, monitoring, metadata management, and observability across the ecosystem.
● Lead initiatives to improve cost efficiency, data delivery SLAs, automation, and infrastructure scalability.
● Provide technical leadership on data modeling, orchestration, CI/CD for data workflows, and cloud-based architecture improvements.
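The data-quality and idempotency concerns in the responsibilities above typically land in a transform step that quarantines bad rows and deduplicates replayed events. The pure-Python sketch below illustrates that shape with made-up fields; at this role's scale it would run as a Spark/Databricks job over Delta tables rather than over in-memory lists.

```python
def transform(records):
    """Normalize raw events: quarantine rows failing quality checks, cast types, dedupe by id."""
    seen, clean, rejected = set(), [], []
    for row in records:
        if not row.get("id") or row.get("amount") in (None, ""):
            rejected.append(row)   # quarantined for data-quality metrics/observability
            continue
        if row["id"] in seen:
            continue               # skipping duplicates keeps the job idempotent on replays
        seen.add(row["id"])
        clean.append({
            "id": row["id"],
            "amount": float(row["amount"]),
            "country": (row.get("country") or "unknown").lower(),
        })
    return clean, rejected

raw = [
    {"id": "a1", "amount": "10.5", "country": "IN"},
    {"id": "a1", "amount": "10.5", "country": "IN"},  # duplicate delivery
    {"id": "",   "amount": "3.0"},                    # fails quality check
    {"id": "b2", "amount": "7", "country": None},
]
clean, rejected = transform(raw)
```

Keeping the transform a pure function of its input is what makes the CI/CD and automated-testing practices listed above feasible for data pipelines.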
Qualifications:
● Bachelor's degree in Engineering, Computer Science, or a relevant field.
● 8+ years of relevant and recent experience in a Data Engineer role.
● 5+ years of recent experience with Apache Spark and a solid understanding of the fundamentals.
● Deep understanding of Big Data concepts and distributed systems.
● Demonstrated ability to design, review, and optimize scalable data architectures across ingestion.
● Strong coding skills in Scala and Python, with the ability to switch between them with ease.
● Advanced working SQL knowledge and experience with a variety of relational databases such as Postgres and/or MySQL.
● Cloud experience with Databricks.
● Strong understanding of Delta Lake architecture and experience working with Parquet, JSON, CSV, and similar formats.
● Experience establishing and enforcing data engineering best practices, including CI/CD for data, orchestration and automation, and metadata management.
● Comfortable working in an Agile environment.
● Machine Learning knowledge is a plus.
● Demonstrated ability to operate independently, take ownership of deliverables, and lead technical decisions.
● Excellent written and verbal communication skills in English.
● Experience supporting and working with cross-functional teams in a dynamic environment.
REPORTING: This position will report to a Sr. Technical Manager or Director of Engineering, as assigned by Management.
EMPLOYMENT TYPE: Full-Time, Permanent
SHIFT TIMINGS: 10:00 AM - 07:00 PM IST
At Pipaltree, we’re building an AI-enabled platform that helps brands understand how they’re truly perceived — not through surveys or static dashboards, but through real conversations happening across the world.
We’re a small team solving deep technical and product challenges: orchestrating large-scale conversation data, applying reasoning and summarization models, and turning this into insights that businesses can trust.
Requirements:
- Deep understanding of distributed systems and asynchronous programming in Python
- Experience with building scalable applications using LLMs or traditional ML techniques
- Experience with databases, caching, and microservices
- Experience with DevOps is a huge plus
Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected.
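A minimal snippet illustrating these properties, significant indentation and dynamic typing:

```python
def parity(n):
    # Indentation, not braces, delimits this block (significant indentation).
    if n % 2 == 0:
        return "even"
    return "odd"

x = 10              # x is bound to an int without any type declaration
print(parity(x))    # prints "even"
x = "ten"           # the same name may later refer to a str (dynamic typing);
                    # type errors surface at runtime, and unreferenced objects
                    # are reclaimed automatically by the garbage collector
```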
Role: Machine Learning Engineer
Employment Type: Permanent with VDart Digital
Work Location: Remote
Key Responsibilities:
- Design and implement machine learning workflows from data ingestion to model deployment.
- Develop, train, and fine-tune supervised and unsupervised ML models for business applications.
- Translate complex business problems into ML-based solutions and measurable outcomes.
- Build and automate model training, evaluation, and deployment pipelines.
- Collaborate with cross-functional teams to understand requirements and deliver production-grade models.
- Ensure scalability, reliability, and performance of deployed models.
- Monitor model accuracy, drift, and performance, and manage continuous improvement cycles.
- Document ML experiments, workflows, and results for reproducibility.
Required Skills & Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, or related field.
- Strong hands-on experience in building and deploying ML models end-to-end.
- Expertise in machine learning algorithms, model evaluation, and workflow orchestration.
- Experience with ML frameworks such as Scikit-learn, TensorFlow, or PyTorch.
- Good understanding of MLOps practices, including CI/CD pipelines and model versioning.
- Experience in deploying ML solutions using APIs, containers, or cloud platforms (AWS, Azure, or GCP).
- Proven experience delivering real-world ML use cases such as classification, regression, forecasting, or recommendation systems.
Good to Have:
- Exposure to automation frameworks or ML pipeline tools (MLflow, Airflow, Kubeflow, etc.).
- Familiarity with data versioning and monitoring tools.
- Understanding of data governance and model lifecycle management.
About Us
We are a company where the ‘HOW’ of building software is just as important as the ‘WHAT.’ We partner with large organizations to modernize legacy codebases and collaborate with startups to launch MVPs, scale, or act as extensions of their teams. Guided by Software Craftsmanship values and eXtreme Programming Practices, we deliver high-quality, reliable software solutions tailored to our clients' needs.
We strive to:
- Bring our clients' dreams to life by being their trusted engineering partners, crafting innovative software solutions.
- Challenge offshore development stereotypes by delivering exceptional quality, and proving the value of craftsmanship.
- Empower clients to deliver value quickly and frequently to their end users.
- Ensure long-term success for our clients by building reliable, sustainable, and impactful solutions.
- Raise the bar of software craft by setting a new standard for the community.
Job Description
This is a remote position.
Our Core Values
- Quality with Pragmatism: We aim for excellence with a focus on practical solutions.
- Extreme Ownership: We own our work and its outcomes fully.
- Proactive Collaboration: Teamwork elevates us all.
- Pursuit of Mastery: Continuous growth drives us.
- Effective Feedback: Honest, constructive feedback fosters improvement.
- Client Success: Our clients’ success is our success.
Experience Level
This role is ideal for engineers with 4+ years of hands-on software development experience, particularly in Python and ReactJs at scale.
Role Overview
If you’re a Software Craftsperson who takes pride in clean, test-driven code and believes in Extreme Programming principles, we’d love to meet you. At Incubyte, we’re a DevOps organization where developers own the entire release cycle, meaning you’ll get hands-on experience across programming, cloud infrastructure, client communication, and everything in between. Ready to level up your craft and join a team that’s as quality-obsessed as you are? Read on!
What You'll Do
- Write Tests First: Start by writing tests to ensure code quality
- Clean Code: Produce self-explanatory, clean code with predictable results
- Frequent Releases: Make frequent, small releases
- Pair Programming: Work in pairs for better results
- Peer Reviews: Conduct peer code reviews for continuous improvement
- Product Team: Collaborate in a product team to build and rapidly roll out new features and fixes
- Full Stack Ownership: Handle everything from the front end to the back end, including infrastructure and DevOps pipelines
- Never Stop Learning: Commit to continuous learning and improvement
- AI-First Development Focus
- Leverage AI tools like GitHub Copilot, Cursor, Augment, Claude Code, etc., to accelerate development and automate repetitive tasks.
- Use AI to detect potential bugs, code smells, and performance bottlenecks early in the development process.
- Apply prompt engineering techniques to get the best results from AI coding assistants.
- Evaluate AI generated code/tools for correctness, performance, and security before merging.
- Continuously explore, stay ahead by experimenting and integrating new AI powered tools and workflows as they emerge.
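The tests-first loop described above can be sketched in miniature; the `slugify` function below is a hypothetical example, not code from any real project:

```python
# Red: write the test first; it fails until the behaviour exists.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# Green: write the simplest code that makes the test pass, then refactor.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# A test runner (pytest, unittest) would normally collect and run test_*
# functions; calling it directly also works for this sketch:
test_slugify_lowercases_and_hyphenates()
```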
Requirements
What We're Looking For
- Proficiency in some or all of the following: ReactJS, JavaScript, Object-Oriented Programming in JS
- 3+ years of Object-Oriented Programming with Python or equivalent
- 3+ years of experience working with relational (SQL) databases
- 3+ years of experience using Git to contribute code as part of a team of Software Craftspeople
- AI Skills & Mindset
- Power user of AI assisted coding tools (e.g., GitHub Copilot, Cursor, Augment, Claude Code).
- Strong prompt engineering skills to effectively guide AI in crafting relevant, high-quality code.
- Ability to critically evaluate AI generated code for logic, maintainability, performance, and security.
- Curiosity and adaptability to quickly learn and apply new AI tools and workflows.
- AI evaluation mindset balancing AI speed with human judgment for robust solutions.
Benefits
What We Offer
- Dedicated Learning & Development Budget: Fuel your growth with a budget dedicated solely to learning.
- Conference Talks Sponsorship: Amplify your voice! If you’re speaking at a conference, we’ll fully sponsor and support your talk.
- Cutting-Edge Projects: Work on exciting projects with the latest AI technologies
- Employee-Friendly Leave Policy: Recharge with ample leave options designed for a healthy work-life balance.
- Comprehensive Medical & Term Insurance: Full coverage for you and your family’s peace of mind.
- And More: Extra perks to support your well-being and professional growth.
Work Environment
- Remote-First Culture: At Incubyte, we thrive on a culture of structured flexibility — while you have control over where and how you work, everyone commits to a consistent rhythm that supports their team during core working hours for smooth collaboration and timely project delivery. By striking the right balance between freedom and responsibility, we deliver the high-quality standards our customers recognize us by. With asynchronous tools and a push for active participation, we foster a vibrant, hands-on environment where each team member's engagement and contributions drive impactful results.
- Work-In-Person: Twice a year, we come together for two-week sprints to collaborate in person, foster stronger team bonds, and align on goals. Additionally, we host an annual retreat to recharge and connect as a team. All travel expenses are covered.
- Proactive Collaboration: Collaboration is central to our work. Through daily pair programming sessions, we focus on mentorship, continuous learning, and shared problem-solving. This hands-on approach keeps us innovative and aligned as a team.
Incubyte is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Who We Are:
QuitSure, a digital therapeutic, is the no. 1 quit-smoking app in the world, with over 3 million downloads. Its program has the highest quit rate for smoking (71%), based on clinical trials. We were founded by alumni of IIT, IIM, and Stanford University. Our mission is to help 1 million smokers become smoke-free in the next year. Do take a look at our website: https://www.quitsure.app to learn more about what we do and how it works. We are well funded and based in Mumbai. This is a remote job.
Here is a brief presentation with an overview about QuitSure. Please navigate to the link to know more about us: https://docs.google.com/presentation/d/12ribuf2pnDvkUi37-BtrE2yIOyYubWI59c53dQ6030o/edit?usp=sharing
How to Apply:
Please fill out and submit this form to officially apply for this role:
https://forms.gle/hW4d5Rj9W5TVy9Kh8
Who could you be? (Experience and Background):
We are seeking an experienced Technical Product Manager to lead product development and drive innovation for our technology products. This role requires a unique blend of technical expertise, product vision, and leadership. You’ll define product requirements, manage cross-functional teams, and ensure the successful delivery of features that delight millions of users.
What you will be doing here:
Product Strategy & Requirements
- Write comprehensive Product Requirements Documents (PRDs) that clearly articulate features, user stories, acceptance criteria, and technical specifications
- Conduct product research to identify market opportunities, user needs, and competitive gaps
- Define product roadmaps and prioritize features based on business value, user impact, and technical feasibility
- Translate business objectives into actionable technical requirements for the engineering team
Technical Leadership
- Read and understand code to engage meaningfully with engineering discussions
- Review technical architectures and provide input on scalability, performance, and maintainability
- Make informed decisions about technical trade-offs and their impact on product timelines
- Bridge the gap between business stakeholders and technical teams through clear communication
Agile & Team Management
- Lead daily scrum calls and ensure smooth execution of sprint activities
- Manage the technical team's workflow, remove blockers, and facilitate collaboration
- Implement and optimize Agile/Scrum methodologies across the product development lifecycle
- Conduct sprint planning, retrospectives, and backlog grooming sessions
- Foster a culture of continuous improvement and data-driven decision making
Stakeholder Management
- Collaborate with design, engineering, marketing, and business teams to align on product vision
- Present product updates, roadmaps, and metrics to leadership and stakeholders
- Gather and synthesize feedback from users, customers, and internal teams
Competencies required for this role:
- Bachelor's degree in Computer Science, Engineering, Information Technology, or related technical field
- MBA or equivalent advanced degree is a plus
Experience
- 4-7 years of professional experience with at least 3 years in product management
- Proven track record of working on a consumer or enterprise application with 1 million+ downloads/users
- Hands-on experience managing technical teams and leading Agile/Scrum processes
- Demonstrated ability to ship successful products from concept to launch
Technical Skills
- Ability to read and understand code in one or more programming languages (Python, PHP, Java, JavaScript, React Native, etc.)
- Understanding of software architecture, APIs, databases, and system design principles
- Familiarity with cloud platforms (AWS, Azure, GCP) and modern tech stacks
- Proficiency with product management tools: Jira, Confluence, Asana, or similar
- Experience with analytics platforms: Google Analytics, Mixpanel, Amplitude, or similar
- Knowledge of version control systems (Git) and development workflows
Product Skills
- Strong ability to write clear, detailed PRDs and user stories
- Experience conducting user research, usability testing, and data analysis
- Proficiency in creating wireframes, mockups, or working closely with design teams
- Understanding of product metrics, A/B testing, and growth strategies
Preferred Qualifications
- Previous experience at a product-led (app-based) organization
- Experience managing products at scale with complex technical infrastructure
Key Competencies
Technical Acumen: Can dive into technical details when needed and earn credibility with engineering teams through technical competence.
Communication: Exceptional written and verbal communication skills; ability to explain complex technical concepts to non-technical stakeholders.
Leadership: Proven ability to influence without authority and drive cross-functional teams toward common goals.
Analytical Thinking: Data-driven approach to decision making with strong problem-solving abilities.
User-Centric Mindset: Deep empathy for users and passion for solving real problems.
Execution Excellence: Strong bias for action with ability to balance speed and quality.
Adaptability: Comfortable working in fast-paced, ambiguous environments with changing priorities.
What Success Looks Like in This Role
- Successful delivery of product features that meet user needs and business objectives
- Improved team velocity and engineering satisfaction scores
- Clear, well-documented product requirements that minimize rework
- High stakeholder satisfaction and alignment across teams
- Measurable improvements in key product metrics (engagement, retention, revenue, etc.)
What We Offer
- Competitive salary
- Flexible working arrangements
- Opportunity to work on innovative and impact driven products
- Directly working with founders
Looking forward to connecting with you. Hope you are excited about being part of this high-impact organisation.
Detailed JD (Roles and Responsibilities)
Full-stack (backend-focused) ownership. Programming: Python, React (good to have: C#, Node), Agile. Flexible to learn new things.
About NEXUS SP Solutions
European tech company (Spain) in telecom/IT/cybersecurity. We’re hiring a Part-time Automation Developer (20h/week) to build and maintain scripts, integrations and CI/CD around our Odoo v18 + eCommerce stack.
What you’ll do
• Build Python automations: REST API integrations (vendors/payments), data ETL, webhooks, cron jobs.
• Maintain CI/CD (GitHub Actions) for modules and scripts; basic Docker.
• Implement backups/alerts and simple monitors (logs, retries).
• Collaborate with Full-Stack dev and UX on delivery and performance.
Requirements
• 2–5 yrs coding in Python for integrations/ETL.
• REST/JSON, OAuth2, webhooks; solid Git.
• Basic Docker + GitHub Actions (or GitLab CI).
• SQL/PostgreSQL basics; English for daily comms (Spanish/French is a plus).
• ≥ 3h overlap with CET; able to start within 15–30 days.
Nice to have
• Odoo RPC/XML-RPC, Selenium/Playwright, Linux server basics, retry/idempotency patterns.
Compensation & terms
• ₹2.5–5 LPA for 20h/week (contract/retainer).
• Long-term collaboration; IP transfer, work in our repos; PR-based workflow; CI/CD.
Process
1) 30–45 min technical call. 2) Paid mini-task (8–10h): Python micro-service calling a REST API with retries, logging, and a unit test. 3) Offer.
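As a rough sketch of the retry-and-logging pattern the mini-task calls for (the function and its parameters are illustrative, not a prescribed solution):

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("automation")

def with_retries(op, attempts=3, base_delay=0.1):
    """Run a zero-argument callable (e.g. a wrapped REST call), retrying
    with exponential backoff. On the final failed attempt the exception
    propagates so callers can alert on it."""
    for attempt in range(1, attempts + 1):
        try:
            return op()
        except Exception as exc:
            log.warning("attempt %d/%d failed: %s", attempt, attempts, exc)
            if attempt == attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))  # 0.1s, 0.2s, 0.4s, ...
```

In practice `op` would wrap something like a `requests.get(...)` call, and the base delay would be on the order of seconds.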
About NEXUS SP Solutions
European tech company (Spain) in telecom/IT/cybersecurity. We’re hiring a Full-Stack Developer experienced in Odoo v17–18, Python and JavaScript to continuously improve our ERP & eCommerce.
Responsibilities
• Build/customize Odoo modules (Sales/Inventory/Website/eCommerce).
• Integrate REST APIs & payments (Stripe/Redsys/Bizum).
• Improve performance, security and reliability.
• Collaborate with UX/UI; deliver clean front code (OWL/QWeb, HTML/CSS/JS).
• Use Git and CI/CD (GitHub Actions); write docs/tests.
Requirements
• 2–6 yrs with Python + Odoo (ORM, models, views, ACL/record rules).
• PostgreSQL, XML/QWeb/OWL, REST, Git.
• English for daily communication (Spanish/French is a plus).
• Full-time remote with 3h overlap with CET.
Compensation
• ₹5–9 LPA (≈ ₹41.7k–₹75k / month; FX-dependent ≈ €460–€940).
• Long-term contract, roadmap, IP transfer (code belongs to NEXUS), repos in our org, CI/CD.
Process
1) 30–45 min technical interview. 2) Paid task (8–12h): mini Odoo module + README + 1 test. 3) Offer.
Requirements :
- Strong communication skills are essential.
- The candidate should have 4-5 years of experience in automation and be proficient in Python and pytest (minimum 3-4 years of experience).
- They should have experience with SQL queries and JDBC connection.
- They must be capable of performing tasks with minimal supervision and taking responsibility.
- Agile working experience is crucial, and they should be available to attend and contribute to Agile ceremonies during UK/US hours.
- The candidate should proactively raise any issues or risks.
- They should have an understanding of CI/CD with Azure pipelines.
- The ability to understand flows and functionalities and identify issues through automation is important.
- A good grasp of code reusability, writing methods with fuzzy logic, and understanding OOP concepts is required.
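Code reusability in pytest suites often means parametrizing one test over many cases instead of duplicating methods; a small sketch (the `normalize_status` helper is a hypothetical example):

```python
import pytest

def normalize_status(raw: str) -> str:
    """Hypothetical reusable helper: map free-form status text to a canonical form."""
    return raw.strip().lower().replace(" ", "_")

# One parametrized test covers many inputs, instead of near-duplicate tests.
@pytest.mark.parametrize("raw, expected", [
    ("  Approved ", "approved"),
    ("In Review", "in_review"),
    ("REJECTED", "rejected"),
])
def test_normalize_status(raw, expected):
    assert normalize_status(raw) == expected
```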
Strong Software Engineering Profile
Mandatory (Experience 1): Must have 7+ years of experience using Python to design software solutions.
Mandatory (Skills 1): Strong working experience with Python (with Django framework experience) and Microservices architecture is a must.
Mandatory (Skills 2): Must have experience with event-driven architectures using Kafka
Mandatory (Skills 3): Must have experience in DevOps practices and container orchestration using Kubernetes, along with cloud platforms like AWS, GCP, or Azure
Mandatory (Company): Product companies, Experience working in fintech, banking, or product companies is a plus.
Mandatory (Education): From IIT, candidates should have done a bachelor's degree (B.Tech), a dual degree (B.Tech + M.Tech), or an Integrated MSc; from other premium institutes (NIT, MNNIT, VITS, BITS), candidates should have done a B.E./B.Tech
Preferred
Preferred (Skills 1): Experience in Task Queues like Celery and RabbitMQ is preferred.
Preferred (Skills 2): Experience with RDBMS/SQL is also preferable.
Preferred (Education): Computer science
Job Title: Full Stack Engineer (Real-Time Audio Systems) – Voice AI
Location: Remote
Experience: 4+ Years
Employment Type: Full-time
About the Role
We’re looking for an experienced engineer to lead the development of a real-time Voice AI platform. This role blends deep expertise in conversational AI, audio infrastructure, and full-stack systems, making you a key contributor to building natural, low-latency voice-driven agents for complex healthcare workflows and beyond.
You’ll work directly with the founding team to build and deploy production-grade voice AI systems.
If you love working with WebRTC, WebSockets, and streaming pipelines, this is the place to build something impactful.
Key Responsibilities
- Build and optimize voice-driven AI systems integrating ASR (speech recognition), TTS (speech synthesis), and LLM inference with WebRTC and WebSocket infrastructure.
- Orchestrate multi-turn conversations using frameworks like Pipecat with memory and context management.
- Develop scalable backends and APIs to support streaming audio pipelines, stateful agents, and secure healthcare workflows.
- Implement real-time communication features with low-latency audio streaming pipelines.
- Collaborate closely with research, engineering, and product teams to ship experiments and deploy into production rapidly.
- Monitor, optimize, and maintain deployed voice agents for high reliability, safety, and performance.
- Translate experimental AI audio models into production-ready services.
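As a toy illustration of one building block in such a pipeline, low-latency streaming systems typically slice raw audio into small fixed-size frames before sending them over a WebRTC or WebSocket transport. The sizes below assume 16 kHz, 16-bit mono PCM:

```python
def frame_audio(pcm: bytes, frame_bytes: int = 640) -> list:
    """Split raw PCM audio into fixed-size frames for low-latency streaming.

    640 bytes = 320 samples x 2 bytes = 20 ms at 16 kHz, 16-bit mono; the
    final frame may be shorter and would be padded or flushed in practice.
    """
    return [pcm[i:i + frame_bytes] for i in range(0, len(pcm), frame_bytes)]

frames = frame_audio(b"\x00" * 1300)
print(len(frames), len(frames[-1]))  # prints "3 20"
```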
Requirements
- 4+ years of software engineering experience with a focus on real-time systems, streaming, or conversational AI.
- Proven experience building and deploying voice AI, audio/video, or low-latency communication systems.
- Strong proficiency in Python (FastAPI, async frameworks, LangChain or similar).
- Working knowledge of modern front-end frameworks like Next.js (preferred).
- Hands-on experience with WebRTC, WebSockets, Redis, Kafka, Docker, and AWS.
- Exposure to healthcare tech, RCM, or regulated environments (highly valued).
Bonus Points
- Contributions to open-source audio/media projects.
- Experience with DSP, live streaming, or media infrastructure.
- Familiarity with observability tools (e.g., Grafana, Prometheus).
- Passion for reading research papers and discussing the future of voice communication.
🚀 We’re Hiring: AI Engineer (2+ Yrs, Generative AI/LLMs)
🌍 Remote | ⚡ Immediate joiner (within 15 days)
Work on cutting-edge GenAI apps, LLM fine-tuning & integrations (OpenAI, Gemini, Anthropic).
Exciting projects across industries in a service-company environment.
🔹 Skills: Python | LLMs | Fine-tuning | Vector DBs | GenAI Apps
🔸Apply: https://lnkd.in/dVQwSMBD
✨ Let’s build the future of AI together!
#AIJobs #Hiring #GenAI #LLM #Python #RemoteJobs
Position: Senior Data Engineer
Overview:
We are seeking an experienced Senior Data Engineer to design, build, and optimize scalable data pipelines and infrastructure to support cross-functional teams and next-generation data initiatives. The ideal candidate is a hands-on data expert with strong technical proficiency in Big Data technologies and a passion for developing efficient, reliable, and future-ready data systems.
Reporting: Reports to the CEO or designated Lead as assigned by management.
Employment Type: Full-time, Permanent
Location: Remote (Pan India)
Shift Timings: 2:00 PM – 11:00 PM IST
Key Responsibilities:
- Design and develop scalable data pipeline architectures for data extraction, transformation, and loading (ETL) using modern Big Data frameworks.
- Identify and implement process improvements such as automation, optimization, and infrastructure re-design for scalability and performance.
- Collaborate closely with Engineering, Product, Data Science, and Design teams to resolve data-related challenges and meet infrastructure needs.
- Partner with machine learning and analytics experts to enhance system accuracy, functionality, and innovation.
- Maintain and extend robust data workflows and ensure consistent delivery across multiple products and systems.
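A pure-Python toy version of such an extract-transform-load step (a real pipeline here would run on Spark/Databricks, and the column names are made up for illustration):

```python
import csv
import io

def etl(csv_text: str) -> list:
    """Extract rows from CSV, transform/validate them, return load-ready dicts."""
    loaded = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        try:
            row["amount"] = float(row["amount"])  # transform: cast to numeric
        except (KeyError, ValueError):
            continue  # validation: drop rows that cannot be cast
        loaded.append(row)
    return loaded

raw = "id,amount\n1,10.5\n2,oops\n3,4"
print(etl(raw))  # the row with amount "oops" is dropped
```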
Required Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 10+ years of hands-on experience in Data Engineering.
- 5+ years of recent experience with Apache Spark, with a strong grasp of distributed systems and Big Data fundamentals.
- Proficiency in Scala, Python, Java, or similar languages, with the ability to work across multiple programming environments.
- Strong SQL expertise and experience working with relational databases such as PostgreSQL or MySQL.
- Proven experience with Databricks and cloud-based data ecosystems.
- Familiarity with diverse data formats such as Delta Tables, Parquet, CSV, and JSON.
- Skilled in Linux environments and shell scripting for automation and system tasks.
- Experience working within Agile teams.
- Knowledge of Machine Learning concepts is an added advantage.
- Demonstrated ability to work independently and deliver efficient, stable, and reliable software solutions.
- Excellent communication and collaboration skills in English.
About the Organization:
We are a leading B2B data and intelligence platform specializing in high-accuracy contact and company data to empower revenue teams. Our technology combines human verification and automation to ensure exceptional data quality and scalability, helping businesses make informed, data-driven decisions.
What We Offer:
Our workplace embraces diversity, inclusion, and continuous learning. With a fast-paced and evolving environment, we provide opportunities for growth through competitive benefits including:
- Paid Holidays and Leaves
- Performance Bonuses and Incentives
- Comprehensive Medical Policy
- Company-Sponsored Training Programs
We are an Equal Opportunity Employer, committed to maintaining a workplace free from discrimination and harassment. All employment decisions are made based on merit, competence, and business needs.
Responsibilities
Develop and maintain web and backend components using Python, Node.js, and Zoho tools
Design and implement custom workflows and automations in Zoho
Perform code reviews to maintain quality standards and best practices
Debug and resolve technical issues promptly
Collaborate with teams to gather and analyze requirements for effective solutions
Write clean, maintainable, and well-documented code
Manage and optimize databases to support changing business needs
Contribute individually while mentoring and supporting team members
Adapt quickly to a fast-paced environment and meet expectations within the first month
Selection Process
1. HR Screening: Review of qualifications and experience
2. Online Technical Assessment: Test coding and problem-solving skills
3. Technical Interview: Assess expertise in web development, Python, Node.js, APIs, and Zoho
4. Leadership Evaluation: Evaluate team collaboration and leadership abilities
5. Management Interview: Discuss cultural fit and career opportunities
6. Offer Discussion: Finalize compensation and role specifics
Experience Required
2-4 years of relevant experience as a Zoho Developer
Proven ability to work as a self-starter and contribute individually
Strong technical and interpersonal skills to support team members effectively
About Us:
MyOperator and Heyo are India’s leading conversational platforms, empowering 40,000+ businesses with Call and WhatsApp-based engagement. We’re a product-led SaaS company scaling rapidly, and we’re looking for a skilled Software Developer to help build the next generation of scalable backend systems.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI
- Design and maintain a scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT and Gemini)
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one
Job Title : Perl Developer
Experience : 6+ Years
Engagement Type : C2C (Contract)
Location : Remote
Shift Timing : General Shift
Job Summary :
We are seeking an experienced Perl Developer with strong scripting and database expertise to support an application modernization initiative.
The role involves code conversion for compatibility between Sybase and MS SQL, ensuring performance, reliability, and maintainability of mission-critical systems.
You will work closely with the engineering team to enhance, migrate, and optimize codebases written primarily in Perl, with partial transitions toward Python for long-term sustainability.
Mandatory Skills :
Perl, Python, T-SQL, SQL Server, ADO, Git, Release Management, Monitoring Tools, Automation Tools, CI/CD, Sybase-to-MSSQL Code Conversion
Key Responsibilities :
- Analyze and convert existing application code from Sybase to MS SQL for compatibility and optimization.
- Maintain and enhance existing Perl scripts and applications.
- Where feasible, refactor or rewrite legacy components into Python for improved scalability.
- Collaborate with development and release teams to ensure seamless integration and deployment.
- Follow established Git/ADO version control and release management practices.
- (Optional) Contribute to monitoring, alerting, and automation improvements.
Required Skills :
- Strong Perl development experience (primary requirement).
- Proficiency in Python for code conversion and sustainability initiatives.
- Hands-on experience with T-SQL / SQL Server for database interaction and optimization.
- Familiarity with ADO/Git and standard release management workflows.
Nice to Have :
- Experience with monitoring and alerting tools.
- Familiarity with automation tools and CI/CD pipelines.
Role: Perl Developer
Location: Remote
Experience: 6–8 years
Shift: General
Job Description
Primary Skills (Must Have):
- Strong Perl development skills.
- Good knowledge of Python and T-SQL / SQL Server to create compatible code.
- Hands-on experience with ADO, Git, and release management practices.
Secondary Skills (Good to Have):
- Familiarity with monitoring/alerting tools.
- Exposure to automation tools.
Day-to-Day Responsibilities
- Perform application code conversion for compatibility between Sybase and MS SQL.
- Work on existing Perl-based codebase, ensuring maintainability and compatibility.
- Convert code into Python where feasible (as part of the migration strategy).
- Where Python conversion is not feasible, create compatible code in Perl.
- Collaborate with the team on release management and version control (Git).
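To give a flavour of the conversion work, many Perl idioms map directly onto Python's `re` module; the substitutions below are generic illustrations, not code from the actual codebase:

```python
import re

# Perl:  $line =~ s/\s+$//;          # strip trailing whitespace
def strip_trailing_ws(line: str) -> str:
    return re.sub(r"\s+$", "", line)

# Perl:  my @ids = $text =~ /id=(\d+)/g;   # collect all matches
def extract_ids(text: str) -> list:
    return re.findall(r"id=(\d+)", text)
```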
Lead technical Consultant
Experience: 9-15 Years
This is a Backend-heavy Polyglot developer role - 80% Backend 20% Frontend
Backend
- 1st Primary Language - Java, Python, Go, RoR, or Rust
- 2nd Primary Language - one of the above or Node
The candidate should be experienced in at least 2 backend tech stacks.
Frontend
- React or Angular
- HTML, CSS
The interview process would be quite complex and require depth of experience. The candidate should be hands-on in backend and frontend development (80-20)
The candidate should have experience with Unit testing, CI/CD, devops etc.
Good communication skills are a must-have.
Senior Technical Consultant (Polyglot)
Experience- 5-9 Years
This is a Backend-heavy Polyglot developer role - 80% Backend 20% Frontend
Backend
- 1st Primary Language - Java, Python, Go, RoR, or Rust
- 2nd Primary Language - one of the above or Node
The candidate should be experienced in at least 2 backend tech stacks.
Frontend
- React or Angular
- HTML, CSS
The interview process would be quite complex and require depth of experience. The candidate should be hands-on in backend and frontend development (80-20)
The candidate should have experience with Unit testing, CI/CD, devops etc.
Good communication skills are a must-have.
Strong Software Engineering Profile
Mandatory (Experience 1): Must have 5+ years of experience using Python to design software solutions.
Mandatory (Skills 1): Strong working experience with Python (with Django framework experience) and Microservices architecture is a must.
Mandatory (Skills 2): Must have experience with event-driven architectures using Kafka
Mandatory (Skills 3): Must have experience in DevOps practices and container orchestration using Kubernetes, along with cloud platforms like AWS, GCP, or Azure
Mandatory (Company): Product companies, Experience working in fintech, banking, or product companies is a plus.
Mandatory (Education): From IIT, candidates should have done a bachelor's degree (B.Tech), a dual degree (B.Tech + M.Tech), or an Integrated MSc; from other premium institutes (NIT, MNNIT, VITS, BITS), candidates should have done a B.E./B.Tech
Preferred
Note: The shift hours for this job are from 4PM- 1AM IST
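The event-driven, Kafka-based architecture named in the skills list can be illustrated with a minimal in-memory stand-in; the topic name and payload below are hypothetical, and real Kafka adds persistence, partitions, and consumer groups on top of this publish/subscribe shape:

```python
from collections import defaultdict

class MiniBus:
    """Toy topic-based event bus: the publish/subscribe pattern Kafka
    provides at scale, minus persistence and consumer groups."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Fan the event out to every handler registered on the topic.
        for handler in self.subscribers[topic]:
            handler(event)

bus = MiniBus()
audit_log = []
bus.subscribe("payments", lambda e: audit_log.append(e["id"]))
bus.publish("payments", {"id": "evt-1", "amount": 125.0})
# audit_log is now ["evt-1"]
```

The point of the pattern is that the producer never knows who consumes; swapping the in-memory bus for a Kafka producer/consumer pair changes the transport, not the design.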
About The Role:
We are seeking a highly skilled and experienced QA Automation Engineer with over 5 years of experience in both automation and manual testing. The ideal candidate will possess strong expertise in Python, Playwright, PyTest, Pywinauto, and Java with Selenium, API testing with Rest Assured, and SQL. Experience in the mortgage domain, Azure DevOps, and desktop & web application testing is essential. The role requires working in evening shift timings (4 PM – 1 AM IST) to collaborate with global teams.
Key Responsibilities:
- Design and develop automation test scripts using Python, Playwright, Pywinauto, and PyTest.
- Design, develop, and maintain automation frameworks for desktop applications using Java with WinAppDriver and Selenium, and Python with Pywinauto.
- Understand business requirements in the mortgage domain and prepare detailed test plans, test cases, and test scenarios.
- Define automation strategy and identify test cases to automate for web, desktop, and API testing.
- Perform manual testing for desktop, web, and API applications to validate functional and non-functional requirements.
- Create and execute API automation scripts using Rest Assured for RESTful services validation.
- Perform SQL queries to validate backend data and ensure data integrity in mortgage domain applications.
- Use Azure DevOps for test case management, defect tracking, CI/CD pipeline execution, and test reporting.
- Collaborate with DevOps and development teams to integrate automated tests within CI/CD pipelines.
- Use Git for version control and collaborative development.
- Manage test automation projects and dependencies using Maven.
- Work closely with developers, BAs, and product owners to clarify requirements and provide early feedback.
- Report and track defects with clear reproduction steps, logs, and screenshots until closure.
- Apply mortgage domain knowledge to test scenarios for loan origination, servicing, payments, compliance, and default modules.
- Ensure adherence to regulatory and compliance standards in mortgage-related applications.
- Perform cross-browser testing and desktop compatibility testing for client-based applications.
- Drive defect prevention by identifying gaps in requirements and suggesting improvements.
- Ensure best practices in test automation - modularization, reusability, and maintainability.
- Provide daily/weekly status reports on testing progress, defect metrics, and automation coverage.
- Maintain documentation for automation frameworks, test cases, and domain-specific scenarios.
- Work within Agile/Scrum development environments.
- Thrive in a fast-paced environment and consistently meet deadlines with minimal supervision.
- Act as a strong team player, managing multiple priorities in a deadline-driven environment.
Key requirements:
- 4-8 years of experience in Quality Assurance (manual and automation).
- Strong proficiency in Python, Pywinauto, PyTest, Playwright
- Hands-on experience with Rest Assured for API automation.
- Expertise in SQL for backend testing and data validation.
- Experience in mortgage domain applications (loan origination, servicing, compliance).
- Knowledge of Azure DevOps for CI/CD, defect tracking, and test case management.
- Proficiency in testing desktop and web applications.
- Excellent collaboration and communication skills to work with cross-functional global teams.
- Willingness to work in evening shift timings (4 PM – 1 AM IST).
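One responsibility above, SQL validation of backend data, can be sketched as an assert-style check of the kind PyTest runs; an in-memory SQLite database stands in for the real system, and the `loans` table and balance rule are hypothetical, not the actual mortgage schema:

```python
import sqlite3

def loan_balances_non_negative(conn):
    """Data-integrity check: no loan row may carry a negative balance."""
    cur = conn.execute("SELECT COUNT(*) FROM loans WHERE balance < 0")
    return cur.fetchone()[0] == 0

# In-memory SQLite stands in for the real database under test.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO loans (balance) VALUES (?)", [(1000.0,), (0.0,)])
assert loan_balances_non_negative(conn)
```

In a real suite the connection would come from a PyTest fixture pointed at a test environment, and each business rule would be its own test function.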
Job Title : Senior Technical Consultant (Polyglot)
Experience Required : 5 to 10 Years
Location : Bengaluru / Chennai (Remote Available)
Positions : 2
Notice Period : Immediate to 1 Month
Role Overview :
We seek passionate polyglot developers (Java/Python/Go) who love solving complex problems and building elegant digital products.
You’ll work closely with clients and teams, applying Agile practices to deliver impactful digital experiences.
Mandatory Skills :
Strong in Java/Python/Go (any 2), with frontend experience in React/Angular, plus knowledge of HTML, CSS, CI/CD, Unit Testing, and DevOps.
Key Skills & Requirements :
Backend (80% Focus) :
- Strong expertise in Java, Python, or Go (at least 2 backend stacks required).
- Additional exposure to Node.js, Ruby on Rails, or Rust is a plus.
- Hands-on experience in building scalable, high-performance backend systems.
Frontend (20% Focus) :
- Proficiency in React or Angular
- Solid knowledge of HTML, CSS, JavaScript
Other Must-Haves :
- Strong understanding of unit testing, CI/CD pipelines, and DevOps practices.
- Ability to write clean, testable, and maintainable code.
- Excellent communication and client-facing skills.
Roles & Responsibilities :
- Tackle technically challenging and mission-critical problems.
- Collaborate with teams to design and implement pragmatic solutions.
- Build prototypes and showcase products to clients.
- Contribute to system design and architecture discussions.
- Engage with the broader tech community through talks and conferences.
Interview Process :
- Technical Round (Online Assessment)
- Technical Round with Client (Code Pairing)
- System Design & Architecture (Build from Scratch)
✅ This is a backend-heavy polyglot developer role (80% backend, 20% frontend).
✅ The right candidate is hands-on, has multi-stack expertise, and thrives in solving complex technical challenges.
🌍 We’re Hiring: Senior Field AI Engineer | Remote | Full-time
Are you passionate about pioneering enterprise AI solutions and shaping the future of agentic AI?
Do you thrive in strategic technical leadership roles where you bridge advanced AI engineering with enterprise business impact?
We’re looking for a Senior Field AI Engineer to serve as the technical architect and trusted advisor for enterprise AI initiatives. You’ll translate ambitious business visions into production-ready applied AI systems, implementing agentic AI solutions for large enterprises.
What You’ll Do:
🔹 Design and deliver custom agentic AI solutions for mid-to-large enterprises
🔹 Build and integrate intelligent agent systems using frameworks like LangChain, LangGraph, CrewAI
🔹 Develop advanced RAG pipelines and production-grade LLM solutions
🔹 Serve as the primary technical expert for enterprise accounts and build long-term customer relationships
🔹 Collaborate with Solutions Architects, Engineering, and Product teams to drive innovation
🔹 Represent technical capabilities at industry conferences and client reviews
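The RAG pipelines mentioned above rest on one core step: retrieving the most similar documents to a query by embedding similarity. A toy sketch, with hand-written vectors standing in for a real embedding model and vector database (Pinecone, Weaviate, FAISS):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical pre-computed document embeddings; a real pipeline would
# get these from an embedding model and store them in a vector database.
docs = {
    "invoice policy": [0.9, 0.1, 0.0],
    "travel policy":  [0.1, 0.9, 0.0],
}

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

retrieve([0.8, 0.2, 0.0])  # returns ["invoice policy"]
```

The retrieved text would then be stuffed into the LLM prompt as grounding context; everything past this step is prompt construction and generation.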
What We’re Looking For:
✔️ 7+ years of experience in AI/ML engineering with production deployment expertise
✔️ Deep expertise in agentic AI frameworks and multi-agent system design
✔️ Advanced Python programming and scalable backend service development
✔️ Hands-on experience with LLM platforms (GPT, Gemini, Claude) and prompt engineering
✔️ Experience with vector databases (Pinecone, Weaviate, FAISS) and modern ML infrastructure
✔️ Cloud platform expertise (AWS, Azure, GCP) and MLOps/CI-CD knowledge
✔️ Strategic thinker able to balance technical vision with hands-on delivery in fast-paced environments
✨ Why Join Us:
- Drive enterprise AI transformation for global clients
- Work with a category-defining AI platform bridging agents and experts
- High-impact, customer-facing role with strategic influence
- Competitive benefits: medical, vision, dental insurance, 401(k)
🌍 We’re Hiring: Customer Facing Data Scientist (CFDS) | Remote | Full-time
Are you passionate about applied data science and enjoy partnering directly with enterprise customers to deliver measurable business impact?
Do you thrive in fast-paced, cross-functional environments and want to be the face of a cutting-edge AI platform?
We’re looking for a Customer Facing Data Scientist to design, develop, and deploy machine learning applications with our clients, helping them unlock the value of their data while building strong, trusted relationships.
What You’ll Do:
🔹 Collaborate directly with customers to understand their business challenges and design ML solutions
🔹 Manage end-to-end data science projects with a customer success mindset
🔹 Build long-term trusted relationships with enterprise stakeholders
🔹 Work across industries: Banking, Finance, Health, Retail, E-commerce, Oil & Gas, Marketing
🔹 Evangelize the platform, teach, enable, and support customers in building AI solutions
🔹 Collaborate internally with Data Science, Engineering, and Product teams to deliver robust solutions
What We’re Looking For:
✔️ 5–10 years of experience solving complex data problems using Machine Learning
✔️ Expert in ML modeling and Python coding
✔️ Excellent customer-facing communication and presentation skills
✔️ Experience in AI services or startup environments preferred
✔️ Domain expertise in Finance is a plus
✔️ Applied experience with Generative AI / LLM-based solutions is a plus
✨ Why Join Us:
- High-impact opportunity to shape a new business vertical
- Work with next-gen AI technology to solve real enterprise problems
- Backed by top-tier investors with experienced leadership
- Recognized as a Top 5 Data Science & ML platform by G2
- Comprehensive benefits: medical, vision, dental insurance, 401(k)
🚀 We’re Hiring: Senior AI Engineer (Customer Facing) | Remote
Are you passionate about building and deploying enterprise-grade AI solutions?
Do you enjoy combining deep technical expertise with customer-facing problem-solving?
We’re looking for a Senior AI Engineer to design, deliver, and integrate cutting-edge AI/LLM applications for global enterprise clients.
What You’ll Do:
🔹 Partner directly with enterprise customers to understand business requirements & deliver AI solutions
🔹 Architect and integrate intelligent agent systems (LangChain, LangGraph, CrewAI)
🔹 Build LLM pipelines with RAG and client-specific knowledge
🔹 Collaborate with internal teams to ensure seamless integration
🔹 Champion engineering best practices with production-grade Python code
What We’re Looking For:
✔️ 5+ years of hands-on experience in AI/ML engineering or backend systems
✔️ Proven track record with LLMs & intelligent agents
✔️ Strong Python and backend expertise
✔️ Experience with vector databases (Pinecone, Weaviate, FAISS)
✔️ Excellent communication & customer-facing skills
Preferred: Cloud (AWS/Azure/GCP), MLOps knowledge, and startup/AI services experience.
🌍 Remote role | High-impact opportunity | Backed by strong leadership & growth
If this sounds like you (or someone in your network), let’s connect!
Full Stack Developer Internship – (Remote)
Pay: ₹20,000 - ₹30,000/month | Duration: 3 months
We’re building Pitchline, a voice-based conversational sales AI agent: an ambitious AI-powered web app aimed at solving meaningful problems in the B2B space. It’s currently at the MVP stage and has strong early demand. I’m looking for a hands-on Full Stack Developer Intern who can work closely with me to bring this to life.
You’ll be one of the first people to touch the codebase — shaping the foundation and solving problems across AI integration, backend APIs, and a bit of frontend work.
What you'll be doing
- Build and maintain backend APIs (Python)
- Integrate AI models (OpenAI, LangChain, Pinecone/Weaviate etc.) for core workflows
- Design DB schemas and manage basic infra (Postgres)
- Support frontend development (basic UI integration in React or similar)
- Rapidly iterate on product ideas and ship working features
- Collaborate closely with me (Founder) to shape the MVP
What we're looking for
- Curiosity to learn new things. You don’t wait for someone to unblock you and take full ownership and get things done by yourself.
- Strong foundation in backend development
- Experience working with APIs, databases, and deploying backend services
- Curious about or experienced in AI/LLM tools like OpenAI APIs, LangChain, vector databases, etc.
- Comfortable working with version control and basic dev workflows
- Worked on real projects or shipped anything end-to-end (Even if it is a personal project)
Why join us?
You’ll be a core member of the team. What we’re building is one of a kind and being a part of the successful founding team will fast track your personal and professional growth.
You’ll work on a real product with potential, witnessing in real time the impact your hard work brings.
You’ll get ownership and be part of early decisions.
You'll learn how design, tech, and business come together in early-stage product building
Flexible working hours
Opportunity to convert to full-time upon successful completion of the internship.
We’re a fast paced team, working hard to deploy the MVP as soon as possible. If you're excited about AI, startup building, and getting your hands dirty with real development then our company is a great place to grow.
About the Role
NeoGenCode Technologies is looking for a Senior Technical Architect with strong expertise in enterprise architecture, cloud, data engineering, and microservices. This is a critical role demanding leadership, client engagement, and architectural ownership in designing scalable, secure enterprise systems.
Key Responsibilities
- Design scalable, secure, and high-performance enterprise software architectures.
- Architect distributed, fault-tolerant systems using microservices and event-driven patterns.
- Provide technical leadership and hands-on guidance to engineering teams.
- Collaborate with clients, understand business needs, and translate them into architectural designs.
- Evaluate, recommend, and implement modern tools, technologies, and processes.
- Drive DevOps, CI/CD best practices, and application security.
- Mentor engineers and participate in architecture reviews.
Must-Have Skills
- Architecture: Enterprise Solutions, EAI, Design Patterns, Microservices (API & Event-driven)
- Tech Stack: Java, Spring Boot, Python, Angular (recent 2+ years experience), MVC
- Cloud Platforms: AWS, Azure, or Google Cloud
- Client Handling: Strong experience with client-facing roles and delivery
- Data: Data Modeling, RDBMS & NoSQL, Data Migration/Retention Strategies
- Security: Familiarity with OWASP, PCI DSS, InfoSec principles
Good to Have
- Experience with Mobile Technologies (native, hybrid, cross-platform)
- Knowledge of tools like Enterprise Architect, TOGAF frameworks
- DevOps tools, containerization (Docker), CI/CD
- Experience in financial services / payments domain
- Familiarity with BI/Analytics, AI/ML, Predictive Analytics
- 3rd-party integrations (e.g., MuleSoft, BizTalk)
We are seeking a Lead AI Engineer to spearhead development of advanced agentic workflows and large language model (LLM) systems. The ideal candidate should bring deep expertise in agent building, LLM evaluation/tracing, and prompt operations, combined with strong deployment experience at scale.
Key Responsibilities:
- Design and build agentic workflows leveraging modern frameworks.
- Develop robust LLM evaluation, tracing, and prompt ops pipelines.
- Lead MCP (Model Context Protocol) based system integrations.
- Deploy and scale AI/ML solutions with enterprise-grade reliability.
- Collaborate with product and engineering teams to deliver high-impact solutions.
Required Skills & Experience:
- Proficiency with LangChain, LangGraph, Pydantic, Crew.ai, and MCP.
- Strong understanding of LLM architecture, behavior, and evaluation methods.
- Hands-on expertise in MLOps, DevOps, and deploying AI/ML workloads at scale.
- Experience leading AI projects from prototyping to production.
- Strong foundation in prompt engineering, observability, and agent orchestration.
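The agentic workflows described above share one shape: a planner repeatedly picks a tool until it decides to finish. A minimal rule-based sketch (the planner and tools here are stubs; frameworks like LangChain, LangGraph, or Crew.ai put an LLM in the planner's seat and add memory, tracing, and evaluation):

```python
# Toy tool registry; real agents expose search, code execution, APIs, etc.
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def plan(task, last_result):
    """Stub planner: an LLM would choose the next step from context."""
    if last_result is None:
        return ("add", (2, 3))      # first step: call a tool
    return ("finish", last_result)  # then stop with the result

def run_agent(task, max_steps=5):
    """Agent loop: plan -> act -> observe, bounded by a step budget."""
    result = None
    for _ in range(max_steps):
        action, payload = plan(task, result)
        if action == "finish":
            return payload
        result = TOOLS[action](*payload)
    raise RuntimeError("step budget exhausted")

run_agent("demo")  # returns 5
```

The step budget and the explicit observe/plan cycle are exactly where evaluation and tracing hooks attach in production systems.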
Product Manager – AI Go-to-Market (GTM)
You know that feeling when you see a product not just built, but truly adopted? That’s what this role is about.
We built something that turns the endless scroll of social video into business intelligence. The product is already strong — now it’s time to take it to market, scale adoption, and own how it reaches the world.
This isn’t another PM role. This is where you become the strategist who shapes how AI meets the market.
Who We Are
Our team is small, global, and moves fast. Not startup-fast. Not “we say we’re agile” fast. Actually fast.
We ship meaningful features in days, and now we need someone who can do the same on the market side.
The people here don’t just work with AI — they think in AI. They dream in Python. They know how to build.
What we’re missing is the person who knows how to launch, position, and scale.
What We Need
We need someone who’s lived the GTM life.
Someone who has:
- Shaped go-to-market strategies across multiple channels.
- Crafted positioning, messaging, and pricing that drove adoption.
- Partnered with sales & marketing to accelerate pipeline and conversion.
- Translated market insights into product direction.
You don’t need to be taught what adoption metrics look like. You don’t need to “grow into” GTM strategy. You already know these things so deeply that you can focus on the only thing that matters: getting AI into the hands of people who can’t live without it.
Who You Are
- Strong IT/product foundation with a track record in launching AI/tech products.
- An AI believer who sees how it will reshape industries.
- Obsessed with channels, adoption, differentiation, and growth loops.
- Someone who thrives where market execution meets product credibility.
The Reality
The work is beautifully challenging. The pace is intense in the best way. The problems are complex but worth solving. And the team? They care deeply.
If you get your energy from taking innovation to market and building adoption strategies that matter, you’ll probably fall in love with what we do here. If you prefer more structure or slower rhythms, this might not align — and that’s completely valid.
How to Apply
If you’re reading this thinking “finally, somewhere that gets it” — we’d love to see something you’ve launched. Not a resume. Not a cover letter. Show us proof of how you’ve taken a product to market.
We’re excited to see what you’ve built and have a real conversation about whether this could be magic for both of us.
About the Role
We are looking for enthusiastic LLM Interns to join our team remotely for a 3-month internship. This role is ideal for students or graduates interested in AI, Natural Language Processing (NLP), and Large Language Models (LLMs). You will gain hands-on experience working with cutting-edge AI tools, prompt engineering, and model fine-tuning. While this is an unpaid internship, interns who successfully complete the program will receive a Completion Certificate and a Letter of Recommendation.
Responsibilities
- Research and experiment with LLMs, NLP techniques, and AI frameworks.
- Design, test, and optimize prompts and workflows for different use cases.
- Assist in fine-tuning or integrating LLMs for internal projects.
- Evaluate model outputs and improve accuracy, efficiency, and reliability.
- Collaborate with developers, data scientists, and product managers to implement AI-driven features.
- Document experiments, results, and best practices.
Requirements
- Strong interest in Artificial Intelligence, NLP, and Machine Learning.
- Familiarity with Python and ML libraries (e.g., TensorFlow, PyTorch, Hugging Face Transformers).
- Basic understanding of LLM concepts such as embeddings, fine-tuning, and inference.
- Knowledge of APIs (OpenAI, Anthropic, Hugging Face, etc.) is a plus.
- Good analytical and problem-solving skills.
- Ability to work independently in a remote environment.
What You’ll Gain
- Practical exposure to state-of-the-art AI tools and LLMs.
- Mentorship from AI and software professionals.
- Completion Certificate upon successful completion.
- Letter of Recommendation based on performance.
- Experience to showcase in research projects, academic work, or future AI roles.
Internship Details
- Duration: 3 months
- Location: Remote (Work from Home)
- Stipend: Unpaid
- Perks: Completion Certificate + Letter of Recommendation
About the Role
We are looking for enthusiastic Backend Developer Interns to join our team remotely for a 3-month internship. This is an excellent opportunity to gain hands-on experience in backend development, work on real projects, and expand your technical skills. While this is an unpaid internship, interns who successfully complete the program will receive a Completion Certificate and a Letter of Recommendation.
Responsibilities
- Assist in developing and maintaining backend services and APIs.
- Work with databases (SQL/NoSQL) for data storage and retrieval.
- Collaborate with frontend developers to integrate user-facing elements with server-side logic.
- Write clean, efficient, and reusable code.
- Debug, troubleshoot, and optimize backend performance.
- Participate in code reviews and team discussions.
- Document technical processes and contributions.
Requirements
- Basic knowledge of at least one backend language/framework (Node.js, Python/Django/Flask, Java/Spring Boot, or similar).
- Understanding of RESTful APIs and web services.
- Familiarity with relational and/or NoSQL databases (MySQL, PostgreSQL, MongoDB, etc.).
- Knowledge of Git/GitHub for version control.
- Strong problem-solving and analytical skills.
- Ability to work independently in a remote environment.
- Good communication skills and eagerness to learn.
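The RESTful APIs named in the requirements boil down to mapping an HTTP method and path to a handler that returns a status and a JSON body. A framework-agnostic sketch (routes and payloads are illustrative; Flask, Django, or Spring Boot wire this dispatch up to real HTTP):

```python
import json

ROUTES = {}

def route(method, path):
    """Decorator that registers a handler for (method, path)."""
    def register(handler):
        ROUTES[(method, path)] = handler
        return handler
    return register

@route("GET", "/users")
def list_users():
    return 200, json.dumps(["alice", "bob"])

def dispatch(method, path):
    """Look up and invoke the handler, or return a 404 response."""
    handler = ROUTES.get((method, path))
    if handler is None:
        return 404, json.dumps({"error": "not found"})
    return handler()

status, body = dispatch("GET", "/users")  # status == 200
```

Every backend framework mentioned above is, at its core, this table plus request parsing, middleware, and a real server loop.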
What You’ll Gain
- Real-world experience in backend development.
- Mentorship and exposure to industry practices.
- Completion Certificate at the end of the internship.
- Letter of Recommendation based on performance.
- Opportunity to strengthen your portfolio with live projects.
Internship Details
- Duration: 3 months
- Location: Remote (Work from Home)
- Stipend: Unpaid
- Perks: Completion Certificate + Letter of Recommendation
About Unilog
Unilog is the only connected product content and eCommerce provider serving the Wholesale Distribution, Manufacturing, and Specialty Retail industries. Our flagship CX1 Platform is at the center of some of the most successful digital transformations in North America. CX1 Platform’s syndicated product content, integrated eCommerce storefront, and automated PIM tool simplify our customers' path to success in the digital marketplace.
With more than 500 customers, Unilog is uniquely positioned as the leader in eCommerce and product content for Wholesale distribution, manufacturing, and specialty retail.
About the Role
We are looking for a highly motivated Innovation Engineer to join our CTO Office and drive the exploration, prototyping, and adoption of next-generation technologies. This role offers a unique opportunity to work at the forefront of AI/ML, Generative AI (Gen AI), Large Language Models (LLMs), Vertex AI, MCP, vector databases, AI search, agentic AI, and automation.
As an Innovation Engineer, you will be responsible for identifying emerging technologies, building proof-of-concepts (PoCs), and collaborating with cross-functional teams to define the future of AI-driven solutions. Your work will directly influence the company’s technology strategy and help shape disruptive innovations.
Key Responsibilities
- Research & Implementation: Stay ahead of industry trends, evaluate emerging AI/ML technologies, and prototype novel solutions in areas like Gen AI, Vector Search, AI Agents, Vertex AI, MCP, and Automation.
- Proof-of-Concept Development: Rapidly build, test, and iterate PoCs to validate new technologies for potential business impact.
- AI/ML Engineering: Design and develop AI/ML models, AI Agents, LLMs, intelligent search capabilities leveraging Vector embeddings.
- Vector & AI Search: Explore vector databases and optimize retrieval-augmented generation (RAG) workflows.
- Automation & AI Agents: Develop autonomous AI agents and automation frameworks to enhance business processes.
- Collaboration & Thought Leadership: Work closely with software developers and product teams to integrate innovations into production-ready solutions.
- Innovation Strategy: Contribute to the technology roadmap, patents, and research papers to establish leadership in emerging domains.
Required Qualifications
- 4–10 years of experience in AI/ML, software engineering, or a related field.
- Strong hands-on expertise in Python, TensorFlow, PyTorch, LangChain, Hugging Face, OpenAI APIs, Claude, Gemini, VertexAI, MCP.
- Experience with LLMs, embeddings, AI search, vector databases (e.g., Pinecone, FAISS, Weaviate, PGVector), MCP and agentic AI (Vertex, Autogen, ADK)
- Familiarity with cloud platforms (AWS, Azure, GCP) and AI/ML infrastructure.
- Strong problem-solving skills and a passion for innovation.
- Ability to communicate complex ideas effectively and work in a fast-paced, experimental environment.
Preferred Qualifications
- Experience with multi-modal AI (text, vision, audio), reinforcement learning, or AI security.
- Knowledge of data pipelines, MLOps, and AI governance.
- Contributions to open-source AI/ML projects or published research papers.
Why Join Us?
- Work on cutting-edge AI/ML innovations with the CTO Office.
- Influence the company’s future AI strategy and shape emerging technologies.
- Competitive compensation, growth opportunities, and a culture of continuous learning.
About Our Benefits
Unilog offers a competitive total rewards package including competitive salary, multiple medical, dental, and vision plans to meet all our employees’ needs, career development, advancement opportunities, annual merit, a generous time-off policy, and a flexible work environment.
Unilog is committed to building the best team and we are committed to fair hiring practices where we hire people for their potential and advocate for diversity, equity, and inclusion. As such, we do not discriminate or make decisions based on your race, color, religion, creed, ancestry, sex, national origin, age, disability, familial status, marital status, military status, veteran status, sexual orientation, gender identity, or expression, or any other protected class.
About Unilog
Unilog is the only connected product content and eCommerce provider serving the Wholesale Distribution, Manufacturing, and Specialty Retail industries. Our flagship CX1 Platform is at the center of some of the most successful digital transformations in North America.
With more than 500 customers, Unilog is uniquely positioned as the leader in eCommerce and product content for Wholesale distribution, manufacturing, and specialty retail.
About the Role
We are seeking a skilled AI CX Automation Engineer to design, build, and optimize AI-driven workflows for customer support automation.
This role will be responsible for enabling end-to-end L1 support automation using Freshworks AI (Freddy AI) and other conversational AI platforms. The ideal candidate will have strong technical expertise in conversational AI, workflow automation, and system integrations, working closely with Knowledge Managers and Customer Support teams to maximize case deflection and resolution efficiency.
Key Responsibilities
- AI Workflow Design: Build, configure, and optimize conversational AI workflows for L1 customer query handling.
- Automation Enablement: Implement automation using Freshworks AI (Freddy AI), chatbots, and orchestration tools to reduce manual support load.
- Integration: Connect AI agents with knowledge bases, CRM, and ticketing systems to enable contextual and seamless responses.
- Conversational Design: Craft natural, intuitive conversation flows for chatbots and virtual agents to improve customer experience.
- Performance Optimization: Monitor AI agent performance, resolution rates, and continuously fine-tune workflows.
- Cross-functional Collaboration: Partner with Knowledge Managers, Product Teams, and Support to ensure workflows align with up-to-date content and customer needs.
- Scalability & Innovation: Explore emerging agentic AI capabilities and recommend enhancements to future-proof CX automation.
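A conversational AI workflow of the kind described above starts with intent routing: deciding which flow an incoming query belongs to. A toy keyword-overlap sketch (intent names and replies are invented; platforms like Freddy AI or Dialogflow use trained intent models instead of keyword matching):

```python
# Illustrative intent table; a real CX platform learns these from examples.
INTENTS = {
    "reset_password": {"keywords": {"password", "reset", "login"},
                       "reply": "Here is how to reset your password..."},
    "billing":        {"keywords": {"invoice", "billing", "charge"},
                       "reply": "Routing you to billing support..."},
}

def route_query(text):
    """Pick the intent with the largest keyword overlap, else escalate."""
    words = set(text.lower().split())
    best, best_overlap = None, 0
    for name, intent in INTENTS.items():
        overlap = len(words & intent["keywords"])
        if overlap > best_overlap:
            best, best_overlap = name, overlap
    if best is None:
        return "escalate_to_human", "Let me connect you with an agent."
    return best, INTENTS[best]["reply"]

route_query("I forgot my password and cannot login")  # routes to reset_password
```

Case deflection is then a matter of how many queries resolve inside a routed flow versus falling through to the human-escalation branch.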
Required Qualifications
- 4–10 years of experience in conversational AI, automation engineering, or customer support technology.
- Hands-on expertise with Freshworks AI (Freddy AI) or similar AI-driven CX platforms (Zendesk AI, Salesforce Einstein, Dialogflow, Rasa, etc.).
- Strong experience in workflow automation, chatbot configuration, and system integrations (APIs, Webhooks, RPA).
- Familiarity with LLMs, intent recognition, and conversational AI frameworks.
- Strong analytical skills to evaluate and optimize AI agent performance.
- Excellent problem-solving, collaboration, and communication skills.
Preferred Qualifications
- Experience with agentic AI frameworks and multi-turn conversational flows.
- Knowledge of scripting or programming languages (Python, Node.js) for custom AI integrations.
- Familiarity with vector search, RAG (Retrieval-Augmented Generation), and AI search to improve context-driven answers.
- Exposure to SaaS, product-based companies, or enterprise-scale customer support operations.
Why Join Us?
- Be at the forefront of AI-driven customer support automation.
- Directly contribute to achieving up to 60% case resolution through AI workflows.
- Collaborate with Knowledge Managers and AI engineers to build next-gen CX solutions.
- Competitive compensation, benefits, and a culture of continuous learning.
Benefits
Unilog offers a competitive total rewards package including:
- Competitive salary
- Multiple medical, dental, and vision plans
- Career development and advancement opportunities
- Annual merit increases
- Generous time-off policy
- Flexible work environment
We are committed to fair hiring practices and advocate for diversity, equity, and inclusion.
About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 50 years in the USA. Data Axle now has an established strategic global centre of excellence in Pune. This centre delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and by leveraging proprietary business & consumer databases.
Data Axle Pune is pleased to have achieved certification as a Great Place to Work!
Roles & Responsibilities:
We are looking for a Data Scientist to join the Data Science Client Services team to continue our success of identifying high quality target audiences that generate profitable marketing return for our clients. We are looking for experienced data science, machine learning and MLOps practitioners to design, build and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.
We are looking for a Senior Data Scientist who will be responsible for:
- Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
- Design or enhance ML workflows for data ingestion, model design, model inference and scoring
- Oversight on team project execution and delivery
- Establish peer review guidelines for high-quality coding to help develop junior team members’ skill sets, cross-training, and team efficiencies
- Visualize and publish model performance results and insights to internal and external audiences
Qualifications:
- Master's degree in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
- Minimum of 3.5 years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
- Proven ability to manage the output of a small team in a fast-paced environment and to lead by example in the fulfilment of client requests
- Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)
- Proficiency in Python and SQL required; PySpark/Spark experience a plus
- Ability to conduct productive peer reviews and maintain proper code structure in GitHub
- Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
- Working knowledge of modern CI/CD methods
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.
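The qualifications above call out A/B testing among the core data-science fundamentals. As a minimal, stdlib-only sketch (the function name and sample counts are illustrative, not from the posting), a two-proportion z-test for comparing conversion rates between a control and a variant can be written as:

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns (z statistic, two-sided p-value)."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    # two-sided p-value via the standard normal CDF (math.erf)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# 120/2400 conversions in control vs 150/2400 in the variant
z, p = ab_test_z(120, 2400, 150, 2400)
print(round(z, 3), round(p, 4))
```

In practice a library such as SciPy or statsmodels would be used, but the arithmetic underneath is exactly this.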
About HelloRamp.ai
HelloRamp is on a mission to revolutionize media creation for automotive and retail using AI. Our platform powers 3D/AR experiences for leading brands like Cars24, Spinny, and Samsung. We’re now building the next generation of Computer Vision + AI products, including cutting-edge NeRF pipelines and AI-driven video generation.
What You’ll Work On
- Develop and optimize Computer Vision pipelines for large-scale media creation.
- Implement NeRF-based systems for high-quality 3D reconstruction.
- Build and fine-tune AI video generation models using state-of-the-art techniques.
- Optimize AI inference for production (CUDA, TensorRT, ONNX).
- Collaborate with the engineering team to integrate AI features into scalable cloud systems.
- Research latest AI/CV advancements and bring them into production.
Skills & Experience
- Strong Python programming skills.
- Deep expertise in Computer Vision and Machine Learning.
- Hands-on with PyTorch/TensorFlow.
- Experience with NeRF frameworks (Instant-NGP, Nerfstudio, Plenoxels) and/or video synthesis models.
- Familiarity with 3D graphics concepts (meshes, point clouds, depth maps).
- GPU programming and optimization skills.
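The 3D graphics concepts listed above (depth maps, point clouds) connect through the pinhole camera model: each pixel (u, v) with depth d back-projects to X = (u - cx)·d/fx, Y = (v - cy)·d/fy, Z = d. A minimal stdlib-only sketch, with made-up intrinsics for illustration:

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (list of rows) to a list of (X, Y, Z) points
    using the pinhole camera model; zero depth marks invalid pixels."""
    points = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d > 0:  # skip invalid (zero-depth) pixels
                points.append(((u - cx) * d / fx, (v - cy) * d / fy, d))
    return points

depth = [[0.0, 2.0],
         [2.0, 4.0]]  # 2x2 toy depth map in metres
pts = depth_to_points(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(pts)
```

Real pipelines vectorize this with NumPy or do it on the GPU, but the per-pixel math is the same.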
Nice to Have
- Knowledge of Three.js or WebGL for rendering AI outputs on the web.
- Familiarity with FFmpeg and video processing pipelines.
- Experience in cloud-based GPU environments (AWS/GCP).
Why Join Us?
- Work on cutting-edge AI and Computer Vision projects with global impact.
- Join a small, high-ownership team where your work matters.
- Opportunity to experiment, publish, and contribute to open-source.
- Competitive pay and flexible work setup.
About Us
MatchMove is a leading embedded finance platform that empowers businesses to embed financial services into their applications. We provide innovative solutions across payments, banking-as-a-service, and spend/send management, enabling our clients to drive growth and enhance customer experiences.
Are You The One?
As a Technical Lead Engineer - Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business.
You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.
You will contribute to
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using Generative AI tools to enhance developer productivity — including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.
Responsibilities
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3, and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.
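One concrete piece of the partitioning work above is laying out Hive-style partition keys for tables in S3 so that Glue and Athena can prune partitions efficiently. As a hypothetical sketch (the prefix, table name, and partition columns are invented for illustration):

```python
from datetime import datetime, timezone

def partition_key(prefix, table, event_time, region):
    """Build a Hive-style partitioned S3 key:
    <prefix>/<table>/year=YYYY/month=MM/day=DD/region=<region>/"""
    t = event_time.astimezone(timezone.utc)  # normalize to UTC before bucketing
    return (f"{prefix}/{table}/"
            f"year={t:%Y}/month={t:%m}/day={t:%d}/region={region}/")

key = partition_key("s3://lake/curated", "payments",
                    datetime(2024, 3, 7, 12, tzinfo=timezone.utc), "sg")
print(key)
```

With open table formats like Iceberg the physical layout is managed by the table's partition spec rather than hand-built paths, but the same columns (date, region) typically drive the spec.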
Requirements
- At least 6 years of experience in data engineering.
- Deep hands-on experience with AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains — with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.
Brownie Points
- Experience working in a PCI DSS or any other central bank regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data as a product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.
MatchMove Culture:
- We cultivate a dynamic and innovative culture that fuels growth, creativity, and collaboration. Our fast-paced fintech environment thrives on adaptability, agility, and open communication.
- We focus on employee development, supporting continuous learning and growth through training programs, learning on the job and mentorship.
- We encourage speaking up, sharing ideas, and taking ownership. Embracing diversity, our team spans across Asia, fostering a rich exchange of perspectives and experiences.
- Together, we harness the power of fintech and e-commerce to make a meaningful impact on people's lives.
Grow with us and shape the future of fintech and e-commerce. Join us and be part of something bigger!