
AI Engineer (1–8 years)
Location: Bellandur, Outer Ring Road, Bengaluru (Work from Office only)
Organization Size: 20 members across functions, and growing
Reports to: Head of Engineering
GetSetYo is a Bangalore-based, early-stage travel-tech startup built by internet industry veterans (ex-MakeMyTrip, Flipkart, Ola, PhonePe, Zynga, MagicPin, etc.) and alumni of premier academic institutes (IIT Delhi, IIT BHU, ISB, DCE, NIT Surathkal, etc.). We are funded by founders of multiple unicorns (MakeMyTrip, Zomato, Groww, Udaan, MamaEarth, etc.), and we're growing fast. Look us up here: https://www.getsetyo.com/about
We’re building something exciting in the travel tech space and looking for a senior AI Engineer to join our core engineering team in Bangalore.
Who Are You:
- 1–8 years of total software engineering experience, including at least 1 year building and shipping AI/ML or LLM-powered products in production
- Engineering degree from a top-ranked college
- Strong engineering foundation in Python or Java, with the ability to build reliable backend services, APIs, evaluation pipelines, and developer tooling around AI systems
- Hands-on experience with LLM application patterns such as RAG, tool/function calling, structured output generation, vector search, reranking, and agentic workflows
- Familiarity with agent frameworks and orchestration patterns, including multi-step workflows, planner/executor patterns, tool routing, and guardrails
- Working knowledge of MCP (Model Context Protocol) or similar patterns for connecting models to internal tools, data sources, and external systems
- Strong understanding of context engineering, prompt design, and how to manage instructions, conversation state, tools, memory, and retrieved context for consistent model behavior
- Experience with evaluation and observability for AI systems: offline evals, online metrics, regression testing, trace inspection, cost/latency monitoring, and failure analysis
- Comfortable working in a fast-paced startup where you can own problem statements end to end — from prototype to production rollout
- Must have experience using AI-native developer tools (e.g., Claude Code, coding agents, AI-assisted workflows) to accelerate delivery
What You’ll Do
- Build and own production-grade AI features across the stack, from experimentation and prototyping to backend integration, deployment, monitoring, and iterative improvement
- Design and implement agentic workflows for real user problems — combining LLM reasoning, retrieval, tool use, business rules, and backend APIs into reliable multi-step systems
- Build and optimize RAG and search systems: document ingestion, chunking strategies, embedding pipelines, vector indexes, hybrid retrieval, reranking, and citation/grounding flows
- Integrate models with internal and external systems through tool calling, APIs, and where relevant MCP-compatible interfaces, so models can safely access the right context and take useful actions
- Drive context engineering for AI products: decide what memory, instructions, retrieved context, tool outputs, and interaction history should be passed to the model at each step for maximum quality and efficiency
- Build evaluation systems for prompts, agents, and retrieval quality — including benchmark datasets, golden test cases, automated regression checks, and human-in-the-loop review workflows
- Establish observability and debugging for AI pipelines: traces, tool execution logs, latency/cost tracking, hallucination analysis, and failure-mode investigation
- Help define engineering standards for AI systems across security, guardrails, versioning, rollback, experimentation, and cost-performance tradeoffs
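To illustrate the RAG building blocks listed above (chunking, embeddings, vector search), here is a minimal plain-Python sketch. The function names, chunk sizes, and toy cosine-similarity retrieval are illustrative only, not our production stack; a real pipeline would use a proper embedding model and a vector index.

```python
import math

def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping character windows (a simple chunking strategy)."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec, indexed, k=2):
    """indexed: list of (chunk, embedding) pairs; return the k most similar chunks."""
    ranked = sorted(indexed, key=lambda pair: cosine(query_vec, pair[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]
```

In practice the embedding step sits between `chunk_text` and `top_k`, and hybrid retrieval would blend this similarity score with keyword scores before reranking.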
What We Offer:
- AI Impact from Day 1: Lead the development of our core ML capabilities
- Fast Iteration: Weekly releases and direct user feedback
- Collaborative Culture: Flat structure and transparent communication
- Vibrant Office: In-person energy in Bangalore HQ
- Perks: Employee travel discounts and exclusive deals.

About GetSetYo Technology Labs Private Limited
Job Title: AI Engineer
Location: Bengaluru
Experience: 3 Years
Working Days: 5 Days
About the Role
We’re reimagining how enterprises interact with documents and workflows—starting with BFSI and healthcare. Our AI-first platforms are transforming credit decisioning, document intelligence, and underwriting at scale. The focus is on Intelligent Document Processing (IDP), GenAI-powered analysis, and human-in-the-loop (HITL) automation to accelerate outcomes across lending, insurance, and compliance workflows.
As an AI Engineer, you’ll be part of a high-caliber engineering team building next-gen AI systems that:
- Power robust APIs and platforms used by underwriters, credit analysts, and financial institutions.
- Build and integrate GenAI agents.
- Enable “human-in-the-loop” workflows for high-assurance decisions in real-world conditions.
Key Responsibilities
- Build and optimize ML/DL models for document understanding, classification, and summarization.
- Apply LLMs and RAG techniques for validation, search, and question-answering tasks.
- Design and maintain data pipelines for structured and unstructured inputs (PDFs, OCR text, JSON, etc.).
- Package and deploy models as REST APIs or microservices in production environments.
- Collaborate with engineering teams to integrate models into existing products and workflows.
- Continuously monitor and retrain models to ensure reliability and performance.
- Stay updated on emerging AI frameworks, architectures, and open-source tools; propose improvements to internal systems.
Required Skills & Experience
- 2–5 years of hands-on experience in AI/ML model development, fine-tuning, and building ML solutions.
- Strong Python proficiency with libraries such as NumPy, Pandas, scikit-learn, PyTorch, or TensorFlow.
- Solid understanding of transformers, embeddings, and NLP pipelines.
- Experience working with LLMs (OpenAI, Claude, Gemini, etc.) and frameworks like LangChain.
- Exposure to OCR, document parsing, and unstructured text analytics.
- Familiarity with model serving, APIs, and microservice architectures (FastAPI, Flask).
- Working knowledge of Docker, cloud environments (AWS/GCP/Azure), and CI/CD pipelines.
- Strong grasp of data preprocessing, evaluation metrics, and model validation workflows.
- Excellent problem-solving ability, structured thinking, and clean, production-ready coding practices.
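As a small illustration of the evaluation-metrics expectation above, here is a hedged sketch of single-class precision, recall, and F1 in plain Python; the `positive` label name is a made-up example, and a real validation workflow would compute these per class over a held-out set.

```python
def precision_recall_f1(y_true, y_pred, positive="approve"):
    """Precision/recall/F1 for one positive class, e.g. in a document-classification eval."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive and t == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```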
Job Overview
Architect and build scalable, high-performance backend systems while working on mission-critical platforms that process real-time market data and portfolio analytics. The role also involves leveraging Generative AI capabilities to enhance data intelligence, automation, and user-facing features, while ensuring regulatory compliance and secure financial transactions.
Key Responsibilities
- Design, develop, and maintain scalable backend services and APIs using NodeJS and Python
- Build event-driven architectures using RabbitMQ and Kafka for real-time data processing
- Develop and manage data pipelines integrating PostgreSQL and BigQuery for analytics and warehousing
- Integrate and deploy Generative AI models (LLMs, embeddings, AI APIs) into backend systems for automation, insights, and intelligent workflows
- Design AI-powered features such as recommendation systems, document processing, or conversational interfaces
- Ensure system reliability, security, and low-latency performance for mission-critical systems
- Lead technical design discussions, conduct code reviews, and mentor junior engineers
- Optimize database queries, implement caching strategies, and improve overall system performance
- Collaborate with cross-functional teams to deliver end-to-end product features
- Implement monitoring, logging, and observability solutions
Required Skills and Qualifications
- 2+ years of professional backend development experience
- Strong expertise in NodeJS and Python for production-grade applications
- Proven experience building RESTful APIs and microservices architectures
- Experience working with Generative AI frameworks/APIs (OpenAI, LangChain, vector databases, prompt engineering)
- Understanding of integrating LLMs into production systems (RAG, embeddings, fine-tuning basics)
- Strong proficiency in PostgreSQL, including query optimization and schema design
- Hands-on experience with RabbitMQ and Kafka
- Experience with BigQuery or similar data warehousing solutions
- Solid understanding of distributed systems, scalability patterns, and high-traffic applications
- Strong knowledge of authentication, authorization, and security best practices
- Experience with Git, CI/CD pipelines, and modern development workflows
- Excellent problem-solving and debugging skills
- Exposure to fintech or financial services, cloud platforms (GCP/AWS/Azure), Docker/Kubernetes, caching tools (Redis/Memcached), and regulatory requirements (KYC, compliance, data privacy) is a plus
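To sketch the caching-strategies bullet above, here is a minimal in-process TTL cache in Python that mirrors the Redis `SET key value EX ttl` pattern. This is illustrative only; a production system would use Redis or Memcached itself, and the injectable `clock` exists purely to make the expiry logic testable.

```python
import time

class TTLCache:
    """Minimal in-process cache with per-key expiry (Redis-style SET ... EX semantics)."""

    def __init__(self, clock=time.monotonic):
        self._store = {}
        self._clock = clock

    def set(self, key, value, ttl_seconds):
        # Store the value together with its absolute expiry time.
        self._store[key] = (value, self._clock() + ttl_seconds)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if self._clock() >= expires_at:
            del self._store[key]  # lazy eviction on read
            return default
        return value
```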
Apply directly at: https://wohlig.keka.com/careers/jobdetails/136351
Job Title: GenAI Intern (Python) - 3 Months Internship (Unpaid)
Location: Ahmedabad (On-Site)
Duration: 3 Months
Stipend: Unpaid Internship
Company: Softcolon Technologies
About the Internship:
Softcolon Technologies is seeking a dedicated GenAI Intern who is eager to delve into real-world AI applications. This internship provides hands-on experience in Generative AI development, focusing on RAG systems and AI Agents. It is an ideal opportunity for individuals looking to enhance their skills in Python-based AI development through practical project involvement.
Eligibility:
- Freshers or currently pursuing BE (IT/CE) or related field
- Strong interest in Generative AI and real-world AI product development
Required Skills (Must Have):
- Basic knowledge of Python
- Basic understanding of Python web frameworks such as FastAPI and Django
- Familiarity with APIs and JSON
- Submission of resume, GitHub Profile/Project Portfolio, and any AI/Python project links
What You Will Learn (Internship Goals):
You will gain hands-on experience in:
- Fundamentals of Generative AI (GenAI)
- Building RAG (Retrieval-Augmented Generation) applications
- Working with Vector Databases and embeddings
- Creating AI Agents using Python
- Integrating LLMs such as OpenAI (GPT models), Claude, Gemini
- Prompt Engineering + AI workflow automation
- Building production-ready APIs using FastAPI
Responsibilities:
- Assist in developing GenAI-based applications using Python
- Support RAG pipeline implementation (embedding + search + response)
- Work on API integrations with OpenAI/Claude/Gemini
- Assist in building backend services using FastAPI
- Maintain project documentation and GitHub updates
- Collaborate with team members for tasks and daily progress updates
Selection Process:
- Resume + GitHub portfolio screening
- Short technical discussion (Python + basics of APIs)
- Final selection by the team
Why Join Us?
- Practical experience in GenAI through real projects
- Mentorship from experienced developers
- Opportunity to work on portfolio-level projects
- Certificate + recommendation (based on performance)
- Potential for a paid role post-internship (based on performance)
How to Apply:
Share your resume and GitHub portfolio link via:
We are seeking a highly skilled Senior Backend Developer with deep expertise in Python and FastAPI to join our team. This role focuses on building high-performance, scalable backend services capable of handling high request volumes while integrating advanced LLM technologies.
The ideal candidate will design robust distributed systems, implement efficient data storage solutions, and ensure enterprise-grade security within an Azure-based infrastructure. This is a great opportunity to work on AI/ML integrations and mission-critical applications requiring high performance and reliability.
Key Responsibilities:
Backend Development
- Design and maintain high-performance backend services using Python and FastAPI
- Implement advanced FastAPI features such as dependency injection, middleware, and async programming
- Write comprehensive unit tests using pytest
- Design and maintain Pydantic schemas
High-Concurrency Systems
- Implement asynchronous code for high-volume request processing
- Apply concurrency patterns and atomic operations to ensure efficient system performance
Data & Storage
- Optimize MongoDB operations
- Implement Redis caching strategies (TTL, performance tuning, caching patterns)
Distributed Systems
- Implement rate limiting, retry logic, failover mechanisms, and region routing
- Build microservices and event-driven architectures
- Work with EventHub, Blob Storage, and Databricks
AI/ML Integration
- Integrate OpenAI API, Gemini API, and Claude API
- Manage LLM integrations using LiteLLM
- Optimize AI service usage within the Azure ecosystem
Security
- Implement JWT authentication
- Manage API keys and encryption protocols
- Implement PII masking and data security mechanisms
Collaboration
- Work with cross-functional teams on architecture and system design
- Contribute to engineering best practices and technical improvements
- Mentor junior developers where required
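The retry-logic responsibility above can be sketched as exponential backoff with full jitter. This is an illustrative snippet, not the team's actual implementation; the `sleep` and `rng` parameters are injected only so the backoff schedule can be tested deterministically.

```python
import random
import time

def retry(fn, attempts=3, base_delay=0.5, max_delay=8.0, sleep=time.sleep, rng=random.random):
    """Call fn(); on failure, wait base_delay * 2**n plus jitter, up to `attempts` tries."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            delay = min(max_delay, base_delay * (2 ** attempt))
            sleep(delay + rng() * delay)  # full jitter on top of the exponential step
```

In a distributed setting this would typically be paired with a rate limiter or circuit breaker so retries do not amplify load on a failing dependency.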
Must-Have Skills & Requirements
Experience
- 7+ years of hands-on Python backend development
- Bachelor’s degree in Computer Science, Engineering, or related field
- Experience building high-traffic, scalable systems
Core Technical Skills
Python
- Advanced knowledge of asynchronous programming, concurrency, and atomic operations
FastAPI
- Expert-level experience with dependency injection, middleware, and async code
Testing
- Strong experience with pytest and Pydantic schemas
Databases
- Hands-on experience with MongoDB and Redis
- Strong understanding of caching patterns, TTL, and performance optimization
Distributed Systems
- Experience with rate limiting, retry logic, failover mechanisms, high concurrency processing, and region routing
Microservices
- Experience building microservices and event-driven systems
- Exposure to EventHub, Blob Storage, and Databricks
Cloud
- Strong experience working in Azure environments
AI Integration
- Familiarity with OpenAI API, Gemini API, Claude API, and LiteLLM
Security
- Implementation experience with JWT authentication, API keys, encryption, and PII masking
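As a narrow illustration of the PII-masking skill above, here is a regex sketch that redacts obvious email addresses and bare 10-digit phone numbers before text reaches logs or an LLM. Real PII detection needs far broader coverage (names, addresses, IDs, international formats), so treat this strictly as a toy example.

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{10}\b")

def mask_pii(text):
    """Replace obvious emails and 10-digit phone numbers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)
```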
Soft Skills
- Strong problem-solving and debugging skills
- Excellent communication and collaboration
- Ability to manage multiple priorities
- Detail-oriented approach to code quality
- Experience mentoring junior developers
Good-to-Have Skills
Containerization
- Docker, Kubernetes (preferably within Azure)
DevOps
- CI/CD pipelines and automated deployment
Monitoring & Observability
- Experience with Grafana, distributed tracing, custom metrics
Industry Experience
- Experience in Insurance, Financial Services, or regulated industries
Advanced AI/ML
- Vector databases
- Similarity search optimization
- LangChain / LangSmith
Data Processing
- Real-time data processing and event streaming
Database Expertise
- PostgreSQL with vector extensions
- Advanced Redis clustering
Multi-Cloud
- Experience with AWS or GCP alongside Azure
Performance Optimization
- Advanced caching strategies
- Backend performance tuning
We are looking for a Technical Lead - GenAI with a strong foundation in Python, Data Analytics, Data Science or Data Engineering, system design, and practical experience in building and deploying Agentic Generative AI systems. The ideal candidate is passionate about solving complex problems using LLMs, understands the architecture of modern AI agent frameworks like LangChain/LangGraph, and can deliver scalable, cloud-native back-end services with a GenAI focus.
Key Responsibilities :
- Design and implement robust, scalable back-end systems for GenAI agent-based platforms.
- Work closely with AI researchers and front-end teams to integrate LLMs and agentic workflows into production services.
- Develop and maintain services using Python (FastAPI/Django/Flask), with best practices in modularity and performance.
- Leverage and extend frameworks like LangChain, LangGraph, and similar to orchestrate tool-augmented AI agents.
- Design and deploy systems in Azure Cloud, including usage of serverless functions, Kubernetes, and scalable data services.
- Build and maintain event-driven / streaming architectures using Kafka, Event Hubs, or other messaging frameworks.
- Implement inter-service communication using gRPC and REST.
- Contribute to architectural discussions, especially around distributed systems, data flow, and fault tolerance.
Required Skills & Qualifications :
- Strong hands-on back-end development experience in Python along with Data Analytics or Data Science.
- Strong track record on platforms like LeetCode or in real-world algorithmic/system problem-solving.
- Deep knowledge of at least one Python web framework (e.g., FastAPI, Flask, Django).
- Solid understanding of LangChain, LangGraph, or equivalent LLM agent orchestration tools.
- 2+ years of hands-on experience in Generative AI systems and LLM-based platforms.
- Proven experience with system architecture, distributed systems, and microservices.
- Strong familiarity with cloud infrastructure (any major provider) and deployment practices.
- Data engineering or analytics expertise preferred, e.g. Azure Data Factory, Snowflake, Databricks, ETL tools (Talend, Informatica), BI tools (Power BI, Tableau), data modelling, and data-warehouse development.
About FrontM
At FrontM, we are on a mission to transform the lives of frontline workforces, particularly in the maritime industry. We believe in creating a more connected, empowered, and engaged workforce by building cutting-edge solutions that merge the power of technology with human-centric needs. Our vision is to develop the world’s leading digital toolbox platform for maritime operations —a platform that brings everything for frontline workforces from digital wallets, recruitment, onboarding, healthcare, and learning to welfare and human capital management under one seamless umbrella.
Role Summary
As a JavaScript Developer at FrontM, you will be at the forefront of developing our pioneering digital toolbox platform and the low-code developer framework that powers it. You will have the opportunity to work with the latest JavaScript frameworks, integrating advanced technologies such as Large Language Models (LLMs), AI, and the latest GPT models. You’ll also be part of our exciting roadmap to evolve our low-code platform into a no-code solution, making app development accessible to everyone. Your contributions will be pivotal in the creation and enhancement of the Maritime App Store, where innovation meets practicality, offering solutions that make a tangible difference in the lives of seafarers and other frontline workers.
Key Responsibilities
Application Development (≈60%)
- Build micro-apps using the frontm.ai framework
- Implement intent-based architectures, context and state management
- Develop responsive UIs, forms, collections, filters, and workflows
- Integrate AWS services (Lambda, S3, DynamoDB, Bedrock)
- Build conversational AI features and real-time capabilities (messaging, video, notifications)
Framework Development (≈25%)
- Enhance and extend the frontm.ai core framework
- Build reusable components, patterns, and accelerators
- Improve performance for low-bandwidth environments
- Contribute to documentation, examples, and design reviews
- Support migration towards TypeScript and future Rust components
AI-Assisted Development (≈15%)
- Use Claude Code for efficient development
- Write and refine prompts for code generation
- Review, validate, and harden AI-generated code
- Implement LLM integrations via AWS Bedrock / OpenAI
- Build AI assistants using the skills layer
Required Technical Skills
JavaScript / TypeScript
- 5+ years professional JavaScript experience
- Strong TypeScript, async patterns, modular design
- Clean code practices and modern tooling
Architecture & Cloud
- Microservices and event-driven systems
- Serverless AWS (Lambda, API Gateway, DynamoDB, S3)
- REST APIs, WebSockets, CI/CD
- Infrastructure as Code experience preferred
AI & LLMs
- Hands-on use of Claude Code or similar tools
- Prompt engineering and hallucination mitigation
- Conversational AI and NLP experience
Data
- MongoDB / MongoDB Atlas
- Caching, indexing, and multi-tenant data patterns
Desired skills
- Experience with low-bandwidth or offline-first systems
- Understanding of secure, distributed deployments
- Exposure to healthcare, logistics, or maritime systems
Experience & Education
- 5+ years software development
- 2+ years AWS serverless
- 1+ year AI-assisted development
- Degree in Computer Science or equivalent experience
Personal Attributes
- Strong problem-solving and critical thinking
- Comfortable reviewing AI-generated code
- Clear communicator and reliable team contributor
- Self-driven, detail-oriented, and adaptable
Why join FrontM?
Above-Market Compensation: We believe in rewarding talent, offering a salary package that reflects your skills and potential.
Long-Term Career Growth: As FrontM expands, so will your opportunities. We are committed to helping our team members develop their careers, offering mentorship, learning opportunities, and the chance to take on more responsibility.
Cutting-Edge Technology: Work with the latest in JavaScript frameworks, AI, LLMs, and GPT models, contributing to a platform that’s at the forefront of technological innovation.
Make a Real Impact: This is your chance to work on something that matters—to build solutions that directly improve the quality of life for thousands of people worldwide.
Job Title: Python Backend / GenAI Engineer (4+ Years)
Job Summary
Looking for a Python Backend Engineer with experience in Generative AI, LangGraph workflows, data engineering, and AI evaluation using Arize AI.
Responsibilities
* Develop backend APIs using Python (FastAPI / Flask / Django)
* Build Generative AI and RAG-based applications
* Design LangGraph / agent workflows
* Create data engineering pipelines (ETL, data processing)
* Implement LLM monitoring and evaluation using Arize AI
* Integrate vector databases and AI services
* Maintain scalable and production-ready backend systems
Required Skills
* 4+ years of Python backend development
* Experience in Generative AI / LLM applications
* Knowledge of LangGraph / LangChain
* Experience in data engineering pipelines
* Familiarity with Arize AI or model evaluation tools
* Understanding of REST APIs, databases, Docker
Good to Have
* Cloud platforms (Azure / AWS)
* Vector databases (FAISS, Pinecone, Azure AI Search)
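To give a flavor of the LangGraph-style agent workflows mentioned above, here is a plain-Python sketch of the nodes-plus-conditional-edges idea: each node transforms a shared state dict, and each edge function inspects the state to choose the next node. This is not the LangGraph API; every name here is invented for illustration.

```python
def run_graph(nodes, edges, state, start, end="END", max_steps=10):
    """Walk a node graph: each node updates state; each edge picks the next node from state."""
    current = start
    for _ in range(max_steps):
        if current == end:
            return state
        state = nodes[current](state)          # node: state -> new state
        current = edges[current](state)        # conditional edge: state -> next node name
    raise RuntimeError("graph did not reach END within max_steps")

# Hypothetical two-node retrieve-then-answer workflow.
nodes = {
    "retrieve": lambda s: {**s, "docs": ["d1"]},
    "answer": lambda s: {**s, "answer": "done"},
}
edges = {
    "retrieve": lambda s: "answer",
    "answer": lambda s: "END",
}
```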
Company: Grey Chain AI
Location: Remote
Experience: 7+ Years
Employment Type: Full Time
About the Role
We are looking for a Senior Python AI Engineer who will lead the design, development, and delivery of production-grade GenAI and agentic AI solutions for global clients. This role requires a strong Python engineering background, experience working with foreign enterprise clients, and the ability to own delivery, guide teams, and build scalable AI systems.
You will work closely with product, engineering, and client stakeholders to deliver high-impact AI-driven platforms, intelligent agents, and LLM-powered systems.
Key Responsibilities
- Lead the design and development of Python-based AI systems, APIs, and microservices.
- Architect and build GenAI and agentic AI workflows using modern LLM frameworks.
- Own end-to-end delivery of AI projects for international clients, from requirement gathering to production deployment.
- Design and implement LLM pipelines, prompt workflows, and agent orchestration systems.
- Ensure reliability, scalability, and security of AI solutions in production.
- Mentor junior engineers and provide technical leadership to the team.
- Work closely with clients to understand business needs and translate them into robust AI solutions.
- Drive adoption of latest GenAI trends, tools, and best practices across projects.
Must-Have Technical Skills
- 7+ years of hands-on experience in Python development, building scalable backend systems.
- Strong experience with Python frameworks and libraries (FastAPI, Flask, Pydantic, SQLAlchemy, etc.).
- Solid experience working with LLMs and GenAI systems (OpenAI, Claude, Gemini, open-source models).
- Good experience with agentic AI frameworks such as LangChain, LangGraph, LlamaIndex, or similar.
- Experience designing multi-agent workflows, tool calling, and prompt pipelines.
- Strong understanding of REST APIs, microservices, and cloud-native architectures.
- Experience deploying AI solutions on AWS, Azure, or GCP.
- Knowledge of MLOps / LLMOps, model monitoring, evaluation, and logging.
- Proficiency with Git, CI/CD, and production deployment pipelines.
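The tool-calling skill above can be illustrated with a minimal dispatch registry: the model emits a JSON tool call, and the backend routes it to a registered function. `lookup_order` is a hypothetical tool, and the JSON shape loosely mirrors (but is not exactly) any specific provider's tool-call format.

```python
import json

TOOLS = {}

def tool(fn):
    """Register a plain function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def lookup_order(order_id: str) -> dict:
    # Hypothetical tool body; a real one would call a backend service.
    return {"order_id": order_id, "status": "shipped"}

def dispatch(tool_call_json):
    """Route a model-emitted tool call like '{"name": ..., "arguments": {...}}'."""
    call = json.loads(tool_call_json)
    fn = TOOLS.get(call["name"])
    if fn is None:
        # Guardrail: never execute names that were not explicitly registered.
        return {"error": f"unknown tool {call['name']}"}
    return fn(**call.get("arguments", {}))
```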
Leadership & Client-Facing Experience
- Proven experience leading engineering teams or acting as a technical lead.
- Strong experience working directly with foreign or enterprise clients.
- Ability to gather requirements, propose solutions, and own delivery outcomes.
- Comfortable presenting technical concepts to non-technical stakeholders.
What We Look For
- Excellent communication, comprehension, and presentation skills.
- High level of ownership, accountability, and reliability.
- Self-driven professional who can operate independently in a remote setup.
- Strong problem-solving mindset and attention to detail.
- Passion for GenAI, agentic systems, and emerging AI trends.
Why Grey Chain AI
Grey Chain AI is a Generative AI-as-a-Service and Digital Transformation company trusted by global brands such as UNICEF, BOSE, KFINTECH, WHO, and Fortune 500 companies. We build real-world, production-ready AI systems that drive business impact across industries like BFSI, Non-profits, Retail, and Consulting.
Here, you won’t just experiment with AI — you will build, deploy, and scale it for the real world.
About Asha Health
Asha Health helps medical practices launch their own AI clinics. We're backed by Y Combinator, General Catalyst, 186 Ventures, Reach Capital and many more. We recently raised an oversubscribed seed round from some of the best investors in Silicon Valley. Our team includes AI product leaders from companies like Google, physician executives from major health systems, and more.
About the Role
We're looking for a Software Engineer to join our engineering team in our Bangalore office. We're looking for someone who is an all-rounder, but has particularly exceptional backend engineering skills.
Our ideal candidate has built AI agents at the orchestration layer level and leveraged clever engineering techniques to improve latency & reliability for complex workflows.
We pay well above market for the country's best talent and provide a number of excellent perks.
Responsibilities
In this role, you will have the opportunity to build state-of-the-art AI agents, and learn what it takes to build an industry-leading multimodal, multi-agent suite.
You'll wear many hats. Your responsibilities will fall into 2 categories:
AI Engineering
- Develop AI agents with a high bar for reliability and performance.
- Build SOTA LLM-powered tools for providers, practices, and patients.
- Architect our data annotation, fine tuning, and RLHF workflows.
- Live on the bleeding edge, ensuring that as the industry evolves we ship the most cutting-edge agents every week.
Full-Stack Engineering (80% backend, 20% frontend)
- Lead the team in designing scalable architecture to support performant web applications.
- Develop features end-to-end for our web applications with industry-leading product and user experience (TypeScript, Node.js, Python, etc.).
Perks of Working at Asha Health
#1 Build cool stuff: work on the latest, cutting-edge tech (build frontier AI agents with technologies that evolve every 2 weeks).
#2 Surround yourself with top talent: our team includes senior AI product leaders from companies like Google, experienced physician executives, and top 1% engineering talent (the best of the best).
#3 Rocketship trajectory: we get more customer interest than we have time to onboard, it's a good problem to have :)
Company Overview
McKinley Rice is not just a company; it's a dynamic community, the next evolutionary step in professional development. Spiritually, we're a hub where individuals and companies converge to unleash their full potential. Organizationally, we are a conglomerate composed of various entities, each contributing to the larger narrative of global excellence.
Redrob by McKinley Rice: Redefining Prospecting in the Modern Sales Era
Backed by a $40 million Series A funding from leading Korean & US VCs, Redrob is building the next frontier in global outbound sales. We’re not just another database—we’re a platform designed to eliminate the chaos of traditional prospecting. In a world where sales leaders chase meetings and deals through outdated CRMs, fragmented tools, and costly lead-gen platforms, Redrob provides a unified solution that brings everything under one roof.
Inspired by the breakthroughs of Salesforce, LinkedIn, and HubSpot, we’re creating a future where anyone, not just enterprise giants, can access real-time, high-quality data on 700 M+ decision-makers, all in just a few clicks.
At Redrob, we believe the way businesses find and engage prospects is broken. Sales teams deserve better than recycled data, clunky workflows, and opaque credit-based systems. That’s why we’ve built a seamless engine for:
- Precision prospecting
- Intent-based targeting
- Data enrichment from 16+ premium sources
- AI-driven workflows to book more meetings, faster
We’re not just streamlining outbound—we’re making it smarter, scalable, and accessible. Whether you’re an ambitious startup or a scaled SaaS company, Redrob is your growth copilot for unlocking warm conversations with the right people, globally.
EXPERIENCE
Duties you'll be entrusted with:
- Develop and deploy scalable APIs and applications using the Node.js or Nest.js framework
- Write efficient, reusable, testable, and scalable code
- Understand, analyze, and implement business needs and feature-modification requests, converting them into software components
- Integrate user-oriented elements into different applications and data storage solutions
- Develop backend components, server-side logic, statistical learning models, and highly responsive web applications to enhance performance and responsiveness
- Design and implement high-availability, low-latency applications with data protection and security features
- Tune performance, automate applications, and enhance the functionality of existing software systems
- Keep abreast of the latest technologies and trends
Expectations from you:
Basic Requirements
- Minimum qualification: Bachelor’s degree or more in Computer Science, Software Engineering, Artificial Intelligence, or a related field.
- Experience with Cloud platforms (AWS, Azure, GCP).
- Strong understanding of monitoring, logging, and observability practices.
- Experience with event-driven architectures (e.g., Kafka, RabbitMQ).
- Expertise in designing, implementing, and optimizing Elasticsearch.
- Work with modern tools including Jira, Slack, GitHub, Google Docs, etc.
- Experience in Integrating Generative AI APIs.
- Working experience in high user concurrency.
- Experience with databases scaled to millions of records (indexing, retrieval, etc.)
Technical Skills
- Demonstrable experience in web application development with expertise in Node.js or Nest.js.
- Knowledge of database technologies and agile development methodologies.
- Experience working with databases, such as MySQL or MongoDB.
- Familiarity with web development frameworks, such as Express.js.
- Understanding of microservices architecture and DevOps principles.
- Well-versed with AWS and serverless architecture.
Soft Skills
- A quick and critical thinker, able to generate many ideas on a topic and bring fresh, innovative thinking to the table.
- Potential to apply innovative and exciting ideas, concepts, and technologies.
- Stay up-to-date with the latest design trends, animation techniques, and software advancements.
- Multi-tasking and time-management skills, with the ability to prioritize tasks.
THRIVE
Some of the extensive benefits of being part of our team:
- We offer skill enhancement and educational reimbursement opportunities to help you further develop your expertise.
- The Member Reward Program provides an opportunity for you to earn up to INR 85,000 as an annual Performance Bonus.
- The McKinley Cares Program has a wide range of benefits:
- The wellness program covers mental-wellness and fitness sessions and offers health insurance.
- In-house benefits have a referral bonus window and sponsored social functions.
- An expanded leave basket, including paid maternity, paternity, and rejuvenation leaves, in addition to the regular 20 leaves per annum.
- Our Family Support benefits not only include maternity and paternity leaves but also extend to provide childcare benefits.
- In addition to the retention bonus, our McKinley Retention Benefits program also includes a Leave Travel Allowance program.
- We also offer an exclusive McKinley Loan Program designed to assist our employees during challenging times and alleviate financial burdens.
