
AI Engineer (1–8 years)
Location: Bellandur, Outer Ring Road, Bengaluru (Work from Office only)
Organization Size: 20 members across functions, and growing
Reports to: Head of Engineering
GetSetYo is a Bangalore-based early-stage travel tech startup, built by internet industry veterans (ex-MakeMyTrip, Flipkart, Ola, PhonePe, Zynga, Magicpin, etc.) and alumni of premier academic institutes (IIT Delhi, IIT BHU, ISB, DCE, NIT Surathkal, etc.), and funded by multiple unicorn founders (of companies such as MakeMyTrip, Zomato, Groww, Udaan, MamaEarth, etc.). We're growing fast; look us up here: https://www.getsetyo.com/about
We’re building something exciting in the travel tech space and looking for a senior AI Engineer to join our core engineering team in Bangalore.
Who You Are:
- 1–8 years of total software engineering experience, including at least 1 year building and shipping AI/ML or LLM-powered products in production
- Engineering degree from a top-ranked college
- Strong engineering foundation in Python or Java, with the ability to build reliable backend services, APIs, evaluation pipelines, and developer tooling around AI systems
- Hands-on experience with LLM application patterns such as RAG, tool/function calling, structured output generation, vector search, reranking, and agentic workflows
- Familiarity with agent frameworks and orchestration patterns, including multi-step workflows, planner/executor patterns, tool routing, and guardrails
- Working knowledge of MCP (Model Context Protocol) or similar patterns for connecting models to internal tools, data sources, and external systems
- Strong understanding of context engineering, prompt design, and how to manage instructions, conversation state, tools, memory, and retrieved context for consistent model behavior
- Experience with evaluation and observability for AI systems: offline evals, online metrics, regression testing, trace inspection, cost/latency monitoring, and failure analysis
- Comfortable working in a fast-paced startup where you can own problem statements end to end — from prototype to production rollout
- Must have experience using AI-native developer tools such as Claude Code / coding agents / AI-assisted workflows to accelerate delivery
What You’ll Do
- Build and own production-grade AI features across the stack, from experimentation and prototyping to backend integration, deployment, monitoring, and iterative improvement
- Design and implement agentic workflows for real user problems — combining LLM reasoning, retrieval, tool use, business rules, and backend APIs into reliable multi-step systems
- Build and optimize RAG and search systems: document ingestion, chunking strategies, embedding pipelines, vector indexes, hybrid retrieval, reranking, and citation/grounding flows
- Integrate models with internal and external systems through tool calling, APIs, and where relevant MCP-compatible interfaces, so models can safely access the right context and take useful actions
- Drive context engineering for AI products: decide what memory, instructions, retrieved context, tool outputs, and interaction history should be passed to the model at each step for maximum quality and efficiency
- Build evaluation systems for prompts, agents, and retrieval quality — including benchmark datasets, golden test cases, automated regression checks, and human-in-the-loop review workflows
- Establish observability and debugging for AI pipelines: traces, tool execution logs, latency/cost tracking, hallucination analysis, and failure-mode investigation
- Help define engineering standards for AI systems across security, guardrails, versioning, rollback, experimentation, and cost-performance tradeoffs
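For candidates new to the pattern, the RAG flow described above (ingestion, chunking, embedding, retrieval, grounded prompting) can be sketched roughly as follows. This is an illustrative toy, not GetSetYo's actual stack: the bag-of-words "embedding" stands in for a real embedding model, and the function names are assumptions.

```python
# Toy RAG skeleton: chunk -> embed -> retrieve -> build grounded prompt.
# A production system would use a real embedding model and a vector index.
import math
from collections import Counter

def chunk(text: str, size: int = 40) -> list[str]:
    """Split text into fixed-size word windows."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Stand-in embedding: a bag-of-words term-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble a grounded prompt so the model answers only from context."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"
```

In a real pipeline each step above (chunking strategy, hybrid retrieval, reranking, citation flow) becomes a tunable, evaluated component rather than a toy function.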
What We Offer:
- AI Impact from Day 1: Lead the development of our core ML capabilities
- Fast Iteration: Weekly releases and direct user feedback
- Collaborative Culture: Flat structure and transparent communication
- Vibrant Office: In-person energy in Bangalore HQ
- Perks: Employee travel discounts and exclusive deals.

About GetSetYo Technology Labs Private Limited
We are a funded Travel Tech Startup based out of Bengaluru, with a highly experienced founding team hailing from Institutions such as IIT Delhi, ISB, MakeMyTrip, Flipkart, Ola, PhonePe.
GetSetYo is building a new distribution model for travel. We combine AI-powered technology, travel expertise, and creator-led distribution to help travellers discover and book trips through trusted influencers and communities.
Our platform enables travel creators (YouTubers, Instagrammers, and travel experts) to monetize their audience by generating travel leads and transactions. Travellers either book directly online on the GetSetYo platform or take assistance from our expert travel sales team to plan and book customised trips.
About our Investors
We are funded by respected funds and entrepreneurs including co-founders of Zomato, MakeMyTrip, Groww, MamaEarth, Udaan, MoneyView, Niyo, to name a few.
About our Leadership Team
Abhishek was Vice President at MakeMyTrip. He has rich and diverse experience across other consumer sectors as well, having served as Senior Director at Flipkart and Ola. He studied Computer Science at IIT Delhi.
Sahil brings deep engineering experience, most recently as Senior Director of Engineering at Magicpin, and previously with companies such as Policybazaar. He received his education from Delhi College of Engineering.
About Techjays
At Techjays, we build production-grade AI platforms for global clients. We operate at the intersection of backend engineering, distributed systems, and applied AI — delivering secure, scalable, and enterprise-ready intelligent systems. Our team has built and scaled products at Google, Akamai, NetApp, ADP, Cognizant, and Capgemini.
About the Role
This is not a feature-delivery role. We are looking for an AI Lead who can architect, own, and scale intelligent backend systems end-to-end. You will drive both technical direction and execution — working across LLM integrations, RAG pipelines, agentic AI workflows, and cloud-native backend systems for global clients.
What You'll Do
- Architect and scale backend systems powering AI-driven applications
- Design and implement RAG pipelines, AI agents, and LLM integrations
- Own systems end-to-end — from architecture to deployment and scaling
- Integrate and optimize LLMs (Claude, GPT, Gemini) for real-world production use cases
- Build high-performance distributed systems with observability and cost efficiency
- Lead backend and AI initiatives with strong technical ownership
- Mentor engineers and raise the technical bar across teams
- Collaborate with product and AI teams to deliver AI-native solutions
What We're Looking For
- 6–10 years of strong backend engineering experience
- Hands-on expertise in Python (FastAPI / Django / Flask)
- Deep understanding of Generative AI and LLM-based systems
- Strong experience with RAG pipelines and Vector Databases (Pinecone, FAISS, ChromaDB, Weaviate)
- Solid knowledge of Agentic AI — building autonomous agents and multi-agent workflows
- Proficiency in AWS or GCP in production environments
- Experience with distributed systems, microservices, and system design
- Strong grasp of Data Structures, Algorithms, and Design Patterns
- Familiarity with WebSockets, Git, Linux/Unix, and CI/CD
Nice to Have
- Experience with Anthropic Claude API and Claude Code
- Familiarity with real-time data systems or streaming (Kafka, etc.)
- MLOps and AI system lifecycle experience
- Optimizing AI systems for latency, cost, and scalability
Who You Are
- You think in systems, not just features
- You take full ownership of what you build
- You are comfortable navigating fast-moving, ambiguous environments
- You stay updated with the latest in Generative AI and backend technologies
- Strong communicator who can collaborate across teams and global clients
What We Offer
- Competitive compensation (Best in Industry)
- Work on production-grade AI systems used by global clients
- Exposure to cutting-edge AI tools and frameworks
- A culture that values clarity, integrity, and continuous growth
7+ years of experience in Python development
Good experience in microservices and API development
Must have exposure to large-scale data
Good to have: GenAI experience
Code versioning and collaboration using Git
Knowledge of libraries for extracting data from websites (web scraping)
Knowledge of SQL and NoSQL databases
Familiarity with RESTful APIs
Familiarity with Cloud (Azure /AWS) technologies
About Wissen Technology:
• The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
• Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world-class products.
• Our workforce consists of 550+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.
• Wissen Technology has grown its revenues by 400% in five years without any external funding or investments.
• Globally present, with offices in the US, India, UK, Australia, Mexico, and Canada.
• We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
• Wissen Technology has been certified as a Great Place to Work®.
• Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
• Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy, including the likes of Morgan Stanley, MSCI, State Street, Flipkart, Swiggy, Trafigura, and GE, to name a few.
Website: www.wissen.com
About FrontM
At FrontM, we are on a mission to transform the lives of frontline workforces, particularly in the maritime industry. We believe in creating a more connected, empowered, and engaged workforce by building cutting-edge solutions that merge the power of technology with human-centric needs. Our vision is to develop the world’s leading digital toolbox platform for maritime operations: a platform that brings everything for frontline workforces, from digital wallets, recruitment, onboarding, healthcare, and learning to welfare and human capital management, under one seamless umbrella.
Role Summary
As a JavaScript Developer at FrontM, you will be at the forefront of developing our pioneering digital toolbox platform and the low-code developer framework that powers it. You will have the opportunity to work with the latest JavaScript frameworks, integrating advanced technologies such as Large Language Models (LLMs), AI, and the latest GPT models. You’ll also be part of our exciting roadmap to evolve our low-code platform into a no-code solution, making app development accessible to everyone. Your contributions will be pivotal in the creation and enhancement of the Maritime App Store, where innovation meets practicality, offering solutions that make a tangible difference in the lives of seafarers and other frontline workers.
Key Responsibilities
Application Development (≈60%)
- Build micro-apps using the frontm.ai framework
- Implement intent-based architectures, context and state management
- Develop responsive UIs, forms, collections, filters, and workflows
- Integrate AWS services (Lambda, S3, DynamoDB, Bedrock)
- Build conversational AI features and real-time capabilities (messaging, video, notifications)
Framework Development (≈25%)
- Enhance and extend the frontm.ai core framework
- Build reusable components, patterns, and accelerators
- Improve performance for low-bandwidth environments
- Contribute to documentation, examples, and design reviews
- Support migration towards TypeScript and future Rust components
AI-Assisted Development (≈15%)
- Use Claude Code for efficient development
- Write and refine prompts for code generation
- Review, validate, and harden AI-generated code
- Implement LLM integrations via AWS Bedrock / OpenAI
- Build AI assistants using the skills layer
Required Technical Skills
JavaScript / TypeScript
- 5+ years professional JavaScript experience
- Strong TypeScript, async patterns, modular design
- Clean code practices and modern tooling
Architecture & Cloud
- Microservices and event-driven systems
- Serverless AWS (Lambda, API Gateway, DynamoDB, S3)
- REST APIs, WebSockets, CI/CD
- Infrastructure as Code experience preferred
AI & LLMs
- Hands-on use of Claude Code or similar tools
- Prompt engineering and hallucination mitigation
- Conversational AI and NLP experience
Data
- MongoDB / MongoDB Atlas
- Caching, indexing, and multi-tenant data patterns
Desired skills
- Experience with low-bandwidth or offline-first systems
- Understanding of secure, distributed deployments
- Exposure to healthcare, logistics, or maritime systems
Experience & Education
- 5+ years software development
- 2+ years AWS serverless
- 1+ year AI-assisted development
- Degree in Computer Science or equivalent experience
Personal Attributes
- Strong problem-solving and critical thinking
- Comfortable reviewing AI-generated code
- Clear communicator and reliable team contributor
- Self-driven, detail-oriented, and adaptable
Why join FrontM?
Above-Market Compensation: We believe in rewarding talent, offering a salary package that reflects your skills and potential.
Long-Term Career Growth: As FrontM expands, so will your opportunities. We are committed to helping our team members develop their careers, offering mentorship, learning opportunities, and the chance to take on more responsibility.
Cutting-Edge Technology: Work with the latest in JavaScript frameworks, AI, LLMs, and GPT models, contributing to a platform that’s at the forefront of technological innovation.
Make a Real Impact: This is your chance to work on something that matters—to build solutions that directly improve the quality of life for thousands of people worldwide.
Job Overview
Architect and build scalable, high-performance backend systems while working on mission-critical platforms that process real-time market data and portfolio analytics. The role also involves leveraging Generative AI capabilities to enhance data intelligence, automation, and user-facing features, while ensuring regulatory compliance and secure financial transactions.
Key Responsibilities
- Design, develop, and maintain scalable backend services and APIs using NodeJS and Python
- Build event-driven architectures using RabbitMQ and Kafka for real-time data processing
- Develop and manage data pipelines integrating PostgreSQL and BigQuery for analytics and warehousing
- Integrate and deploy Generative AI models (LLMs, embeddings, AI APIs) into backend systems for automation, insights, and intelligent workflows
- Design AI-powered features such as recommendation systems, document processing, or conversational interfaces
- Ensure system reliability, security, and low-latency performance for mission-critical systems
- Lead technical design discussions, conduct code reviews, and mentor junior engineers
- Optimize database queries, implement caching strategies, and improve overall system performance
- Collaborate with cross-functional teams to deliver end-to-end product features
- Implement monitoring, logging, and observability solutions
Required Skills and Qualifications
- 2+ years of professional backend development experience
- Strong expertise in NodeJS and Python for production-grade applications
- Proven experience building RESTful APIs and microservices architectures
- Experience working with Generative AI frameworks/APIs (OpenAI, LangChain, vector databases, prompt engineering)
- Understanding of integrating LLMs into production systems (RAG, embeddings, fine-tuning basics)
- Strong proficiency in PostgreSQL, including query optimization and schema design
- Hands-on experience with RabbitMQ and Kafka
- Experience with BigQuery or similar data warehousing solutions
- Solid understanding of distributed systems, scalability patterns, and high-traffic applications
- Strong knowledge of authentication, authorization, and security best practices
- Experience with Git, CI/CD pipelines, and modern development workflows
- Excellent problem-solving and debugging skills
- Exposure to fintech or financial services, cloud platforms (GCP/AWS/Azure), Docker/Kubernetes, caching tools (Redis/Memcached), and regulatory requirements (KYC, compliance, data privacy) is a plus
Apply directly at: https://wohlig.keka.com/careers/jobdetails/136351
Key Responsibilities
Platform Build & Architecture
• Refactor an existing Python-based prototype into a modular, production-grade platform
• Define clear service boundaries (API layer, orchestration, agent runtime, data access)
• Build reusable components that allow extension without exposing core engine logic
Agent Framework & Orchestration
• Design and implement frameworks for AI agents and reporting workflows
• Build orchestration for multi-step execution (deterministic + AI-driven)
• Ensure outputs are traceable, auditable, and suitable for financial reporting
Developer Enablement
• Enable internal/client developers to:
o Build and deploy reporting agents
o Reuse approved components
o Access platform capabilities via APIs (without direct code access)
• Implement access controls and abstraction layers
Full Stack Development
• Lead development across:
o Backend: Python, FastAPI
o Frontend: Next.js
o Real-time: WebSockets
• Build simple internal interfaces for:
o Job execution
o Monitoring
o Output review
Code Governance & DevOps
• Own development workflows in Azure DevOps: branching strategy, PRs, code reviews, merges
• Set up CI/CD pipelines, environments (dev/test/prod), and release processes
• Ensure code quality, testing, and maintainability standards
Team Leadership (Near-term)
• Act as the technical anchor
• Mentor and guide future hires as the team scales
• Establish best practices across code, documentation, and delivery
Required Skills
• Strong experience in Python backend development (FastAPI or similar)
• Experience with React / Next.js
• Familiarity with WebSockets or real-time systems
• Experience building APIs and scalable backend systems
• Hands-on experience with Azure DevOps (repos, pipelines, PR workflows)
• Understanding of modular architecture, access control, and system design
• Ability to operate in an early-stage, fast-evolving environment
Software Engineer
Onsite - HSR Bangalore
6 Days work from Office (Flexible working hours)
Product is a PowerPoint AI assistant used by consulting companies and Fortune 500 teams. A typical professional spends 1 to 3 hours creating one slide. With Product, they create a v1 of their entire deck in 10 minutes, and make changes like “turn this table into a chart” in seconds, directly within PowerPoint.
In the next 2 years, our goal is to forever change the way business presentations are made.
Who are we?
- small, strong team of 5
- founders are CS graduates from IIT Kharagpur with a specialisation in AI
- work 6 days a week from our office in HSR Layout in Bangalore
- funded by Y Combinator and other amazing investors
- used by consulting companies and Fortune 500 teams
Your responsibilities (in order)
- Design, implement, test, and deploy full features
- Design and implement a robust infrastructure to enable rapid development and automated testing
- Look at usage data to iterate on features
What we’re looking for
- Undergraduate or master's degree in Computer Science, or equivalent
- 2+ years of backend or DevOps software engineering experience
- Experience with TypeScript (JavaScript) or Python
You’ll be a good fit if
- You want to work on a product that can change the way a very large number of people work
- The chaos of high growth and things breaking is exciting to you
- You are a workaholic, looking to upskill faster than most people think is possible. This role is not a good fit for you if you’re looking to prioritise work-life balance.
- You prefer working in-person with other smart people who are excited and passionate about what they’re building
- You love solving very hard problems at a rapid pace. We discuss timelines in days or weeks, so you’ll constantly be expected to ship really high-quality work.
Perks
- Comprehensive health insurance for you and dependents
- Workstation enhancements
- Subscriptions to AI tools such as Cursor, ChatGPT, etc.
(If there's anything else we can do to make your work more enjoyable, just ask)
If you are interested in proceeding, we would be happy to move your profile to the next stage of the evaluation process.
Kindly share the following details to help us take this forward:
- Current CTC (Fixed + Variable):
- Expected CTC:
- Notice Period (If currently serving, please mention your Last Working Day)
- Details of any active offers in hand (if applicable)
- Expected/Available Date of Joining (if applicable)
- Attach Updated CV:
- Attach Github Link / Leet code link or other:
- Current Location:
- Preferred Location:
- Reason for Job Change:
- Reason for Relocation (if applicable):
- Are you comfortable with 6 days WFO (flexible working hours)? (Yes / No):
We are looking for a Technical Lead - GenAI with a strong foundation in Python, Data Analytics, Data Science or Data Engineering, system design, and practical experience in building and deploying Agentic Generative AI systems. The ideal candidate is passionate about solving complex problems using LLMs, understands the architecture of modern AI agent frameworks like LangChain/LangGraph, and can deliver scalable, cloud-native back-end services with a GenAI focus.
Key Responsibilities:
- Design and implement robust, scalable back-end systems for GenAI agent-based platforms.
- Work closely with AI researchers and front-end teams to integrate LLMs and agentic workflows into production services.
- Develop and maintain services using Python (FastAPI/Django/Flask), with best practices in modularity and performance.
- Leverage and extend frameworks like LangChain, LangGraph, and similar to orchestrate tool-augmented AI agents.
- Design and deploy systems in Azure Cloud, including usage of serverless functions, Kubernetes, and scalable data services.
- Build and maintain event-driven / streaming architectures using Kafka, Event Hubs, or other messaging frameworks.
- Implement inter-service communication using gRPC and REST.
- Contribute to architectural discussions, especially around distributed systems, data flow, and fault tolerance.
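The "tool-augmented AI agent" orchestration mentioned above follows a planner/executor loop: a planning step chooses a tool, an executor runs it, and the observation is fed back as context. A minimal sketch, with the caveat that the keyword-based `plan` function is a stand-in for a real LLM planner and the tool names are invented for illustration (a real build would use LangChain/LangGraph):

```python
# Toy planner/executor loop for tool routing in an agentic workflow.
# `plan` stands in for an LLM planning step; TOOLS stands in for real APIs.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "weather": lambda city: f"Sunny in {city}",
    "flights": lambda route: f"3 flights found for {route}",
}

def plan(query: str) -> str:
    # Stand-in for LLM-based tool selection: pick a tool by keyword.
    return "weather" if "weather" in query.lower() else "flights"

def run_agent(query: str, arg: str) -> str:
    tool = plan(query)                # planner step
    observation = TOOLS[tool](arg)    # executor step
    return f"[{tool}] {observation}"  # tagged output keeps the trace auditable
```

Frameworks like LangGraph generalize this loop into a state graph with retries, guardrails, and multi-step fan-out, but the planner/executor shape stays the same.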
Required Skills & Qualifications:
- Strong hands-on back-end development experience in Python along with Data Analytics or Data Science.
- Strong track record on platforms like LeetCode or in real-world algorithmic/system problem-solving.
- Deep knowledge of at least one Python web framework (e.g., FastAPI, Flask, Django).
- Solid understanding of LangChain, LangGraph, or equivalent LLM agent orchestration tools.
- 2+ years of hands-on experience in Generative AI systems and LLM-based platforms.
- Proven experience with system architecture, distributed systems, and microservices.
- Strong familiarity with cloud infrastructure (any major provider) and deployment practices.
- Data engineering or analytics expertise preferred, e.g. Azure Data Factory, Snowflake, Databricks, ETL tools (Talend, Informatica), BI tools (Power BI, Tableau), data modelling, and data warehouse development.
Company Overview
McKinley Rice is not just a company; it's a dynamic community, the next evolutionary step in professional development. Spiritually, we're a hub where individuals and companies converge to unleash their full potential. Organizationally, we are a conglomerate composed of various entities, each contributing to the larger narrative of global excellence.
Redrob by McKinley Rice: Redefining Prospecting in the Modern Sales Era
Backed by a $40 million Series A funding from leading Korean & US VCs, Redrob is building the next frontier in global outbound sales. We’re not just another database—we’re a platform designed to eliminate the chaos of traditional prospecting. In a world where sales leaders chase meetings and deals through outdated CRMs, fragmented tools, and costly lead-gen platforms, Redrob provides a unified solution that brings everything under one roof.
Inspired by the breakthroughs of Salesforce, LinkedIn, and HubSpot, we’re creating a future where anyone, not just enterprise giants, can access real-time, high-quality data on 700M+ decision-makers, all in just a few clicks.
At Redrob, we believe the way businesses find and engage prospects is broken. Sales teams deserve better than recycled data, clunky workflows, and opaque credit-based systems. That’s why we’ve built a seamless engine for:
- Precision prospecting
- Intent-based targeting
- Data enrichment from 16+ premium sources
- AI-driven workflows to book more meetings, faster
We’re not just streamlining outbound—we’re making it smarter, scalable, and accessible. Whether you’re an ambitious startup or a scaled SaaS company, Redrob is your growth copilot for unlocking warm conversations with the right people, globally.
EXPERIENCE
Duties you'll be entrusted with:
- Develop and execute scalable APIs and applications using the Node.js or Nest.js framework
- Writing efficient, reusable, testable, and scalable code.
- Understanding, analyzing, and implementing business needs and feature-modification requests, and converting them into software components
- Integrating user-oriented elements into different applications and data storage solutions
- Developing backend components (server-side logic, statistical learning models, highly responsive web applications) to enhance performance and responsiveness
- Designing and implementing high-availability, low-latency applications with data protection and security features
- Performance tuning and automation of applications and enhancing the functionalities of current software systems.
- Keeping abreast with the latest technology and trends.
Expectations from you:
Basic Requirements
- Minimum qualification: Bachelor’s degree or more in Computer Science, Software Engineering, Artificial Intelligence, or a related field.
- Experience with Cloud platforms (AWS, Azure, GCP).
- Strong understanding of monitoring, logging, and observability practices.
- Experience with event-driven architectures (e.g., Kafka, RabbitMQ).
- Expertise in designing, implementing, and optimizing Elasticsearch.
- Work with modern tools including Jira, Slack, GitHub, Google Docs, etc.
- Experience in Integrating Generative AI APIs.
- Working experience with high user concurrency.
- Experience with databases scaled to handle millions of records: indexing, retrieval, etc.
Technical Skills
- Demonstrable experience in web application development with expertise in Node.js or Nest.js.
- Knowledge of database technologies and agile development methodologies.
- Experience working with databases, such as MySQL or MongoDB.
- Familiarity with web development frameworks, such as Express.js.
- Understanding of microservices architecture and DevOps principles.
- Well-versed with AWS and serverless architecture.
Soft Skills
- A quick and critical thinker who can generate a range of ideas on a topic and bring fresh, innovative ideas to the table to enhance the visual impact of our content.
- Potential to apply innovative and exciting ideas, concepts, and technologies.
- Stay up-to-date with the latest design trends, animation techniques, and software advancements.
- Multi-tasking and time-management skills, with the ability to prioritize tasks.
THRIVE
Some of the extensive benefits of being part of our team:
- We offer skill enhancement and educational reimbursement opportunities to help you further develop your expertise.
- The Member Reward Program provides an opportunity for you to earn up to INR 85,000 as an annual Performance Bonus.
- The McKinley Cares Program has a wide range of benefits:
- The wellness program covers sessions for mental wellness and fitness, and offers health insurance.
- In-house benefits have a referral bonus window and sponsored social functions.
- An expanded leave basket, including paid maternity, paternity, and rejuvenation leaves, apart from the regular 20 leaves per annum.
- Our Family Support benefits not only include maternity and paternity leaves but also extend to provide childcare benefits.
- In addition to the retention bonus, our McKinley Retention Benefits program also includes a Leave Travel Allowance program.
- We also offer an exclusive McKinley Loan Program designed to assist our employees during challenging times and alleviate financial burdens.
Must have:
- 8+ years of experience with a significant focus on developing, deploying & supporting AI solutions in production environments.
- Proven experience in building enterprise software products for B2B businesses, particularly in the supply chain domain.
- Good understanding of Generics, OOPs concepts & Design Patterns
- Solid engineering and coding skills, with the ability to write high-performance, production-quality code in Python
- Proficiency with ML libraries and frameworks (e.g., Pandas, TensorFlow, PyTorch, scikit-learn).
- Strong expertise in time series forecasting using statistical, ML, DL, and foundation models
- Experience processing time series data using techniques such as decomposition, clustering, and outlier detection & treatment
- Exposure to generative AI models and agent architectures on platforms such as AWS Bedrock, Crew AI, Mosaic/Databricks, Azure
- Experience working with modern data architectures, including data lakes and data warehouses, having leveraged one or more frameworks such as Airbyte, Airflow, Dagster, AWS Glue, Snowflake, and dbt
- Hands-on experience with cloud platforms (e.g., AWS, Azure, GCP) and deploying ML models in cloud environments.
- Excellent problem-solving skills and the ability to work independently as well as in a collaborative team environment.
- Effective communication skills, with the ability to convey complex technical concepts to non-technical stakeholders
Good To Have:
- Experience with MLOps tools and practices for continuous integration and deployment of ML models.
- Familiarity with deploying applications on Kubernetes
- Knowledge of supply chain management principles and challenges.
- A Master's or Ph.D. in Computer Science, Machine Learning, Data Science, or a related field is preferred
What you’ll be doing
We are much more than our job descriptions, but here is where you will begin.
As a Senior Software Engineer (Data & ML) you’ll:
● Architect, design, test, implement, deploy, monitor, and maintain end-to-end backend services. You build it, you own it.
● Work with people from other teams and departments on a day-to-day basis to ensure efficient project execution, with a focus on delivering value to our members.
● Regularly align your team’s vision and roadmap with the target architecture within your domain, and ensure the success of complex multi-domain initiatives.
● Integrate already-trained ML and GenAI models (preferably on GCP) into services.
What you’ll need:
Like us, you’ll be deeply committed to delivering impactful outcomes for customers.
What Makes You a Great Fit
● 5 years of proven work experience as a Backend Python Engineer
● Understanding of software engineering fundamentals (OOPS, SOLID, etc.)
● Hands-on experience with Python libraries like Pandas, NumPy, scikit-learn, LangChain/LlamaIndex, etc.
● Experience with machine learning frameworks such as PyTorch or TensorFlow/Keras, and proficiency in Python
● Hands-on experience with frameworks such as Django, FastAPI, or Flask
● Hands-on experience with MySQL, MongoDB, Redis, and BigQuery (or equivalents)
● Extensive experience integrating with or creating REST APIs
● Experience creating and maintaining CI/CD pipelines; we use GitHub Actions
● Experience with event-driven architectures like Kafka, RabbitMQ, or equivalents
● Knowledge of:
o LLMs
o Vector stores/databases
o Prompt engineering
o Embeddings and their implementations
● Some hands-on experience implementing the above ML/AI topics is preferred
● Experience with GCP/AWS services
● You are curious about and motivated by future trends in data, AI/ML, and analytics












