50+ Python Jobs in India
Apply to 50+ Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!
We are seeking a skilled Data Engineer to join the AI Platform Capabilities team supporting the UDP Uplift program.
In this role, you will design, build, and test standardized data and AI platform capabilities across a multi-cloud environment (Azure & GCP).
You will collaborate closely with AI use case teams to develop:
- Scalable data pipelines
- Reusable data products
- Foundational data infrastructure
Your work will support advanced AI solutions such as:
- GenAI
- RAG (Retrieval-Augmented Generation)
- Document Intelligence
Key Responsibilities
- Design and develop scalable ETL/ELT pipelines for AI workloads
- Build and optimize data pipelines for structured & unstructured data
- Enable context processing & vector store integrations
- Support streaming data workflows and batch processing
- Ensure adherence to enterprise data models, governance, and security standards
- Collaborate with DataOps, MLOps, Security, and business teams (LBUs)
- Contribute to data lifecycle management for AI platforms
Required Skills
- 5–7 years of hands-on experience in Data Engineering
- Strong expertise in Python and advanced SQL
- Experience with GCP and/or Azure cloud-native data services
- Hands-on experience with PySpark / Spark SQL
- Experience building data pipelines for ML/AI workloads
- Understanding of CI/CD, Git, and Agile methodologies
- Knowledge of data quality, governance, and security practices
- Strong collaboration and stakeholder management skills
Nice-to-Have Skills
- Experience with Vector Databases / Vector Stores (for RAG pipelines)
- Familiarity with MLOps / GenAIOps concepts (feature stores, model registries, prompt management)
- Exposure to Knowledge Graphs / Context Stores / Document Intelligence workflows
- Experience with DBT (Data Build Tool)
- Knowledge of Infrastructure-as-Code (Terraform)
- Experience in multi-cloud deployments (Azure + GCP)
- Familiarity with event-driven systems (Kafka, Pub/Sub) & API integrations
Ideal Candidate Profile
- Strong data engineering foundation with AI/ML exposure
- Experience working in multi-cloud environments
- Ability to build production-grade, scalable data systems
- Comfortable working in cross-functional, fast-paced environments
Looking for a Senior Full-Stack Tech Lead to build web platform a lean MVP of an AI content generation SaaS platform with Templates, AI API integrations, user dashboard, admin panel, and an early node-based Workflow prototype.
The product will be similar in overall logic to modern AI content tools, with the following core areas:
- AI image / video generation via external APIs
- User dashboard
- Templates section with ready-made visual examples and prompts
- User photo upload and generation flow
- Basic generation history
- Basic admin panel
- Credits / usage limits
- Early Workflow / Space prototype for node-based generation flows
Important:
This is not an enterprise build. We are looking for a compact, practical MVP using existing AI APIs and modern AI coding tools.
We are looking for a Senior Full-Stack Developer / Tech Lead who can both write code and help structure the project technically.
Responsibilities:
- Review the project brief and help define the MVP scope
- Build the main web application
- Implement frontend and backend logic
- Integrate external AI generation APIs
- Build user dashboard, generation history, and basic admin panel
- Set up database, storage, and basic deployment
- Help with architecture decisions
- Work with a UI/UX designer and possibly a separate workflow/frontend developer
Required skills:
- React.js / Next.js
- TypeScript
- Node.js
- PostgreSQL
- API integrations
- Experience building SaaS or web platforms
- Good understanding of backend architecture
- Ability to work independently and communicate clearly
Nice to have:
- Python
- Redis / queues
- AI API integrations
- Image or video generation experience
- React Flow or node-based workflow UI experience
- AWS / GCP / Cloudflare / S3-compatible storage
- Telegram bot experience
What we need from you:
1. Links to 2–3 relevant projects
2. Your experience with SaaS, AI tools, or generation platforms
3. Your recommended MVP approach
4. Your availability
5. Your expected fixed price or monthly rate
6. Whether you can start with a short paid discovery / technical planning task
Project format:
- Remote
- Contract or full-time
- Lean MVP first
- Fixed milestones preferred
- Long-term cooperation possible if the first stage goes well
We are not looking for a large agency. We prefer a strong individual developer or a compact team that can move fast and build a practical MVP.
Key Responsibilities
Architect and implement enterprise-grade Lakehouse solutions using Databricks
Design and deliver scalable batch and real-time data pipelines using Apache Spark (PySpark/SQL)
Build ETL/ELT pipelines, incremental data loads, and metadata-driven ingestion frameworks
Implement and optimize Databricks components: Delta Lake, Delta Live Tables, Autoloader, Structured Streaming, and Workflows
Design large-scale data warehousing solutions with 3NF and dimensional modeling
Establish data governance, security, and data quality frameworks, including Unity Catalog
Lead ML lifecycle management using MLflow and drive AI use cases (RAG, AI/BI)
Manage cloud-native deployments on Microsoft Azure and integrate with enterprise systems (e.g., ServiceNow)
Drive CI/CD, DevOps practices, and performance optimization of Spark workloads
Provide technical leadership, mentor teams, and ensure successful delivery
Collaborate with stakeholders to translate business requirements into scalable solutions
Required Skills & Experience
10+ years in Data Engineering / Analytics / AI with strong delivery ownership
Deep expertise in Databricks ecosystem (Notebooks, Delta Lake, Workflows, AI/BI, Apps, Genie)
Strong hands-on experience with:
a. Apache Spark (performance tuning & scalability)
b. Python and SQL
Proven experience in:
a. Solution architecture and large-scale data platforms
b. Data warehousing and advanced data modeling
c. Batch and real-time processing systems
Experience with:
a. Azure Databricks and Azure data services
b. MLflow and MLOps practices
c. ServiceNow or enterprise integrations
Exposure to AI technologies (RAG, LLM-based solutions)
Strong stakeholder management and leadership skills
Certifications (Preferred)
Databricks certifications aligned to data engineering and AI tracks, such as:
a. Databricks Certified Data Engineer Associate (validates foundational ETL, Spark, and Lakehouse capabilities)
b. Databricks Certified Data Engineer Professional (advanced expertise in pipeline design, optimization, and governance)
Certifications in Databricks Machine Learning or Generative AI tracks (e.g., ML Associate / Professional) for AI-driven use cases
Relevant cloud certifications in Microsoft Azure or Amazon Web Services for platform deployment and architecture
We are looking for a QA Engineer with hands-on experience in testing Generative AI systems, LLMs, and RAG pipelines. This role goes beyond traditional QA and focuses on evaluating non-deterministic AI outputs, testing agentic workflows, and ensuring AI safety, accuracy, and reliability in enterprise-grade AI services.
You will work on API-driven AI services such as Intelligent Document Processing and AI Gateways, ensuring they meet enterprise standards before deployment.
Key Responsibilities
- Test and validate Generative AI applications, LLMs, and RAG-based systems
- Evaluate AI outputs for accuracy, groundedness, coherence, and hallucination
- Design and execute test strategies for multi-step agentic workflows
- Perform API and integration testing for AI services
- Build automated test pipelines using Python
- Create synthetic datasets for testing AI systems
- Conduct adversarial testing (prompt injection, safety, guardrails)
- Integrate AI testing into CI/CD pipelines
Must-Have Skills
- 5–7 years of experience in QA / Test Automation
- Hands-on experience testing Generative AI / LLM-based applications
- Strong programming skills in Python
- Experience with RAG pipelines
- Knowledge of LLM evaluation frameworks (RAGAS, TruLens, LangSmith or similar)
- Strong experience in API testing (Postman, REST Assured, etc.)
- Experience testing multi-agent workflows / agentic systems
- Understanding of hallucination, prompt injection, and AI safety concepts
Good-to-Have Skills
- Experience with GCP (Vertex AI) or Azure OpenAI
- SQL / NoSQL knowledge for data validation
- Experience in BFSI / Insurance domain
- Performance testing of APIs and AI systems
Additional Information
- Candidates without hands-on experience in testing Generative AI / LLM systems will not be considered
- Immediate to 45 days notice period preferred
Location: Bangalore
Experience: 2–5 years
Type: Full-time | On-site
Open Roles: 2
Start: Immediate
Why this role exists
Most systems work at a low scale.
Very few survive real production load, complex workflows, and enterprise edge cases.
We are building a platform that must:
- Scale from 500K → 20M+ interactions/month
- Handle complex insurance workflows reliably
- Become easier to deploy as it grows, not harder
This role exists to build the backend foundation that makes this possible.
What you’ll do
You will not just write services.
You will design and own core platform systems.
1. Scale the platform without breaking architecture
- Scale from 50K → 2M+ interactions/month
- Ensure:
- High availability
- Low latency
- Fault tolerance
- Avoid large rewrites — build systems that evolve cleanly
2. Build the workflow automation (WA) engine
- Design a flexible system with:
- States
- Stages
- Cohorts
- Dynamic workflows
- Ensure workflows:
- Handle edge cases reliably
- Can be configured easily
- Move from:
- Hardcoded flows → configurable execution engine
3. Build the insurance-specific data layer
- Design data models for:
- Policy states
- Claim workflows
- Consent tracking
- Ensure the system works across:
- Multiple insurers
- Multiple use cases
- Build a platform-first data layer, not use-case-specific hacks
4. Make deployment and setup simple
- Ensure workflows and data models are:
- Easy to configure
- Easy to launch
- Reduce friction for:
- Product teams
- Deployment teams
5. Create a compounding data advantage
- Build a data layer that:
- Improves with every deployment
- Captures structured signals
- Ensure data becomes a long-term edge, not just storage
6. Own production reliability
- Participate in on-call rotation across 3 engineers
- Ensure:
- Incidents are handled quickly
- Root causes are fixed permanently
- Build systems where reliability is shared, not individual
What success looks like
- Platform scales to 2M+ interactions/month smoothly
- Workflow engine supports complex, dynamic use cases
- Data layer enables fast deployment across accounts
- Edge cases are handled without constant firefighting
- System becomes easier to use as it grows
- Production issues are rare and predictable
Who you are
- You have 2-5 years of backend engineering experience
- You have built:
- Scalable systems
- Distributed services
- You think in:
- Systems
- Data models
- Trade-offs
- You are comfortable owning:
- Architecture
- Production systems
What will make you stand out
- Experience building:
- Workflow engines
- State machines
- Data-heavy platforms
- Strong understanding of:
- System design
- Distributed systems
- Failure handling
- Experience working in:
- High-scale production environments
Why join
- You will build the core backend of an AI platform
- Your work directly impacts:
- Scale
- Reliability
- Product capability
- You will design systems that move from:
- Use-case specific → platform-level infrastructure
What this role is not
- Not just API development
- Not limited to feature-level work
- Not disconnected from production realities
What this role is
- A system architect
- A builder of scalable platforms
- A driver of long-term technical advantage
One question to self-evaluate
Can you design backend systems that scale, handle edge cases, and become easier to use as they grow?
Role Overview
As a Senior QA Engineer (Automation), you will drive product quality across all stages of development and deployment. You’ll take complete ownership of defining QA strategy, implementing robust automation frameworks, and ensuring every release meets our high standards of reliability, performance, and user delight.
This role is ideal for someone who thrives in a fast-paced startup environment, loves solving problems, and is passionate about building scalable and flawless user experiences.
Key Responsibilities
Define & Execute QA Strategy:
Develop and implement test strategies covering functional, regression, integration, and exploratory testing.
Automation Leadership:
Build and maintain scalable automation frameworks integrated into CI/CD pipelines to improve speed, reliability, and test coverage.
Collaborate Early:
Partner closely with Product and Engineering teams to ensure testable requirements and early QA involvement in the development cycle.
Release Readiness:
Own end-to-end release validation, including regression testing, defect triage, and final sign-off on product quality.
Quality Metrics & Reporting:
Define, track, and communicate key QA metrics (defect leakage, build health, test coverage) to drive data-backed improvements.
Performance & Security Testing:
Conduct basic performance and security validation to ensure system robustness.
Mentorship & Best Practices:
Guide junior QA engineers, promoting test design excellence, automation best practices, and continuous improvement.
Process Optimization:
Continuously enhance QA processes through retrospectives, automation expansion, and shift-left testing principles.
Documentation:
Maintain comprehensive documentation of test cases, strategies, bug reports, and quality incident postmortems.
What We’re Looking For
- 5 - 10 years of QA experience in product-based startups, ideally in B2C environments.
- Proven expertise in test automation (e.g., Selenium, Appium, Cypress, Playwright, etc.).
- Strong understanding of CI/CD pipelines, API testing, and test design principles.
- Hands-on experience with manual and exploratory testing.
- Ability to handle multiple projects independently and drive them to completion.
- High sense of ownership, accountability, and attention to detail.
- Excellent communication and collaboration skills.
- Willingness to work from the office (HSR Layout, Bangalore).
Why Join Us
- Opportunity to impact millions of users in India’s devotional and spiritual space.
- Work with a talented, passionate, and mission-driven team.
- High ownership role with end-to-end accountability.
- Fast-paced, collaborative, and growth-oriented culture.
Build seamless, trusted experiences that bring faith and technology together.
Backend Developer
📍 Noida | 🕐 Full-Time | 🧭 Experience: 2–3 Years
The Mission
We aren't building traditional backend systems — we're powering the infrastructure behind Agentic Intelligence. TestMu AI is building the world's first AI-native platform where backend systems don't just serve requests, they enable autonomous decision-making, execution, and scale.
The name "TestMu" comes from our community conference. Our users and team aren't an audience — they're the heartbeat of what we build. We believe AI augments human potential. It doesn't replace it.
You'll be building the core backend systems that power AI-driven workflows — ensuring high performance, scalability, and reliability at every layer.
The Pillars of Impact
🚀 1. Core Backend & System Architecture (50%)
- Build and scale high-performance backend services and APIs
- Design efficient database schemas, query optimization, and data flows
- Write clean, logical, production-grade code (Python, Golang, or similar)
- Own system performance — latency, throughput, and reliability
⚙️ 2. Backend for AI Systems (30%)
- Develop backend systems supporting AI agents and autonomous workflows
- Handle large-scale data processing, async tasks, and event-driven systems
- Integrate backend infrastructure seamlessly with AI/ML components
🧠 3. Scalability & Distributed Systems (20%)
- Contribute to microservices architecture and service decomposition
- Build fault-tolerant, highly available distributed systems
- Optimize systems for high concurrency and real-time execution
Your Engineering Stack
Tech/ToolsPython / GolangBuilding core backend services and logic-heavy systemsAWS / GCPDeploying and scaling distributed backend infrastructureKafka / RabbitMQHandling asynchronous processing and event-driven workflows
The Bar
SignalCore Backend Experience2–3 years of hands-on experience building APIs, backend systems, and scalable servicesProblem-Solving AbilityStrong fundamentals in data structures, algorithms, and logical thinkingSystem Design UnderstandingAbility to design scalable backend systems with clear architectural thinkingOwnership & ExecutionExperience owning backend features end-to-end in a fast-paced environment
The Interview Loop · Screening for the Top 1%
RoundsRound I · Recruiter ScreenEvaluation of backend experience, problem-solving approach, and project depthRound II · Hiring ManagerDeep dive into backend projects, APIs, databases, and system design thinkingRound III · Domain LeadLive coding + backend problem-solving + discussion on scalability and distributed systemsFinal · LeadershipCulture fit, ownership mindset, and ability to operate in a high-velocity startup environment
Your Growth Trajectory TestMu AI is a high-growth environment where we promote based on complexity solved, not years of tenure. As a Backend Developer, you have a massive runway to scale from an Individual Contributor (IC) into a core Engineering Leadership role, working alongside pioneers in agentic intelligence.
Perks of the Future
- Health & Wellness: 100% premium covered insurance for you + family (spouse, kids, parents) with annual check-ups.
- Fuel for Innovation: Fresh, daily gourmet lunch and dinner served at our Noida HQ.
- Seamless Transit: Safe, GPS-enabled cab facilities for eligible shifts (home-office-home).
- POD Culture: Dedicated quarterly budgets for team-building, offsites, and collaborative celebrations.
Role Overview:
As a Backend Developer at LearnTube.ai, you will ship the backbone that powers 2.3 million learners in 64 countries—owning APIs that crunch 1 billion learning events & the AI that supports it with <200 ms latency.
Skip the wait and get noticed faster by completing our AI-powered screening. Click this link to start your quick interview. It only takes a few minutes and could be your shortcut to landing the job! -https://bit.ly/LT_Python
What You'll Do:
At LearnTube, we’re pushing the boundaries of Generative AI to revolutionize how the world learns. As a Backend Engineer, your roles and responsibilities will include:
- Ship Micro-services – Build FastAPI services that handle ≈ 800 req/s today and will triple within a year (sub-200 ms p95).
- Power Real-Time Learning – Drive the quiz-scoring & AI-tutor engines that crunch millions of events daily.
- Design for Scale & Safety – Model data (Postgres, Mongo, Redis, SQS) and craft modular, secure back-end components from scratch.
- Deploy Globally – Roll out Dockerised services behind NGINX on AWS (EC2, S3, SQS) and GCP (GKE) via Kubernetes.
- Automate Releases – GitLab CI/CD + blue-green / canary = multiple safe prod deploys each week.
- Own Reliability – Instrument with Prometheus / Grafana, chase 99.9 % uptime, trim infra spend.
- Expose Gen-AI at Scale – Publish LLM inference & vector-search endpoints in partnership with the AI team.
- Ship Fast, Learn Fast – Work with founders, PMs, and designers in weekly ship rooms; take a feature from Figma to prod in < 2 weeks.
What makes you a great fit?
Must-Haves:
- 3+ yrs Python back-end experience (FastAPI)
- Strong with Docker & container orchestration
- Hands-on with GitLab CI/CD, AWS (EC2, S3, SQS) or GCP (GKE / Compute) in production
- SQL/NoSQL (Postgres, MongoDB) + You’ve built systems from scratch & have solid system-design fundamentals
Nice-to-Haves
- k8s at scale, Terraform,
- Experience with AI/ML inference services (LLMs, vector DBs)
- Go / Rust for high-perf services
- Observability: Prometheus, Grafana, OpenTelemetry
About Us:
At LearnTube, we’re on a mission to make learning accessible, affordable, and engaging for millions of learners globally. Using Generative AI, we transform scattered internet content into dynamic, goal-driven courses with:
- AI-powered tutors that teach live, solve doubts in real time, and provide instant feedback.
- Seamless delivery through WhatsApp, mobile apps, and the web, with over 1.4 million learners across 64 countries.
Meet the Founders:
LearnTube was founded by Shronit Ladhani and Gargi Ruparelia, who bring deep expertise in product development and ed-tech innovation. Shronit, a TEDx speaker, is an advocate for disrupting traditional learning, while Gargi’s focus on scalable AI solutions drives our mission to build an AI-first company that empowers learners to achieve career outcomes. We’re proud to be recognised by Google as a Top 20 AI Startup and are part of their 2024 Startups Accelerator: AI First Program, giving us access to cutting-edge technology, credits, and mentorship from industry leaders.
Why Work With Us?
At LearnTube, we believe in creating a work environment that’s as transformative as the products we build. Here’s why this role is an incredible opportunity:
- Cutting-Edge Technology: You’ll work on state-of-the-art generative AI applications, leveraging the latest advancements in LLMs, multimodal AI, and real-time systems.
- Autonomy and Ownership: Experience unparalleled flexibility and independence in a role where you’ll own high-impact projects from ideation to deployment.
- Rapid Growth: Accelerate your career by working on impactful projects that pack three years of learning and growth into one.
- Founder and Advisor Access: Collaborate directly with founders and industry experts, including the CTO of Inflection AI, to build transformative solutions.
- Team Culture: Join a close-knit team of high-performing engineers and innovators, where every voice matters, and Monday morning meetings are something to look forward to.
- Mission-Driven Impact: Be part of a company that’s redefining education for millions of learners and making AI accessible to everyone.
About us:
Optimo Capital is a newly established NBFC founded by Prashant Pitti, who is also a co-founder of EaseMyTrip (a billion-dollar listed startup that grew profitably without any funding).
Our mission is to serve the underserved MSME businesses in India with their credit needs. With less than 15% of MSMEs having access to formal credit, we aim to bridge this credit gap by employing a phygital model (physical branches + digital decision-making).
As a technology and data-first company, tech and data enthusiasts play a crucial role in building the infrastructure at Optimo, and help the company thrive.
What we offer:
Join our dynamic startup team as a Full Stack Developer and play a crucial role in web application & API developments, customer journeys, tech integrations, building robust credit risk and underwriting decision engines, cloud infrastructure, and more.
This is an exceptional opportunity to learn, grow, and make a significant impact in a fast-paced startup environment. We believe that the freedom and accountability to make decisions in technology, software, system architecture, and other design aspects bring out the best in you and help us build the best for the company.
This environment will not only offer you a steep learning curve but also allow you to experience the direct impact of your technological contributions. In addition, we offer industry-standard compensation.
What we look for:
We are looking for individuals with strong proficiency in Python, React, and Django. Any experience in a startup, front-end/back-end development, tech-integrations, or open-source contributions will be highly valued.
We focus not only on your skills but also on your attitude and your hunger to learn, grow, lead, and thrive—both as an individual and as part of a team. We encourage taking on challenges, learning new technologies, understanding, building, and implementing them within a short period of time. Your willingness to put in the extra effort to build the best systems will be highly appreciated.
Skills:
Excellent proficiency with the ability to write clean, robust, production-level code. Experience in designing, developing, and maintaining web apps and rule engines is required. At least one year of experience as a developer in any engineering / software-based role is required.
1) Frontend Development
- JavaScript: Strong proficiency in JavaScript, including ES6+ features
- React: Experience building complex user interfaces using React and its ecosystem (e.g., Redux, Context API)
- HTML/CSS: Solid understanding of HTML5 and CSS3 for creating responsive and accessible web pages
2) Backend Development
- Python: Proficiency in Python for server-side development
- Django: Working knowledge in Django, Django Rest Framework
- Flask (or FastAPI): Experience building RESTful APIs using Flask or FastAPI is a plus
3) REST APIs: A strong understanding of APIs is required, along with prior experience in API development or integration. Writing REST APIs from scratch is highly desirable.
4) Databases: A basic understanding of both relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., MongoDB) databases is required. Basic knowledge of database management, optimization, and query design is expected.
5) Git: Proficiency in Git is essential, with experience in branching, merging, pull requests, and conflict resolution. Experience in collaborative projects using Git is highly valued.
6) Good to have:
- Basic understanding of data pipelines/ETLs, dashboarding, and AWS is beneficial but not required.
- Experience in building WhatsApp chat/flow journeys, Working with maps, and creating data layers (e.g., Google Maps API, Mapbox) is highly valued. (not mandatory)
What you'll be working on:
- Design and build systems focused on creating straight-through processes for lending (specifically property loans), from customer onboarding to disbursement, with an emphasis on accurate and efficient credit and risk assessment.
- Take projects from ideation to production, including web applications, rule engines, third-party API integrations, and other technology developments.
- Take initiative and ownership of engineering projects, ensuring a seamless user experience.
- Manage and coordinate the cloud infrastructure and application setup, including source code repositories, CI/CD pipelines, servers, and deployments.
Other Requirements:
- Availability for full-time work in Bangalore. Advantage for immediate joiners.
- Strong passion for technology and problem-solving.
- Ability to translate requirements into intuitive interfaces is highly appreciated
- At least 1 year of industry experience in a technical role specifically as a developer is a must.
- Self-motivated and capable of working both independently and collaboratively.
If you are ready to embark on an exciting journey of growth, learning, and innovation, apply now to join our pioneering team in Bangalore.
About Us
We are a company where the ‘HOW’ of building software is just as important as the ‘WHAT.’ We partner with large organizations to modernize legacy codebases and collaborate with startups to launch MVPs, scale, or act as extensions of their teams. Guided by Software Craftsmanship values and eXtreme Programming Practices, we deliver high-quality, reliable software solutions tailored to our clients' needs.
We thrive to:
- Bring our clients' dreams to life by being their trusted engineering partners, crafting innovative software solutions.
- Challenge offshore development stereotypes by delivering exceptional quality, and proving the value of craftsmanship.
- Empower clients to deliver value quickly and frequently to their end users.
- Ensure long-term success for our clients by building reliable, sustainable, and impactful solutions.
- Raise the bar of software craft by setting a new standard for the community.
Job Description
This is a remote position.
Our Core Values
- Quality with Pragmatism: We aim for excellence with a focus on practical solutions.
- Extreme Ownership: We own our work and its outcomes fully.
- Proactive Collaboration: Teamwork elevates us all.
- Pursuit of Mastery: Continuous growth drives us.
- Effective Feedback: Honest, constructive feedback fosters improvement.
- Client Success: Our clients’ success is our success.
Experience Level
This role is ideal for engineers with 6+ years of hands-on software development experience, particularly in Python and ReactJs at scale.
Role Overview
If you’re a Software Craftsperson who takes pride in clean, test-driven code and believes in Extreme Programming principles, we’d love to meet you. At Incubyte, we’re a DevOps organization where developers own the entire release cycle, meaning you’ll get hands-on experience across programming, cloud infrastructure, client communication, and everything in between. Ready to level up your craft and join a team that’s as quality-obsessed as you are? Read on!
What You'll Do
- Write Tests First: Start by writing tests to ensure code quality
- Clean Code: Produce self-explanatory, clean code with predictable results
- Frequent Releases: Make frequent, small releases
- Pair Programming: Work in pairs for better results
- Peer Reviews: Conduct peer code reviews for continuous improvement
- Product Team: Collaborate in a product team to build and rapidly roll out new features and fixes
- Full Stack Ownership: Handle everything from the front end to the back end, including infrastructure and DevOps pipelines
- Never Stop Learning: Commit to continuous learning and improvement
- AI-First Development Focus
- Leverage AI tools like GitHub Copilot, Cursor, Augment, Claude Code, etc., to accelerate development and automate repetitive tasks.
- Use AI to detect potential bugs, code smells, and performance bottlenecks early in the development process.
- Apply prompt engineering techniques to get the best results from AI coding assistants.
- Evaluate AI generated code/tools for correctness, performance, and security before merging.
- Continuously explore, stay ahead by experimenting and integrating new AI powered tools and workflows as they emerge.
Requirements
What We're Looking For
- Proficiency in some or all of the following: ReactJS, JavaScript, Object Oriented Programming in JS
- 6+ years of Object-Oriented Programming with Python or equivalent
- 6+ years of experience working with relational (SQL) databases
- 6+ years of experience using Git to contribute code as part of a team of Software Craftspeople
- AI Skills & Mindset
- Power user of AI assisted coding tools (e.g., GitHub Copilot, Cursor, Augment, Claude Code).
- Strong prompt engineering skills to effectively guide AI in crafting relevant, high-quality code.
- Ability to critically evaluate AI generated code for logic, maintainability, performance, and security.
- Curiosity and adaptability to quickly learn and apply new AI tools and workflows.
- AI evaluation mindset balancing AI speed with human judgment for robust solutions.
Benefits
What We Offer
- Dedicated Learning & Development Budget: Fuel your growth with a budget dedicated solely to learning.
- Conference Talks Sponsorship: Amplify your voice! If you’re speaking at a conference, we’ll fully sponsor and support your talk.
- Cutting-Edge Projects: Work on exciting projects with the latest AI technologies
- Employee-Friendly Leave Policy: Recharge with ample leave options designed for a healthy work-life balance.
- Comprehensive Medical & Term Insurance: Full coverage for you and your family’s peace of mind.
- And More: Extra perks to support your well-being and professional growth.
Work Environment
- Remote-First Culture: At Incubyte, we thrive on a culture of structured flexibility — while you have control over where and how you work, everyone commits to a consistent rhythm that supports their team during core working hours for smooth collaboration and timely project delivery. By striking the perfect balance between freedom and responsibility, we enable ourselves to deliver high-quality standards our customers recognize us by. With asynchronous tools and push for active participation, we foster a vibrant, hands-on environment where each team member’s engagement and contributions drive impactful results.
- Work-In-Person: Twice a year, we come together for two-week sprints to collaborate in person, foster stronger team bonds, and align on goals. Additionally, we host an annual retreat to recharge and connect as a team. All travel expenses are covered.
- Proactive Collaboration: Collaboration is central to our work. Through daily pair programming sessions, we focus on mentorship, continuous learning, and shared problem-solving. This hands-on approach keeps us innovative and aligned as a team.
Incubyte is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
POSITION: Sr. QA Engineer
We are looking for a seasoned and results-driven Senior QA Engineer with 5 to 7 years of
experience in Manual and Automation Testing. The candidate should have deep expertise in QA
processes, strong automation skills using Python or equivalent, and the ability to lead quality
initiatives for our core product suite. You won't just be finding bugs — you will be building a
resilient quality ecosystem that leverages modern tools.
What You’ll Be Doing:
● Understand business requirements and convert them into test scenarios and test cases
● Perform Manual Testing including Functional, Regression, Integration, & System Testing
● Develop, maintain, and execute Automation Scripts using Python
● Identify, report, and track defects using defect management tools
● Work closely with Developers, Product Managers, and QA team members
● Lead requirement analysis, test planning, and test case reviews
● Contribute to improving QA processes and automation coverage
● Participate in sprint planning, retrospectives, and cross-functional reviews
● Identify, report, and track defects using defect management tools; manage triage and
resolution with development teams
● Catch edge cases before they become production issues
● Co-ordination is release processes
Automation Skills:
● Maintain, and extend robust Automation Frameworks ( PyTest / Selenium) for UI and
backend services as well as design patterns, and CI/CD integration
● Monitor nightly automation runs, troubleshoot defects
● Ability to design, extend, debug, and maintain test frameworks independently
API and Database Testing:
● Perform contract testing and functional validation of REST APIs using Postman or similar
tools.
● Write complex SQL queries to validate data pipelines, migrations
Qualifications:
● Strong understanding of Software Testing concepts
● Experience in writing Test Cases and Test Scenarios
● Experience in Defect Tracking tools (JIRA, etc.)
● Experience in CI/CD tools (Jenkins, GitHub Actions, GitLab CI)
● Experience in Agile / Scrum methodology
● Strong analytical and problem-solving skills with a 'break-it' mentality
● Good communication skills — ability to articulate quality risks to non-technical
stakeholders
● Self-motivated, quick learner, and proactive in driving quality culture
● Strong team player with empathy to help developers 'fix-it'
● Exposure to Agentic AI testing frameworks
REPORTING: This position will report to Sr. Project Manager or as assigned by Management.
EMPLOYMENT TYPE: Full-Time, Permanent
LOCATION: Jaipur (Work from Office)
SHIFT TIMINGS: 10:00 AM - 07:00 PM IST
WHO WE ARE:
SalesIntel is an agentic pipeline generation platform that helps go-to-market teams focus on
accounts that are ready to buy and the buyers who matter most. We enable thousands of users
by turning buying signals into actionable insights that drive revenue.
For more information, please visit – www.salesintel.io
WHAT WE OFFER:
SalesIntel’s workplace is all about diversity. Different countries and cultures are represented in
our workforce. We are growing at a fast pace and our work environment is constantly evolving
with changing times. We motivate our team to better themselves by offering all the good stuff
you’d expect like Holidays, Paid Leaves, Bonuses, Incentives, Medical Policy and company paid
Training Programs.
SalesIntel is an Equal Opportunity Employer. We prohibit discrimination and harassment of any
type and offer equal employment opportunities to employees and applicants without regard to
race, color, religion, sex, sexual orientation, gender identity or expression, pregnancy, age,
national origin, disability status, genetic information, protected veteran status, or any other
characteristic protected by law.
Senior Data Engineer (Databricks, BigQuery, Snowflake)
Experience: 8+ Years in Data Engineering
Location: Remote | Onsite (Noida, Gurgaon, Pune, Nagpur, Jaipur, Gandhinagar)
Budget: Open / Competitive
Job Summary:
We are seeking a highly skilled Senior Data Engineer to design, build, and optimize scalable data solutions that support advanced analytics and machine learning initiatives. You will lead the development of reliable, high-performance data systems and collaborate closely with data scientists to enable data-driven decision-making.
In this role, we expect a forward-thinking professional who utilizes AI-augmented development tools (such as Cursor, Windsurf, or GitHub Copilot) to increase engineering velocity and maintain high code standards in a modern enterprise environment.
Key Responsibilities:
- Scalable Pipelines: Design, develop, and optimize end-to-end data pipelines using SQL, Python, and PySpark.
- ETL/ELT Workflows: Build and maintain workflows to transform raw data into structured, analytics-ready datasets.
- ML Integration: Partner with data scientists to deploy and integrate machine learning models into production environments.
- Cloud Infrastructure: Manage and scale data infrastructure within AWS and Azure ecosystems.
- Data Warehousing: Utilize Databricks and Snowflake for big data processing and enterprise warehousing.
- Automation & IaC: Implement workflow orchestration using Apache Airflow and manage infrastructure as code via Terraform.
- Performance Tuning: Optimize data storage, retrieval, and system performance across data warehouse platforms.
- Governance & Compliance: Ensure data quality and security using tools like Unity Catalog or Hive Metastore.
- AI-Augmented Development: Integrate AI tools and LLM APIs into data pipelines and use AI IDEs to streamline debugging and documentation.
Technical Requirements:
- Experience: 8+ years of core Data Engineering experience in large-scale enterprise or consulting environments.
- Languages: Expert proficiency in SQL and Python for complex data processing.
- Big Data: Hands-on experience with PySpark and large-scale distributed computing.
- Architecture: Strong understanding of ETL frameworks, data pipeline architecture, and data warehousing best practices.
- Cloud Platforms: Deep working knowledge of AWS and Azure.
- Modern Tooling: Proven experience with Databricks, Snowflake, and Apache Airflow.
- Infrastructure: Experience with Terraform or similar IaC tools for scalable deployments.
- AI Competency: Proficiency in using AI IDEs (Cursor/Windsurf) and integrating AI/ML models into production data flows.
Preferred Qualifications:
- Exposure to data governance and cataloging tools (e.g., Unity Catalog).
- Knowledge of performance tuning for massive-scale big data systems.
- Familiarity with real-time data processing frameworks.
- Experience in digital transformation and sustainability-focused data projects.
Job Summary:
We are looking for a highly motivated and skilled Software Engineer to join our team.
This role requires a strong understanding of the software development lifecycle, proficiency in coding, and excellent communication skills.
The ideal candidate will be responsible for production monitoring, resolving minor technical issues, collecting client information, providing effective client interactions, and supporting our development team in resolving challenges
Key Responsibilities:
Client Interaction: Serve as the primary point of contact for client queries, provide excellent communication, and ensure timely issue resolution.
Issue Resolution: Troubleshoot and resolve minor issues related to software applications in a timely manner.
Information Collection: Gather detailed technical information from clients, understand the problem context, and relay the information to the development leads for further action.
Collaboration: Work closely with development leads and cross-functional teams to provide timely support and resolution for customer issues.
Documentation: Document client issues, actions taken, and resolutions for future reference and continuous improvement.
Software Development Lifecycle: Be involved in maintaining, supporting, and optimizing software through its lifecycle, including bug fixes and enhancements.
Automating Redundant Support Tasks: (good to have) Should be able to automate the redundant repetitive tasks Required Skills and Qualifications:
Mandatory Skills:
Expertise in at least one Object Oriented Programming language (Python, Java, C#, C++, Reactjs, Nodejs).
Good knowledge on Data Structure and their correct usage.
Open to learn any new software development skill if needed for the project.
Alignment and utilization of the core enterprise technology stacks and integration capabilities throughout the transition states.
Participate in planning, definition, and high-level design of the solution and exploration of solution alternatives.
Define, explore, and support the implementation of enablers to evolve solution intent, working directly with Agile teams to implement them.
Good knowledge on the implications.
Experience architecting & estimating deep technical custom solutions & integrations.
Added advantage:
You have developed software using web technologies.
You have handled a project from start to end.
You have worked in an Agile Development project and have experience of writing and estimating User Stories
Communication Skills: Excellent verbal and written communication skills, with the ability to clearly explain technical issues to non-technical clients.
Client-Facing Experience: Strong ability to interact with clients, gather necessary information, and ensure a high level of customer satisfaction.
Problem-Solving: Quick-thinking and proactive in resolving minor issues, with a focus on providing excellent user experience.
Team Collaboration: Ability to collaborate with development leads, engineering teams, and other stakeholders to escalate complex issues or gather additional technical support when required.
Preferred Skills:
Familiarity with Cloud Platforms and Cyber Security tools: Knowledge of cloud computing platforms and services (AWS, Azure, Google Cloud) and Cortex XSOAR, SIEM, SOAR, XDR tools is a plus.
Automation and Scripting: Experience with automating processes or writing scripts to support issue resolution is an advantage.
Role & Responsibilities
We are looking for a hands-on Camunda Developer with strong experience in workflow orchestration and backend development. The ideal candidate should be able to design, build, and optimize end-to-end business processes using Camunda (preferably Camunda 8) and work closely with engineering and business teams to implement scalable and resilient workflows.
Key Responsibilities:
- Translate business requirements into BPMN workflows using Camunda (preferably Camunda 8)
- Design and implement end-to-end process orchestration across systems
- Build and manage service integrations (REST APIs, event-driven systems)
- Develop and maintain Zeebe workers / microservices (Python)
- Collaborate with stakeholders to refine workflows and handle edge cases
- Implement error handling, retries, and compensation mechanisms
- Analyse and improve workflows for scalability, reliability, and performance
- Ensure data consistency and idempotent process execution
- Work with cross-functional teams including data and analytics for process observability
Ideal Candidate
- Strong Senior Camunda Developer / Workflow Orchestration Engineer Profiles
- Mandatory (Experience 1) – Must have 4+years of hands-on experience in backend development and workflow systems, demonstrable through production-grade work on business process automation or backend service development.
- Mandatory (Experience 2) – Must have strong hands-on experience with Camunda and BPMN 2.0, including designing, building, and deploying end-to-end business process workflows in production.
- Mandatory (Experience 3) – Must have hands-on experience with Zeebe workers and the Camunda 8 stack, built and maintained as part of real orchestration systems.
- Mandatory (Experience 4) – Must have strong production-level coding skills in Python, used for building and maintaining Zeebe workers and microservices.
- Mandatory (Experience 5) – Must have experience designing and working within microservices architecture and distributed systems, with clear understanding of service decomposition, inter-service communication, and distributed system failure modes.
- Mandatory (Experience 6) – Must have hands-on experience building and consuming REST APIs and working with event-driven systems (message brokers, pub/sub, event streams).
- Mandatory (Skills) – Must have strong debugging and problem-solving skills in production workflow environments, with specific examples of resolving complex issues such as stuck processes, race conditions, or data inconsistency bugs.
- Preferred (Experience 1) – Exposure to cloud platforms (AWS / GCP / Azure) and experience with data platforms (e.g., Snowflake).
- Preferred (Experience 2) – Understanding of finance-related workflows (billing, reconciliation, etc.).
Responsible for developing, enhancing, modifying, and maintaining chatbot applications in the Global Markets environment. The role involves designing, coding, testing, debugging, and documenting conversational AI solutions, along with supporting activities aligned to the corporate systems architecture.
You will work closely with business partners to understand requirements, analyze data, and deliver optimal, market-ready conversational AI and automation solutions.
Key Responsibilities
- Design, develop, test, debug, and maintain chatbot and virtual agent applications
- Collaborate with business stakeholders to define and translate requirements into technical solutions
- Analyze large volumes of conversational data to improve chatbot accuracy and performance
- Develop automation workflows for data handling and refinement
- Train and optimize chatbots using historical chat logs and user-generated content
- Ensure solutions align with enterprise architecture and best practices
- Document solutions, workflows, and technical designs clearly
Required Skills
- Hands-on experience in developing virtual agents (chatbots/voicebots) and Natural Language Processing (NLP)
- Experience with one or more AI/NLP platforms such as:
- Dialogflow, Amazon Lex, Alexa, Rasa, LUIS, Kore.AI
- Microsoft Bot Framework, IBM Watson, Wit.ai, Salesforce Einstein, Converse.ai
- Strong programming knowledge in Python, JavaScript, or Node.js
- Experience training chatbots using historical conversations or large-scale text datasets
- Practical knowledge of:
- Formal syntax and semantics
- Corpus analysis
- Dialogue management
- Strong written communication skills
- Strong problem-solving ability and willingness to learn emerging technologies
Nice-to-Have Skills
- Understanding of conversational UI and voice-based processing (Text-to-Speech, Speech-to-Text)
- Experience building voice apps for Amazon Alexa or Google Home
- Experience with Test-Driven Development (TDD) and Agile methodologies
- Ability to design and implement end-to-end pipelines for AI-based conversational applications
- Experience in text mining, hypothesis generation, and historical data analysis
- Strong knowledge of regular expressions for data cleaning and preprocessing
- Understanding of API integrations, SSO, and token-based authentication
- Experience writing unit test cases as per project standards
- Knowledge of HTTP, REST APIs, sockets, and web services
- Ability to perform keyword and topic extraction from chat logs
- Experience training and tuning topic modeling algorithms such as LDA and NMF
- Understanding of classical Machine Learning algorithms and appropriate evaluation metrics
- Experience with NLP frameworks such as NLTK and spaCy
Strong Senior Camunda Developer / Workflow Orchestration Engineer Profiles
Mandatory (Experience 1) – Must have 4+years of hands-on experience in backend development and workflow systems, demonstrable through production-grade work on business process automation or backend service development.
Mandatory (Experience 2) – Must have strong hands-on experience with Camunda and BPMN 2.0, including designing, building, and deploying end-to-end business process workflows in production.
Mandatory (Experience 3) – Must have hands-on experience with Zeebe workers and the Camunda 8 stack, built and maintained as part of real orchestration systems.
Mandatory (Experience 4) – Must have strong production-level coding skills in Python, used for building and maintaining Zeebe workers and microservices.
Mandatory (Experience 5) – Must have experience designing and working within microservices architecture and distributed systems, with clear understanding of service decomposition, inter-service communication, and distributed system failure modes.
Mandatory (Experience 6) – Must have hands-on experience building and consuming REST APIs and working with event-driven systems (message brokers, pub/sub, event streams).
Mandatory (Skills) – Must have strong debugging and problem-solving skills in production workflow environments, with specific examples of resolving complex issues such as stuck processes, race conditions, or data inconsistency bugs.
Preferred
About TIFIN
TIFIN is an AI-powered fintech platform backed by industry leaders including JP Morgan, Morningstar, Broadridge, Hamilton Lane, Franklin Templeton, and Motive Partners.
We are building engaging wealth experiences that help improve financial lives through AI and investment intelligence-powered personalization. Our mission is to transform the world of wealth by combining technology, behavioral design, and investment intelligence to deliver better outcomes for investors.
At TIFIN, we use software and APIs to create personalized financial experiences, and algorithmic intelligence to power smarter financial decisions.
Our Values: Go with your GUT
- Grow at the Edge – We are driven by personal growth, step outside our comfort zones, and strive to be the best version of ourselves with self-awareness and integrity.
- Understanding through Listening and Speaking the Truth – We value transparency, authenticity, and radical candor to create shared understanding and strong decision-making.
- Win for TeamWin – We take ownership, stay in our genius zones, and work together with energy and commitment to succeed as a team.
Role Overview
We are looking for a talented and hands-on Fullstack Engineer to join our growing engineering team in Mumbai. This role is ideal for someone who enjoys building across the stack — from scalable backend systems and intuitive web applications to mobile-first experiences.
You will work closely with cross-functional teams to build high-impact products that power modern wealth and financial experiences. This is a high-ownership role where your work will directly influence product quality, user experience, and platform scalability.
Key Responsibilities
- Design, build, and maintain scalable full-stack applications using Python on the backend and React.js on the frontend.
- Develop and ship mobile features using Flutter, ensuring a seamless cross-platform experience across iOS and Android.
- Collaborate with Product, Design, Data Science, and Engineering teams to translate business requirements into robust technical solutions.
- Build and optimize RESTful APIs, backend services, and reusable frontend components.
- Ensure high standards of performance, security, scalability, and reliability across applications.
- Take end-to-end ownership of features from design and development to testing, deployment, and monitoring.
- Participate in code reviews, architecture discussions, sprint planning, and engineering best practices.
- Contribute to internal documentation, development processes, and knowledge sharing across teams.
Required Skills & Qualifications
- 5+ years of experience in fullstack software development.
- Strong hands-on experience with Python and backend frameworks such as FastAPI, Django, or Flask.
- Solid experience with React.js, including hooks, component-based architecture, state management, and performance optimization.
- Practical hands-on experience building mobile applications using Flutter.
- Experience working with REST APIs, microservices, and modern frontend/backend integration patterns.
- Good understanding of relational and/or NoSQL databases such as PostgreSQL, MySQL, MongoDB, or Redis.
- Familiarity with cloud platforms such as AWS, GCP, or Azure.
- Experience with Docker, CI/CD pipelines, Git, and agile development workflows.
- Strong grasp of software engineering fundamentals, system design, and clean coding practices.
Good to Have
- Experience in fintech, wealth management, or financial services.
- Exposure to GraphQL, WebSockets, or real-time applications.
- Familiarity with AI/ML integrations or experience working closely with Data Science teams.
- Published apps or prior experience delivering production-grade mobile applications.
- Startup experience or experience working in high-growth product environments.
What We Offer
- Opportunity to work at the intersection of AI, fintech, and investment intelligence.
- A collaborative and high-ownership work culture.
- Competitive compensation and performance-linked incentives.
- Exposure to a global fintech ecosystem backed by top financial institutions and investors.
- The chance to build products that meaningfully improve financial lives.
TIFIN is an equal opportunity employer and values diverse perspectives, experiences, and backgrounds.
Python Developer (AI Integration Focus) – Junior :
Technical Requirements • Strong fundamentals in Python
* Experience building REST APIs
* Familiarity with:
o FastAPI / Flask / Django
o JSON, async programming basics
* Basic understanding of:
o LLM APIs (Azure OpenAI or equivalent)
o Prompt-based integrations
o Prompt Engineering
* Exposure to:
o Git and CI/CD pipelines
o Azure cloud fundamentals
* Basic database knowledge (SQL / NoSQL)
Core Engineering
* Advanced proficiency in Python
* Strong experience in:
o FastAPI / Django
o Async programming
o Event-driven architectures
o Microservices design
* Experience with:
o Azure/AWS cloud services
o Containerization (Docker, Kubernetes)
o API management / gateway design
AI & Agentic Capabilities
* Strong understanding of:
o LLM ecosystems (Claude, GPT, Gemini)
o LLM integration patterns
o Prompt engineering (few-shot, structured prompting, chaining)
o Tool invocation frameworks
* Experience with:
o Agentic frameworks and orchestration
o Workflow coordination across multiple AI services
o RAG architectures and patterns
o Vector databases
* Familiarity with:
o MCP connectors or contextual integration frameworks
Enterprise Integration
* Experience integrating AI layers with legacy enterprise systems
* Strong understanding of:
o API scalability
o Distributed system resilience
o High-availability architectures
About TIFIN:
TIFIN is a cutting-edge fintech platform transforming financial lives through AI and investment intelligence. Backed by industry leaders like JP Morgan and Morningstar, we're dedicated to personalizing wealth experiences, akin to how AI has revolutionized entertainment, but with the critical responsibility of delivering superior financial outcomes. We blend design and behavioral science with investment intelligence to create engaging software and APIs that empower better investor outcomes. Our mission is to recognize each individual's unique needs and goals, matching them to tailored financial advice and investments across our marketplace and various divisions.
Our Values: Go with your GUT
- Grow at the Edge: We embrace personal growth, stepping out of comfort zones, and putting ego aside to unlock genius. We operate with self-awareness and integrity, striving for excellence without excuses.
- Understanding through Listening and Speaking the Truth: Transparency is key. We communicate with radical candor, authenticity, and precision to foster shared understanding. We challenge ideas, but once a decision is made, we commit fully.
- Win for Teamwin: We thrive in our genius zones and take full ownership of our work. We inspire each other with energy and attitude, collaborating seamlessly to achieve collective success.
The Opportunity:
TIFIN is seeking a highly skilled and experienced LLM Engineer to join our innovative, remote-first team. This is a unique opportunity to shape the future of personalized financial experiences by leveraging your expertise in Large Language Models (LLMs) and Generative AI. As an early-stage startup, we're looking for an independent contributor and leader who is ready to build systems from the ground up and own outcomes.
What You'll Do:
- Collaborate closely with design and product teams to craft intuitive and engaging conversational AI experiences for our users.
- Work autonomously to deliver high-quality features, taking full ownership of project outcomes.
- Analyze and leverage our extensive data to create highly personalized experiences.
- Fine-tune LLMs with proprietary data to enhance model performance and relevance for our specific use cases.
- Implement various RAG (Retrieval-Augmented Generation) approaches to augment LLMs with relevant, up-to-date, and domain-specific information.
- Act as both a technical leader and an individual contributor, embodying the startup mentality of doing "whatever it takes" to succeed.
- Design and set up new workflows, systems, and tools from scratch, with support from the wider team.
What You'll Bring:
- 8+ years of professional experience in software engineering or a related field.
- Proven experience working with Large Language Models (LLMs) and Generative AI technologies.
- Demonstrated experience in building and deploying conversational bots.
- Hands-on experience with fine-tuning machine learning models, specifically LLMs.
- Proficiency in utilizing RAG-based approaches for LLM augmentation.
- A strong understanding of financial concepts and investing is a significant plus, though not strictly required.
- Ability to thrive in a fast-paced, startup environment, with a proactive and problem-solving mindset.
- Excellent communication skills and the ability to articulate complex technical concepts clearly.
Our Benefits Package Includes:
- Competitive salary with performance-linked variable compensation.
- Comprehensive medical insurance.
- Tax-saving benefits.
- Flexible Paid Time Off (PTO) policy and company-paid holidays.
- Generous Parental Leave: 6 months paid maternity leave, 2 weeks paid paternity leave.
TIFIN is an equal-opportunity employer, valuing diverse talents and perspectives. We encourage all qualified applicants to apply, regardless of background.
Profile - Databricks Developer
Experience- 5+ years
Location- Bangalore (On site)
PF & BGV is Mandatory
Job Description: -
* Design, build, and optimize data pipelines and ETL/ELT workflows using Databricks and
Apache Spark (PySpark).
* Develop scalable, high performance data solutions using Spark distributed processing.
* Lead engineering initiatives focused on automation, performance tuning, and platform
modernization.
* Implement and manage CI/CD pipelines using Git-based workflows and tools such as
GitHub Actions or Jenkins.
* Collaborate with cross-functional teams to translate business needs into technical
solutions.
* Ensure data quality, governance, and security across all processes.
* Troubleshoot and optimize Spark jobs, Databricks clusters, and workflows.
* Participate in code reviews and develop reusable engineering frameworks.
* Should have knowledge of utilizing AI tools to improve productivity and support daily
engineering activities.
* Strong knowledge and hands-on experience in Databricks Genie, including prompt
engineering, workspace usage, and automation.
Required Skills & Experience:
* 5+ years of experience in Data Engineering or related fields.
* Strong hands-on expertise in Databricks (notebooks, Delta Lake, job orchestration).
* Deep knowledge of Apache Spark (PySpark, Spark SQL, optimization techniques).
* Strong proficiency in Python for data processing, automation, and framework
development.
* Strong proficiency in SQL, including complex queries, performance tuning, and analytical
functions.
* Strong knowledge of Databricks Genie and leveraging it for engineering workflows.
* Strong experience with CI/CD and Git-based development workflows.
* Proficiency in data modeling and ETL/ELT pipeline design.
* Experience with automation frameworks and scheduling tools.
* Solid understanding of distributed systems and big data concepts
Job Title: AWS DevOps Engineer (MLOps)
We are looking for a highly skilled AWS + MLOps Engineer to design, build, and maintain scalable machine learning infrastructure and pipelines on AWS. The ideal candidate will have strong expertise in DevOps practices, cloud architecture, and MLOps frameworks, along with solid Python programming skills.
Job Description:
We are looking for an experienced AWS DevOps Engineer to join our team. You will be responsible for building and optimising CI/CD pipelines, managing AWS infrastructure, and automating tasks using AWS services.
Key Responsibilities:
- CI/CD Pipelines: Develop CI/CD pipelines with AWS CodePipeline, build ECR images, and update services on ECS.
- Automation: Create Python Lambda functions for automation and AWS Batch jobs for GPU processing.
- Infrastructure Management: Manage AWS infrastructure using Terraform (IAM roles, RDS, Lambda, etc.) and deploy microservices on EKS with ALB Ingress.
- Data Processing: Work with AWS Step Functions and EMR for data workflows; troubleshoot Spark jobs.
- Microservices: Deploy ATLAS on ECS, and create AWS Glue crawlers for data integration.
- Strong Experience with MLOps is an added advantage.
Required Skills:
- Experience with AWS services (ECS, ECR, Lambda, Step Functions, EMR, Glue, etc.).
- Proficient in CI/CD, Terraform, and Python scripting.
- Experience deploying EKS clusters and using AWS ALB for routing.
- Strong troubleshooting skills with EMR and Spark.
- Understanding/experience with AWS EMR, Sagemaker and Databricks would be added advantage
Preferred:
- AWS Certification (DevOps, Solutions Architect, etc.).
- Experience with microservices and GPU-intensive processes.
Role & Responsibilities
· Collect, clean, and analyze large structured and unstructured datasets from multiple internal and external sources
· Conduct thorough exploratory data analysis (EDA) to understand data distributions, relationships, outliers, and missing value patterns
· Profile and audit datasets to assess data quality, completeness, consistency, and fitness for modeling
· Investigate and document data lineage — understanding where data originates, how it flows, and how it transforms across systems
· Identify and resolve data anomalies, inconsistencies, and integrity issues in collaboration with data engineering teams
· Develop a deep understanding of the business domain and the underlying data that represents it — including what each field means, how it is captured, and what its limitations are
· Translate raw, messy, real-world data into clean, well-understood analytical datasets ready for modeling and reporting
· Apply statistical techniques such as correlation analysis, hypothesis testing, variance analysis, and distribution fitting to extract meaningful signals from noise
· Build and deploy machine learning models including regression, classification, clustering, NLP, and time-series analysis
· Design, evaluate, and analyze A/B experiments and controlled tests using causal inference techniques
· Develop data-driven recommendations backed by rigorous statistical reasoning
· Write clean, production-ready code in Python or R
· Collaborate with data engineers to build reliable data pipelines and feature stores
· Deploy and monitor ML models using MLOps best practices on cloud infrastructure
· Build dashboards and self-serve analytics tools to support stakeholder decision-making
Data Understanding & Analysis Skills
· Strong ability to interrogate unfamiliar datasets and quickly develop a working understanding of their structure, semantics, and quirks
· Experience working with messy, incomplete, or poorly documented real-world data
· Skilled in identifying hidden patterns, trends, seasonality, and anomalies through visual and statistical exploration
· Ability to ask the right questions about data — challenging assumptions, validating sources, and understanding the context in which data was collected
· Proficiency in data profiling, descriptive statistics, and summary reporting to communicate the shape and health of a dataset
· Experience creating data dictionaries, documentation, and data quality reports to support team-wide data understanding
· Comfort working across structured (relational tables), semi-structured (JSON, XML), and unstructured (text, logs, sensor streams) data formats
Technical Skills Required
· Proficiency in Python (pandas, NumPy, scikit-learn, PyTorch or TensorFlow) and/or R
· Strong SQL skills with hands-on experience in DB2 and SQL Server
· Experience with Databricks for large-scale data processing, feature engineering, and model training
· Familiarity with cloud platforms: Azure or AWS
· Experience with data warehouses and big data platforms (Databricks, Snowflake, or Redshift)
· Knowledge of MLOps tools such as MLflow, Kubeflow, or Airflow
· Experience with streaming data technologies such as Kafka or Spark
· Solid foundation in probability, statistics, linear algebra, and experimental design
Nice to Have
· Experience with deep learning, NLP, computer vision, or Bayesian methods
· Familiarity with real-time or streaming data pipelines
· Open-source contributions or published research

Global MNC serving 40+ Fortune 500 Companies
Want to work on exciting GenAI projects for Fortune 500 companies across multiple sectors? Then read on..
About Company:
CSG is a multi-national company having a presence in 20 countries with 1600+ Engineers. Company works with more than 40 Fortune 500 customers such as Sony, Samsung, ABB, Thyssenkrup, Toyota, Mitsubishi and many more.
Job Description:
We are looking for a talented Generative AI Developer to join our dynamic AI/ML team. This position offers an exciting opportunity to leverage cutting-edge Generative AI (GenAI) technologies to drive innovation to solve real world problems. You will be responsible for developing and optimizing GenAI-based applications, implementing advanced techniques like Retrieval-Augmented Generation (RAG), RIG (Retrieval Interleaved Generation), Agentic Frameworks and vector databases. This is a collaborative role where you will work directly with customers cross-functional teams to design, implement, and optimize AI-driven solutions. Exposure to cloud-native AI platforms such as Amazon Bedrock and Microsoft Azure OpenAI is highly desirable.
Key Responsibilities
Generative AI Application Development:
Design, develop, and deploy GenAI-driven applications to address complex industrial challenges.
Implement Retrieval-Augmented Generation (RAG) and Agentic frameworks
Data Management & Optimization:
Design and optimize document chunking strategies tailored to specific datasets and use cases.
Build, manage, and optimize data embeddings for high-performance similarity searches across vector databases.
Collaboration & Integration:
Work closely with data engineers and scientists to integrate AI solutions into existing pipelines.
Collaborate with cross-functional teams to ensure seamless AI implementation.
Cloud & AI Platform Utilization:
Explore and implement best practices for utilizing cloud-native AI platforms, such as Amazon Bedrock and Azure OpenAI, to enhance solution delivery.
Continuous Learning & Innovation:
Stay updated with the latest trends and emerging technologies in the GenAI and AI/ML fields, ensuring our solutions remain cutting-edge.
Requirements:
The ideal candidate will have strong experience in Generative AI technologies, particularly in the areas of RAG, document chunking, and vector database management. They will be able to quickly adapt to evolving AI frameworks and leverage cloud-native platforms to create efficient, scalable solutions. You will be working in a fast-paced and collaborative environment, where innovation and the ability to learn and grow are key to success.
- 3 to 5 years of overall experience in software development, with 3 years focused on AI/ML.
- Minimum 2 years of experience specifically working with Generative AI (GenAI) technologies.
- Python, PySpark and SQL knowledge is necessary for tasks
- Proven ability to work in a collaborative, fast-paced, and innovative environment.
Technical Skills:
- Generative AI Frameworks & Technologies:
- Expertise in Generative AI frameworks, including prompt engineering, fine-tuning, and few-shot learning.
- Familiarity with frameworks such as T5 (Text-to-Text Transfer Transformation), LangChain, Lang Graph, Open-source tech stalk Ollama, Mistral, DeepSeek.
- Strong knowledge of Retrieval-Augmented Generation (RAG) for combining LLMs with external data retrieval systems.
Data Management:
- Experience in designing chunking strategies for different datasets.
- Expertise in data embedding techniques and experience with vector databases like Pinecone, ChromaDB etc
- Programming & AI/ML Libraries:
- Strong programming skills in Python.
- Experience with AI/ML libraries such as TensorFlow, PyTorch, and Hugging Face Transformers.
Cloud Platforms & Integration:
- Familiarity with cloud services for AI/ML workloads (AWS, Azure).
- Experience with API integration for AI services and building scalable applications.
- Certifications (Optional but Desirable):
- Certification in AI/ML (e.g., TensorFlow, AWS Certified Machine Learning Specialty).
- Certification or coursework in Generative AI or related technologies.
Technical Lead – Full Stack
Work Location (WFO):
Nagar, Bengaluru, Karnataka
Interview Process:
L1 Interview – Face-to-Face at Office
Experience Required:
4-6 Years (Minimum1+ years in Technical Leadership role)
Budget:
Up to 13 LPA
Role Overview:
The candidate will lead the technical vision and architecture of a compliance platform by designing scalable, secure, and high-performance systems. The role involves driving full-stack development across .NET and open-source technologies, enabling unified AI Agent capabilities, Single Authentication (SSO), and a One-UI experience.
Key Responsibilities:
- Define and own end-to-end architecture including micro-frontends, .NET services, FastAPI APIs, and microservices
- Lead full-stack development using .NET and modern open-source technologies
- Modernize legacy systems (ASP.NET, .NET Core, MS SQL Server) to cloud-native architecture
- Design and implement AI Agents, SSO, and unified UI experiences
- Manage sprint planning, backlogs, and collaborate with Product Owners
- Implement CI/CD pipelines using Jenkins, GitHub Actions
- Drive containerization and orchestration using Docker & Kubernetes
- Ensure secure deployments and cloud infrastructure management
- Establish engineering best practices, code reviews, and architecture governance
- Mentor teams on Clean Architecture, SOLID principles, and DevOps practices
Required Skills:
- ReactJS, FastAPI, Python, REST/GraphQL
- ASP.NET, MVC, .NET Core, Entity Framework, MS SQL Server
- Strong experience in Microservices Architecture
- DevOps: CI/CD, Jenkins, GitOps, Docker, Kubernetes
- Cloud Platforms: AWS / Azure / GCP
- AI/ML & LLM tools: OpenAI, Llama, LangChain, etc.
- Security: RBAC, API security, secrets management
Qualifications:
- BE / BTech in Computer Science
Director of Engineering — Flights Platform
AI-First Travel Commerce · High-Scale Distributed Systems · Marketplace Infrastructure
🌏 The Problem Space
A flight search looks trivially simple. It is anything but.
Every query you fire triggers a choreography of distributed systems operating in real-time — integrating with a dozen airline GDS/NDC providers, computing dynamic fares across inventory buckets and fare rules, ranking thousands of itineraries by relevance and business intent, and returning a ranked, priced, bookable result set — all in under 100ms.
→ Millions of search queries per minute
→ <100ms end-to-end SLA with external API dependencies
→ High-value transactions — a bug here means a missed booking, not a failed render
→ Pricing errors erode trust faster than any other failure mode
We are rebuilding the Flights platform as a real-time commerce engine for Bharat — AI-native from day zero, built to power both B2C consumer journeys and high-stakes B2B enterprise corridors.
This is a once-in-a-decade opportunity to build national-scale flight infrastructure from first principles.
🧠 What You Will Own
You will own the full Flights platform — systems, architecture, and the teams that build them.
Core System Domains:
•Search Systems — high-throughput, low-latency query pipelines returning ranked, bookable options
•Pricing & Fare Engine — dynamic pricing logic, fare rules, promotional overlays, and real-time validation
•Booking & Ticketing — transaction-critical flows requiring strict consistency, idempotency, and zero data loss
•Airline Integrations — managing unreliable external GDS/NDC APIs with retries, circuit-breakers, and reconciliation
•Post-Booking Flows — cancellations, modifications, refunds — correctness at the margin is non-negotiable
Platform Scope:
•High-scale APIs serving consumer apps, B2B enterprise clients, and third-party partners
•Event-driven state machines managing booking workflows across async boundaries
•Observability and reliability infrastructure across all mission-critical flows
Team Scope:
•Lead 15–30+ engineers across multiple product and platform teams
•Manage Engineering Managers and Principal/Staff engineers
•Own hiring, org design, and technical direction
⚙️ Core Engineering Challenges
This role is fundamentally about making the right trade-offs under uncertainty — at scale.
Latency vs. Accuracy — when do you serve a cached fare vs. call a live airline API?
Availability vs. Consistency — graceful degradation at booking time vs. strict price validation
Cost vs. Performance — when is an external API call worth it vs. a cache hit?
Scalability vs. Simplicity — the best system is the one your team can reason about under incident
🤖 AI-First Engineering
AI is not an afterthought. It is load-bearing architecture.
•LLM-powered pricing intelligence — dynamic fare prediction and demand signals
•RAG pipelines for fare rules, refund policy, and support automation
•Agentic booking resolution workflows — autonomous exception handling at scale
•MCP-based orchestration layers for multi-provider integration
⚖️ Key Responsibilities
Architecture & Distributed Systems
•Design and evolve sub-100ms distributed query systems serving millions of concurrent searches
•Build fault-tolerant booking pipelines with strong consistency and durability guarantees
•Drive Kafka-based event architectures for booking state management
Reliability & Observability
•Own 99.99%+ availability for booking and pricing systems
•Build deep observability — metrics, distributed tracing, structured logging, SLOs/SLAs
•Lead post-incident reviews and drive systemic reliability improvements
Business Partnership
•Partner with Product, Revenue, and Partnerships to translate commercial goals into architecture
•Influence platform roadmap, supplier strategy, and long-term technical investment
🛠️ Technology Stack
Backend: Java · Kotlin · Go · Python
Architecture: Microservices · Event-Driven (Kafka) · gRPC
Data: Redis · Aerospike · DynamoDB · Elasticsearch
Cloud: AWS (EKS, EC2, S3)
Observability: Prometheus · Grafana · OpenTelemetry
👤 Who You Are
•12–16 years in backend/distributed systems; 5+ Years in an Engineering Leadership role, led teams of 15–50 engineers
•Built and scaled large B2C + B2B platforms — Travel Tech, FinTech, or high-scale Consumer
•Deep expertise in real-time systems, marketplace dynamics, and external API integration
•Tier-I institute background strongly preferred (IIT / IIIT / NIT / IISC / BITS / VIT / SRM — CSE/ISE)
🚀 Why This Matters
Build national-scale infrastructure for 1.4 billion people
Sit at the intersection of AI · distributed systems · marketplace economics
Define the future of travel commerce in India — from architecture to product
Job Summary
We are seeking a skilled Data Engineer with 4+ years of experience in building scalable data pipelines and working with modern data platforms. The ideal candidate should have strong expertise in Python, SQL, and cloud-based data solutions, with hands-on experience in ETL/ELT processes and data warehousing.
Key Responsibilities
- Design, build, and maintain scalable data pipelines using Python
- Develop and optimize ETL/ELT workflows for data ingestion and transformation
- Work with structured and unstructured data from multiple sources
- Build and manage data warehouses/data lakes
- Perform data validation, cleansing, and quality checks
- Optimize SQL queries and improve data processing performance
- Collaborate with data analysts, data scientists, and business teams
- Implement data governance, security, and best practices
- Monitor pipelines and troubleshoot production issues
Required Skills
- Strong programming experience in Python (Pandas, NumPy, PySpark preferred)
- Excellent SQL skills (joins, window functions, performance tuning)
- Experience with ETL tools like Informatica, Talend, or DBT
- Hands-on experience with cloud platforms (Azure / AWS / GCP)
- Experience in data warehousing solutions like Snowflake, Redshift, BigQuery
- Knowledge of workflow orchestration tools like Apache Airflow
- Familiarity with version control tools like Git
Preferred Skills
- Experience with Big Data technologies (Spark, Hadoop)
- Knowledge of streaming tools like Kafka
- Exposure to CI/CD pipelines and DevOps practices
- Experience in data modeling (Star/Snowflake schema)
- Understanding of APIs and data integration
Budget: 35 LPA to 45 LPA
Work schedule is Mon to Fri, 3:30am to 12:30pm IST
Key Responsibilities:
- Design, develop, and deploy computer vision and machine learning models for analyzing visual and document-based data.
- Build pipelines that convert unstructured visual inputs into structured and usable information.
- Develop and evaluate models for tasks such as object detection, segmentation, document parsing, and image understanding.
- Apply OCR and related techniques to extract meaningful information from complex documents and imagery.
- Work with large datasets and build efficient training and evaluation pipelines.
- Handle real-world visual datasets that may contain noise, inconsistencies, incomplete information, or varying formats.
- Experiment with different approaches to solve challenging computer vision problems and evaluate tradeoffs between accuracy, performance, and complexity.
- Collaborate with product and engineering teams to integrate machine learning models into scalable production systems.
- Continuously improve model performance, accuracy, and robustness in real-world environments.
- Stay up to date with the latest developments in AI and computer vision and apply relevant techniques where appropriate.
- Actively leverage modern AI tools and frameworks to accelerate experimentation, development, and engineering workflows.
Requirements:
- 5+ years of hands-on experience building and deploying machine learning models, particularly in Computer Vision or document understanding.
- Strong proficiency in Python for machine learning and data processing.
- Hands-on experience with modern ML frameworks such as PyTorch and libraries in the Hugging Face ecosystem.
- Experience with computer vision tooling such as OpenCV.
- Experience with common ML and data science libraries such as scikit-learn, NumPy, and Pandas.
- Experience developing models for tasks such as segmentation, object detection, or document analysis.
- Experience working with large image datasets and building training pipelines.
- Solid understanding of model evaluation, data preprocessing, and performance optimization.
- Strong problem-solving skills and ability to work in a fast-paced product environment.
- Ability to collaborate effectively with cross-functional engineering and product teams.
- The candidate should be based in India
- Willing to work remotely full-time
- Work schedule is Mon to Fri, 3:30am to 12:30pm IST
Preferred Qualifications:
- Experience with TensorFlow or other deep learning frameworks.
- Experience working with OCR pipelines or document analysis systems.
- Experience deploying machine learning models in production environments.
- Experience with containerized deployments such as Docker or Kubernetes.
- Experience working with complex technical documents, diagrams, or structured visual data.
- Familiarity with spatial or geometry-related data problems.
- Experience with libraries such as Detectron2, MMDetection, or similar.
- Familiarity with frameworks used to integrate modern AI models into applications (e.g., LangChain or similar tooling).
- Contributions to open-source ML or computer vision projects are a plus.
Additional Information:
- The problems we work on involve complex visual and document-based data, so we value engineers who enjoy tackling challenging technical problems and experimenting with different approaches to reach practical solutions.
- Candidates are required to include links to relevant projects, GitHub repositories, research work, or examples of machine learning systems they have built.
Benefits:
- Flexible remote work opportunities with career development opportunities
- Engagement with a supportive and collaborative global team
- Competitive market based salary
Job Title: Python Development Intern
Company: Honeybee Digital
Location: Remote
Internship Duration: 3 Months
Job Type: Internship
Working Hours
- Full-time: 9:00 AM – 6:00 PM
- Part-time: 9:00 AM – 1:00 PM / 1:00 PM – 6:00 PM
Note: Internship certificate will be provided only after successful completion of the internship duration.
About the Role
We are looking for a passionate and motivated Python Development Intern who is eager to gain hands-on experience in real-world projects. This role is ideal for candidates interested in backend development, automation, data handling, and API integration.
Key Responsibilities
- Assist in developing applications using Python
- Work on data handling, automation scripts, and backend logic
- Support API development and integration
- Assist in web scraping and data processing tasks
- Debug, test, and optimize existing code
- Collaborate with development and data teams
- Document code and maintain project updates
Requirements
- Basic knowledge of Python programming
- Understanding of data structures and logic building
- Familiarity with libraries such as Pandas, NumPy (preferred)
- Basic understanding of APIs and web frameworks (Flask/Django is a plus)
- Problem-solving mindset and willingness to learn
- Ability to work independently and meet deadlines
Skills You Will Gain
- Hands-on experience in Python development and real projects
- Exposure to automation, Fast APIs, and backend systems
- Practical knowledge of data processing and scripting
- Debugging and optimization techniques
- Experience working in a professional development environment
Who Can Apply
- Students pursuing Computer Science, IT, Data Science, or related fields
- Freshers interested in Python development and backend roles
- Candidates looking to build a strong technical portfolio
Dear Candidates,
We have an urgent requirement for a Technical Lead – Full Stack role based in Bangalore. Please find the details below:
Work Location (WFO):
Nagar, Bengaluru, Karnataka
Interview Process:
L1 Interview – Face-to-Face at Office
Experience Required:
4-6 Years (Minimum1+ years in Technical Leadership role)
Role Overview:
The candidate will lead the technical vision and architecture of a compliance platform by designing scalable, secure, and high-performance systems. The role involves driving full-stack development across .NET and open-source technologies, enabling unified AI Agent capabilities, Single Authentication (SSO), and a One-UI experience.
Key Responsibilities:
- Define and own end-to-end architecture including micro-frontends, .NET services, FastAPI APIs, and microservices
- Lead full-stack development using .NET and modern open-source technologies
- Modernize legacy systems (ASP.NET, .NET Core, MS SQL Server) to cloud-native architecture
- Design and implement AI Agents, SSO, and unified UI experiences
- Manage sprint planning, backlogs, and collaborate with Product Owners
- Implement CI/CD pipelines using Jenkins, GitHub Actions
- Drive containerization and orchestration using Docker & Kubernetes
- Ensure secure deployments and cloud infrastructure management
- Establish engineering best practices, code reviews, and architecture governance
- Mentor teams on Clean Architecture, SOLID principles, and DevOps practices
Required Skills:
- ReactJS, FastAPI, Python, REST/GraphQL
- ASP.NET, MVC, .NET Core, Entity Framework, MS SQL Server
- Strong experience in Microservices Architecture
- DevOps: CI/CD, Jenkins, GitOps, Docker, Kubernetes
- Cloud Platforms: AWS / Azure / GCP
- AI/ML & LLM tools: OpenAI, Llama, LangChain, etc.
- Security: RBAC, API security, secrets management
Qualifications:
- BE / BTech in Computer Science
Required Skills:
- Strong proficiency in Python
- Experience with Django, Flask, or FastAPI
- Solid understanding of REST APIs and backend architecture
- Hands-on experience integrating LLM APIs (e.g., OpenAI, Anthropic) into applications
- Familiarity with AI/LLM frameworks such as LangChain or LlamaIndex
- Understanding of Retrieval-Augmented Generation (RAG), embeddings, and semantic search concepts
- Experience or exposure to vector databases like Pinecone, Weaviate, or FAISS
- Experience with databases (MySQL, PostgreSQL, MongoDB)
- Familiarity with Git and version control workflows
- Understanding of asynchronous programming and performance optimization
- Basic knowledge of cloud platforms (AWS, GCP, or Azure)
- Strong problem-solving and analytical skills
Job Summary
We are looking for a skilled Python Developer with 3 years of experience to join our team in Bangalore. The ideal candidate should have strong expertise in Python, Django, and PostgreSQL, along with a good understanding of backend development. Knowledge of Java will be an added advantage.
Key Responsibilities
Develop, test, and maintain scalable backend applications using Python and Django
Design and manage databases using PostgreSQL
Write clean, efficient, and reusable code
Collaborate with cross-functional teams to define, design, and ship new features
Debug and resolve technical issues and optimize application performance
Participate in code reviews and ensure best coding practices
Required Skills
Strong experience in Python
Hands-on experience with Django framework
Good knowledge of PostgreSQL database
Understanding of REST APIs and web services
Familiarity with version control systems (e.g., Git)
Good to Have
Basic knowledge of Java
Experience with cloud platforms or deployment processes
Understanding of front-end technologies is a plus
Qualifications
Bachelor’s degree in Computer Science, Engineering, or related field
Additional Requirements
Immediate joiners or candidates with short notice period preferred
Strong problem-solving and analytical skills
Good communication and teamwork abilities
Job Summary
We are looking for a skilled Python Developer with 3 years of experience to join our
team in Bangalore. The ideal candidate should have strong expertise in Python,
Django, and PostgreSQL, along with a good understanding of backend development.
Knowledge of Java will be an added advantage.
Key Responsibilities
Develop, test, and maintain scalable backend applications using Python and
Django
Design and manage databases using PostgreSQL
Write clean, efficient, and reusable code
Collaborate with cross-functional teams to define, design, and ship new
features
Debug and resolve technical issues and optimize application performance
Participate in code reviews and ensure best coding practices
Required Skills
Strong experience in Python
Hands-on experience with Django framework
Good knowledge of PostgreSQL database
Understanding of REST APIs and web services
Familiarity with version control systems (e.g., Git)
Good to Have
Basic knowledge of Java
Experience with cloud platforms or deployment processes
Understanding of front-end technologies is a plus
Qualifications
Bachelor’s degree in Computer Science, Engineering, or related field
Additional Requirements
Immediate joiners or candidates with short notice period preferred
Strong problem-solving and analytical skills
Good communication and teamwork abilities
About Certa
Certa is a leading innovator in the no-code SaaS workflow space, powering the full lifecycle for suppliers, partners, and third parties. From onboarding and risk assessment to contract management and ongoing monitoring, Certa enables businesses with automation, collaborative workflows, and continuously updated insights. Join us in our mission to revolutionize third-party management!
What You'll Do
- Partner closely with Customer Success Managers to understand client workflows, identify quality gaps, and ensure smooth solution delivery.
- Design, implement, and execute both manual and automated tests for client-facing workflows across our web platform.
- Write robust and maintainable test scripts using Python (Selenium) to validate workflows, integrations, and configurations.
- Own test planning for client-specific features, including writing clear test cases and sanity scenarios — even in the absence of detailed specs.
- Collaborate with Product, Engineering, and Customer Success teams to reproduce client-reported issues, root-cause them, and verify fixes.
- Lead or contribute to exploratory testing, regression cycles, and release validations before client rollouts.
- Proactively identify gaps, edge cases, and risks in client implementations and communicate them effectively to stakeholders.
- Act as a client-facing QA representative during solution validation, ensuring confidence in delivery and post-deployment success.
What We're Looking For
- 3–5 years of experience in Software QA (manual + automation), ideally with exposure to client-facing or Customer Success workflows.
- Strong understanding of core QA principles (priority vs. severity, regression vs. sanity, risk-based testing).
- Hands-on experience writing automation test scripts with Python (Selenium).
- Experience with modern automation frameworks (Playwright + TypeScript or equivalent) is a strong plus.
- Familiarity with SaaS workflows, integrations, or APIs (JSON, REST, etc.).
- Excellent communication skills — able to interface directly with clients, translate feedback into testable requirements, and clearly articulate risks/solutions.
- Proactive, curious, and comfortable navigating ambiguity when working on client-specific use cases.
Good to Have
- Previous experience in a Customer Success, Professional Services, or client-facing QA role.
- Experience with CI/CD pipelines, BDD/TDD frameworks, and test data management.
- Knowledge of security testing, performance testing, or accessibility testing.
- Familiarity with no-code platforms or workflow automation tools.
Perks
- Best-in-class compensation
- Fully remote work
- Flexible schedules
- Engineering-first, high-ownership culture
- Massive learning and growth opportunities
- Paid vacation, comprehensive health coverage, maternity leave
- Yearly offsite, quarterly hacker house
- Workstation setup allowance
- Latest tech tools and hardware
- A collaborative and high-trust team environment
Full-Stack Developer (Backend-Focused)
We are seeking a seasoned Full-Stack Developer with strong expertise in backend engineering using Python and Golang. In this role, you will take ownership of backend systems while contributing to the development of modern, responsive frontend interfaces. The focus will be on building secure, scalable, and high-performance applications, with emphasis on API development, database engineering, and cloud deployment.
Key Responsibilities
- Develop and enhance backend services using Python frameworks such as Django or FastAPI
- Design, build, and maintain RESTful APIs and microservices
- Work extensively with relational and NoSQL databases, including PostgreSQL, MySQL, and MongoDB
- Collaborate with frontend developers to integrate user-facing elements with backend logic
- Implement efficient, secure, and scalable application architectures
- Troubleshoot and resolve software defects across different environments
- Optimize performance and reliability of backend services
- Write clean, maintainable, and well-tested code following best practices
- Contribute to DevOps activities, including CI/CD pipelines and containerization
Required Skills & Qualifications
- 6+ years of experience in full-stack or backend-focused development
- Strong proficiency in Python with hands-on experience in frameworks like Django or FastAPI
- Solid understanding of SQL and NoSQL databases, including data modeling and query optimization
- Familiarity with modern frontend technologies such as React, Vue, or Angular
- Experience with Docker, Kubernetes, and at least one cloud platform (AWS, Azure, or GCP) is preferred
- Strong understanding of system design, distributed systems, and microservices architecture
- Experience with Git and CI/CD automation pipelines
- Excellent problem-solving skills and ability to work collaboratively
Experience: 6+ Years
Location: Chennai & Pune
Work Model: Hybrid
Notice Period: Immediate Joiners Preferred
Role Overview
We are looking for a highly skilled Senior Python Engineer to design, develop, and scale robust backend systems and data-driven applications. The ideal candidate should have strong problem-solving skills, experience with modern Python frameworks, and exposure to cloud and emerging technologies like Generative AI and LLMs.
Key Responsibilities
- Design, develop, and maintain scalable applications using Python
- Build and optimize RESTful APIs using Flask or FastAPI
- Work on data manipulation, processing, and transformation using Python libraries
- Collaborate with cross-functional teams to define and deliver high-quality solutions
- Develop efficient, reusable, and reliable code with strong attention to performance
- Implement containerization and orchestration using Docker and Kubernetes
- Ensure application security, data protection, and compliance best practices
- Manage code versioning using Git and follow CI/CD best practices
- Contribute to cloud-based deployments and infrastructure
- Explore and implement solutions using Generative AI and Large Language Models (LLMs)
Required Skills & Qualifications
- 6+ years of hands-on experience in Python development
- Strong understanding of Python fundamentals and problem-solving skills
- Experience with data manipulation libraries (e.g., Pandas, NumPy)
- Expertise in building REST APIs using Flask or FastAPI
- Solid understanding of ORM frameworks (e.g., SQLAlchemy, Django ORM)
- Experience with cloud platforms (AWS, Azure, or GCP)
- Hands-on experience with Docker and Kubernetes
- Strong knowledge of application security principles
- Proficiency in Git and version control practices
- Exposure to Generative AI concepts and working with LLMs
Summary:
We are seeking a highly skilled Python Backend Developer with proven expertise in FastAPI to join our team as a full-time contractor for 12 months. The ideal candidate will have 5+ years of experience in backend development, a strong understanding of API design, and the ability to deliver scalable, secure solutions. Knowledge of front-end technologies is an added advantage. Immediate joiners are preferred. This role requires full-time commitment—please apply only if you are not engaged in other projects.
Job Type:
Full-Time Contractor (12 months)
Location:
Remote
Experience:
3+ years in backend development
Key Responsibilities:
- Design, develop, and maintain robust backend services using Python and FastAPI.
- Implement and manage Prisma ORM for database operations.
- Build scalable APIs and integrate with SQL databases and third-party services.
- Deploy and manage backend services using Azure Function Apps and Microsoft Azure Cloud.
- Collaborate with front-end developers and other team members to deliver high-quality web applications.
- Ensure application performance, security, and reliability.
- Participate in code reviews, testing, and deployment processes.
Required Skills:
- Expertise in Python backend development with strong experience in FastAPI.
- Solid understanding of RESTful API design and implementation.
- Proficiency in SQL databases and ORM tools (preferably Prisma)
- Hands-on experience with Microsoft Azure Cloud and Azure Function Apps.
- Familiarity with CI/CD pipelines and containerization (Docker).
- Knowledge of cloud architecture best practices.
Added Advantage:
- Front-end development knowledge (React, Angular, or similar frameworks).
- Exposure to AWS/GCP cloud platforms.
- Experience with NoSQL databases.
Eligibility:
- Minimum 3 years of professional experience in backend development.
- Available for full-time engagement.
- Please excuse if you are currently engaged in other projects—we require dedicated availability.
Job Title: Snowflake Platform Administrator
Duration: 6-12 months contract (could be extended upon performance)
Mode: Remote
About the Role
We are looking for a Snowflake Administrator to join our Snowflake Center of Excellence (COE) to manage, secure, and optimize the enterprise Snowflake data platform. The role will focus on platform administration, security governance, and automation while enabling data engineering, analytics, and business teams to effectively leverage Snowflake capabilities.
Key Responsibilities
•Administer and maintain the Snowflake platform, including warehouses, databases, schemas, users, roles, and resource monitors.
•Implement and manage Snowflake security and access governance including RBAC, network policies, and network rules.
•Manage identity and access integration with Azure Active Directory (Azure AD), including role mapping with Azure AD groups.
•Monitor platform performance, usage, and cost to ensure efficient and reliable operations.
•Manage key Snowflake capabilities including data sharing (consumer and provider), cloning, data recovery, integrations (storage/API/notification), and performance optimization.
•Develop automation scripts using SQL and Python for administrative and operational tasks.
•Create and maintain CI/CD workflows using GitHub Actions for Snowflake deployments.
•Collaborate with data engineers, analysts, and architects to ensure secure and scalable data platform usage.
•Stay up to date with Snowflake product releases, new features, and platform best practices, and proactively evaluate their applicability to the organization.
•Contribute to standards, best practices, and governance frameworks within the Snowflake COE.
General Business
•Explore opportunities to leverage AI to improve platform automation and productivity.
Required Experience & Skills:
•5-8 years of relevant experience in Snowflake Administration and platform management.
•Solid understanding of Snowflake architecture, security, features, and performance optimization.
•Experience implementing RBAC, Network Policies, and Network Rules in Snowflake.
•Experience with Snowflake integration with Azure AD for role and access management via AD groups.
•Proficiency in SQL and Python scripting.
•Experience with GitHub and GitHub Actions/Workflow creation.
•Strong analytical and problem-solving skills.
•Functional Domain: FMCH (Fast Moving Consumer Health).
Preferred Additional Skills:
•AI enthusiast and Automation expertise
•Understanding of modern data architectures including data lakes and real-time processing
•Familiarity with BI tools such as Power BI, Tableau, Looker
Education & Languages
•Bachelor’s degree in computer science, Information Technology, or similar quantitative field of study.
•Fluent in English.
•Function effectively within teams of varied cultural backgrounds and expertise sources.
What You’ll Be Doing:
- Design and develop advanced AI/ML models to solve complex business problems
- Work closely with cross-functional teams including data engineers and domain experts
- Perform exploratory data analysis, data cleaning, and model development
- Translate business challenges into data-driven solutions and actionable insights
- Drive innovation in advanced analytics and AI/ML capabilities
- Communicate model insights effectively to both technical and non-technical stakeholders
What We’re Looking For:
- 5+ years of experience in AI/ML model development
- Strong foundation in mathematics, probability, and statistics
- Proficiency in Python and exposure to Azure Machine Learning / Databricks
- Experience with supervised & unsupervised learning techniques
- Domain exposure to Energy / Oil & Gas value chain (preferred)
- Strong problem-solving, stakeholder management, and communication skills
Job Title : Full Stack Developer (Crypto Exchange)
Experience : 4+ Years
Location : Gurugram & Vadodara (On-site)
Role Overview :
We are looking for a Full Stack Developer with strong expertise in both backend and frontend development, along with exposure to crypto exchange systems or fintech platforms.
In this role, you will work on building high-performance, real-time trading applications, contributing to core systems like order execution, pricing engines, and wallet integrations.
Key Responsibilities :
- Design, develop, and maintain scalable backend services and APIs.
- Build and optimize responsive frontend applications for trading interfaces.
- Work on real-time systems such as order books, pricing engines, and trade execution.
- Integrate with blockchain networks, wallets, and third-party APIs.
- Ensure platform security, performance, and reliability.
- Collaborate with product, design, and DevOps teams for end-to-end delivery.
- Participate in system design, architecture discussions, and code reviews.
Required Skills & Qualifications :
- 4+ years of experience in Full Stack Development.
- Strong expertise in :
- Backend : Node.js and/or Python
- Frontend : React.js and/or Next.js
- Experience with REST APIs and microservices architecture.
- Strong understanding of databases (MongoDB, PostgreSQL, MySQL, etc.).
- Hands-on experience with Docker and cloud platforms (AWS preferred).
- Solid understanding of system design, scalability, and performance optimization.
Preferred (Good to Have) :
- Experience working with a crypto exchange or trading platform.
- Understanding of blockchain fundamentals (Ethereum, Bitcoin, etc.).
- Experience with wallet integrations and on-chain transactions.
- Familiarity with WebSockets and real-time data streaming.
- Knowledge of security best practices in fintech/crypto systems.
Why Join Us ?
- Opportunity to work on a high-impact, real-world crypto exchange.
- Build and scale systems from early-stage to production.
- Work in a fast-paced, ownership-driven environment.
- Exposure to cutting-edge blockchain and trading technologies.
Expected Date of Joining Immediate/ 30 days
What makes Techjays an inspiring place to work
At Techjays, we are helping companies reimagine how they build, operate, and scale with AI at the core.
We operate as part of the 1% of companies globally that can truly leverage AI the right way and not just as experimentation, but as secure, scalable, production-grade systems that drive measurable business outcomes.
Our strength lies in combining deep backend engineering with AI system design, building AInative platforms, intelligent workflows, and cloud architectures that are reliable, observable, and enterprise-ready. Our team includes engineers and leaders who have built and scaled products at global technology organizations such as Google, Akamai, NetApp, ADP, Cognizant Consulting, and Capgemini. Today, we function as a high-agency, execution-focused team building advanced AI systems for global clients.
About the role
We are looking for a strong backend engineer who can design and build secure, scalable Python systems that power AI-native applications.
You will work on AI-enabled platforms, production systems, and scalable backend services that support LLM integrations, RAG pipelines, and intelligent workflows.
Years of Experience: 3 - 5 years
Location: Coimbatore
Key Skills
Backend Development (Familiar):
- Python
- Django / Flask
- RESTful APIs
- Websockets
Cloud Technologies (Familiar):
- AWS (EC2, S3, Lambda)
- GCP (Compute Engine, Cloud Storage, Cloud Functions)
- CI/CD pipelines with Jenkins / GitLab CI or Github Actions
Databases (Familiar):
- PostgreSQL
- MySQL
- MongoDB
AI/ML (Familiar):
- Basic understanding of Machine Learning concepts
- Assist in building and integrating Agentic AI workflows
- Familiar with RAG
- Vector Databases (Pinecone or ChromaDB or others)
Tools:
- Git
- Docker
- Linux
Roles and Responsibilities
- Develop and maintain backend services using Python and Django/Flask under guidance.
- Assist in building scalable and secure APIs and backend systems for AI-driven applications.
- Write clean, efficient, and maintainable code following best practices.
- Collaborate with cross-functional teams including frontend developers, data scientists, and product teams.
- Participate in code reviews, debugging, and performance optimization.
- Support integration of AI/ML components such as LLMs and RAG pipelines.
- Continuously learn and improve technical skills in backend and AI technologies.
What We’re Looking for Beyond Skills
- Builder mindset — you think in systems, not just tickets
- Ownership — you take features from idea to production
- Structured thinking in ambiguous environments
- Clear communication and collaborative approach
- Ability to work in a fast-paced, evolving startup environment
What We Offer
- Competitive compensation
- Paid holidays & flexible time off
- Medical insurance (Self & Family up to ₹4 Lakhs per person)
- Opportunity to work on production-grade AI systems
- Exposure to global clients and high-impact projects
- A culture that values clarity, integrity, and continuous growth

Global Digital Transformation Solutions Provider
JOB DETAILS:
- Job Title: Lead I - Data Science - Python, Machine Learning, Spark
- Industry: Global Digital Transformation Solutions Provider
- Experience: 5-10 years
- Job Location: Pune
- CTC Range: Best in Industry
JD for Data Scientist
Hands-on experience with data analysis tools:
Proficient in using tools such as Python and R for data manipulation, querying, and analysis.
Skilled in utilizing libraries like Pandas, NumPy, and Scikit-Learn to perform in-depth data analysis and modeling.
Skilled in machine learning and predictive analytics:
Expertise in building, training, and deploying machine learning models using frameworks such as TensorFlow and PyTorch.
Capable of performing tasks like regression, classification, clustering, and recommendation, leading to data-driven predictions and insights.
Expertise in big data technologies:
Proficient in handling large datasets using big data tools such as Spark.
Skilled in employing distributed computing and parallel processing techniques to ensure efficient data processing, storage, and analysis, enabling enterprise-level solutions and informed decision-making
Skills: Python, SQL, Machine Learning, and Deep Learning, with mandatory expertise in Generative AI.
Must-Haves
5–9 years of relevant experience in Python, SQL, Machine Learning, and Deep Learning, with mandatory expertise in Generative AI
******
NP - Immediate joiners only
Location-Pune
The AI Data Engineer will be responsible for designing, building, and operating scalable data pipelines and curated data assets that power machine learning, generative AI, and intelligent automation solutions in an SLA-driven managed services environment. This role focuses on data ingestion, transformation, governance, and operational reliability across cloud and hybrid environments enabling use cases such as knowledge retrieval (RAG), conversational AI, predictive analytics, and AI-assisted service management. The ideal candidate combines strong data engineering fundamentals with an understanding of AI workload requirements, including quality, lineage, privacy, and performance.
Key Responsibilities
•Design, build, and operate production-grade data pipelines that support AI/ML and generative AI workloads in managed services environments
•Develop curated, analytics-ready datasets and data products to enable model training, grounding, feature generation, and AI search/retrieval
•Implement data ingestion patterns for structured and unstructured sources (APIs, databases, files, event streams, documents)
•Build and maintain transformation workflows with strong testing and validation
•Enable Retrieval-Augmented Generation (RAG) by preparing document corpora, chunking strategies, metadata enrichment, and vector indexing patterns
•Integrate data pipelines with application services
•Support ITSM and enterprise workflow data needs, including ServiceNow data integration, CMDB/incident data quality improvements, and automation enablement
•Implement observability for data pipelines (monitoring, alerting, SLAs/SLOs) and perform root cause analysis for pipeline failures or data quality incidents
•Apply data governance and security best practices
•Collaborate with ML Engineers, DevOps/SRE, and solution architects to operationalize end-to-end AI solutions
•Contribute to reusable patterns, templates, and standards within the Bell Techlogix AI Center of Excellence
Required Qualifications
•Bachelor’s degree in Computer Science, Engineering, Information Systems, or equivalent practical experience
•5+ years of experience in data engineering, analytics engineering, or platform data operations
•Strong proficiency in SQL and Python; experience with data modeling and dimensional concepts
•Hands-on experience with Azure data services (e.g., Data Factory, Synapse, Databricks, Storage, Key Vault) or equivalent cloud tooling
•Experience building reliable pipelines with scheduling, dependency management, and automated testing/validation
•Experience supporting production data platforms with incident management, troubleshooting, and root cause analysis
•Understanding of data security, privacy, and governance principles in enterprise environments
Preferred Qualifications
•Experience enabling AI/ML workloads: feature engineering, training data preparation, and integration with Azure Machine Learning
•Experience with unstructured data processing for generative AI
•Familiarity with vector databases or vector search and RAG patterns
•Experience with event streaming and messaging
•Familiarity with ServiceNow data model and integration patterns (Table API, export, CMDB/ITSM reporting)
•Relevant certifications (Microsoft Azure Data Engineer, Azure AI Engineer, Databricks)
About Us Simbian® is building Agentic AI platform for cybersecurity. Founded by repeat successful security founders, we have gathered an excellent cohort of employees, partners, and customers. Our mission is to solve security using AI and our core values are excellence, replication, and intellectual honesty.
Our promise is to make Simbian the best workplace of your career and we believe a small group of thoughtful passionate people can make all the positive difference in the world. To fuel our fast growth, we are seeking an exceptional candidate who shares our core values of excellence (being the world's best at our craft), replication (share your best ideas with others), and intellectual honesty (tell the truth even if it's bitter).
Our AI Agents automate security operations and provide our customers 10x leverage. Our customers include some of the world's largest companies.
Our initial use cases include: SOC alert triage and investigation Prioritization and classification of vulnerabilities AI based threat hunting
As an Engineering Manager, you will lead a pod of highly skilled engineers responsible for building critical components of Simbian’s platform—from scalable backend services and data pipelines to integrations with security tools and novel AI-driven investigation engines. You’ll be responsible for driving execution, mentoring engineers, and shaping technical direction while working closely with product, AI/ML, and security teams.
This role is ideal for a hands-on leader who thrives in startup environments, is comfortable balancing execution with strategy, and can guide engineers to build reliable, secure, and scalable systems.
Responsibilities
• Lead and mentor a pod of backends, frontend, or platform engineers (depending on pod assignment: e.g., Integrations, Investigation Infra, Threat Hunting, etc.).
• Drive delivery of product and platform features aligned to quarterly OKRs
• Establish engineering best practices for code quality, observability, security, and reliability
• Collaborate with product managers and security SMEs to define technical scope, execution plans, and delivery timelines.
• Provide technical guidance in architecture decisions across areas such as: 1. Scalable microservices 2. Security product integrations (EDR, SIEM, CNAPP, etc.)
• Data pipelines (historical + real-time event ingestion)
• AI/ML systems for reasoning and automation
• Recruit, develop, and retain top engineering talent.
• Ensure pods maintain a high bar for innovation, execution, and collaboration.
Requirements
• 12+ years of professional software engineering experience in security domain, with at least 3+ years leading or managing engineering teams. • Strong background in building scalable backend systems (Python, Go, or Java preferred).
• Experience with cloud-native architectures (Kubernetes, Postgres, vector databases, OpenSearch, etc.).
• Familiarity with data pipelines (ETL/ELT, orchestration frameworks like Dagster/Airflow, streaming systems).
• Exposure to security products and data (SIEM, EDR, CNAPP, vulnerability management) is a strong plus.
• Track record of leading pods/teams to deliver complex technical projects with measurable outcomes.
• Strong communication skills, with the ability to work cross-functionally with product, AI/ML, and security teams.
• Startup mindset: bias for execution, ability to operate with ambiguity, and eagerness to wear multiple hats.
Nice to Have
• Experience with AI/ML pipelines, LLM integration, or security-focused AI applications.
• Knowledge of SOC processes, MITRE ATT&CK, or incident response workflows.
• Contributions to open-source projects in data, security, or AI. • Previous experience scaling teams at an early-stage startup.
Benefits
• Competitive salary commensurate with experience
• Generous early-stage equity with significant upside potential
• Annual performance bonuses tied to company and individual goals
Budget- under 90L annually
This role is responsible for architecting and implementing the Agentic capabilities of the PHI ecosystem. The engineer will lead the development of multi-agent systems, enabling seamless interoperability between AI agents, internal tools, and external services.
The position requires a strong focus on AI safety, secure agent orchestration, and tool-connected AI systems capable of executing complex workflows within the health insurance domain.
1. Agent Orchestration
- Build and manage autonomous AI agents using Agent Development Kit (ADK) and Vertex AI Agent Engine.
- Design and implement multi-agent workflows capable of handling complex tasks.
2. Interoperability
- Implement the Model Context Protocol (MCP) to enable connectivity between:
- AI agents
- Internal PHI tools
- External services and APIs.
3. Multimodal Development
- Build real-time, bidirectional audio applications using the Gemini Live API.
- Integrate image generation models and support multimodal AI capabilities.
4. Safety Engineering
- Implement AI safety layers to protect sensitive healthcare data.
- Use Model Armor and Cloud DLP API to:
- Sanitize prompts
- Prevent exposure of PII/PHI data
- Enforce secure AI interactions.
5. Agent-to-Agent (A2A) Communication
- Configure remote agent connectivity using the A2A SDK.
- Enable cross-agent collaboration and workflow orchestration.
Must-Have Skills
- Advanced proficiency with Agent Development Kit (ADK).
- Strong experience with Vertex AI Agent Engine.
- Hands-on experience with Model Context Protocol (MCP).
- Experience implementing Agent-to-Agent (A2A) workflows using the A2A SDK.
- Expertise in Google Gen AI SDK for Python.
- Experience building multimodal AI applications.
- Proven experience implementing AI safety layers, including:
- Model Armor
- Cloud DLP API
Good-to-Have Skills (Foundation)
Data & Analytics
- BigQuery optimization techniques, including:
- Partitioning
- Clustering
- Denormalization for performance and cost optimization.
Streaming & Real-Time Pipelines
- Experience building real-time data pipelines using:
- Google Pub/Sub
- BigQuery streaming pipelines
We are seeking a Senior Machine Learning Engineer to support the development and deployment of advanced AI capabilities within the PHI ecosystem.
This role focuses on the execution of Generative AI tasks, including model integration and agent deployment. The candidate will be responsible for building RAG-based workflows and ensuring AI interactions remain grounded and accurate using Google Cloud AI tools.
Key Responsibilities
1. GenAI Integration
- Develop and maintain integrations with Gemini 1.5 Pro and Flash models
- Use the Google Gen AI SDK for Python to build and manage model integrations
2. Agent Deployment
- Assist in deploying AI agents to Vertex AI Agent Engine
- Work with the Agent Development Kit (ADK) for agent lifecycle management
3. RAG & Embeddings
- Generate and manage text and multimodal embeddings
- Support semantic search and Retrieval-Augmented Generation (RAG) pipelines
4. Testing & Quality
- Run evaluation scripts to verify model output quality
- Ensure models follow grounding and response accuracy guidelines
Must-Have Skills
- Strong Python programming
- Experience working with REST APIs
- Hands-on experience with Vertex AI Studio
- Experience working with Gemini APIs
- Understanding of Agentic AI concepts
- Familiarity with ADK CLI
- Experience or understanding of RAG architecture
- Knowledge of embedding generation
Good-to-Have Skills (Foundation):
BigQuery
- Basic SQL knowledge
- Experience with data loading
- Ability to debug and troubleshoot queries
Data Streaming
- Familiarity with Google Pub/Sub
- Understanding of synthetic data generation
Visualization
- Basic reporting and dashboards using Looker Studio
As a Backend Engineer, you will be a core member of the Platform Implementation Team, responsible for building the robust, scalable, and secure backend infrastructure for a multi-cloud enterprise Data & AI platform.
You will design and develop high-performance microservices, RESTful APIs, and event-driven architectures that serve as the backbone for enterprise-wide applications.
Working closely with Platform Engineers, Data Modelers, and UI teams, you will ensure seamless data flow between core business systems (CRM, ERP) and the platform, enabling the rollout of critical business services across multiple global Local Business Units (LBUs).
Backend Development
- Design and develop scalable backend services and microservices
- Build and maintain RESTful APIs for enterprise applications
- Define and maintain API contracts using OpenAPI/Swagger
Platform & System Integration
- Enable seamless integration between enterprise systems (CRM, ERP) and the platform
- Support data flow across multiple global business units
Event-Driven Architecture
- Implement asynchronous processing and event-driven systems
- Work with message brokers and streaming platforms
Cross-Functional Collaboration
- Collaborate with platform engineers, data modelers, and frontend teams
- Contribute to architecture discussions and backend design decisions
Must-Have Skills
Experience
- 5–7 years of hands-on experience in backend software engineering
- Experience building enterprise-grade backend systems
Core Programming
Strong proficiency in at least one backend language:
- Python
- Node.js
- Java
Strong understanding of:
- Object-oriented programming (OOP)
- Functional programming principles
API & Microservices
- Extensive experience building RESTful APIs
- Experience designing microservices architectures
- Ability to define API contracts using OpenAPI / Swagger
Cloud Infrastructure
Hands-on experience with cloud platforms:
- Google Cloud Platform (GCP)
- Microsoft Azure
Examples of services:
- Cloud Functions
- Cloud Run
- Azure App Services
Database Management
Experience with both Relational and NoSQL databases
Relational:
- PostgreSQL
- Cloud SQL
NoSQL:
- Schema design
- Complex querying
- Performance optimization
Event-Driven Architecture
Experience with asynchronous processing and message brokers:
- GCP Pub/Sub
- Apache Kafka
- RabbitMQ
Security & Authentication
Strong understanding of:
- OAuth 2.0
- JWT authentication
- Role-Based Access Control (RBAC)
- Data encryption
Software Engineering Best Practices
- Writing clean, maintainable code
- Version control using Git
- Writing unit and integration tests
- Familiarity with CI/CD pipelines
- Containerization using Docker
Good-to-Have Skills
AI & LLM Integration
- Experience integrating Generative AI models
- Exposure to:
- OpenAI
- Vertex AI
- LLM gateways
- Retrieval-Augmented Generation (RAG)
Frontend Exposure
Basic familiarity with frontend frameworks such as:
- React
- Next.js
- Angular
Understanding how backend APIs integrate with UI applications
Advanced Data Stores
Experience with:
- Vector databases (Pinecone, Milvus)
- Knowledge graphs
Domain Knowledge
- Experience in Life Insurance or BFSI sector
- Understanding of enterprise data governance and compliance standards
We are seeking a highly skilled Senior Backend Developer with deep expertise in Python and FastAPI to join our team. This role focuses on building high-performance, scalable backend services capable of handling high request volumes while integrating advanced LLM technologies.
The ideal candidate will design robust distributed systems, implement efficient data storage solutions, and ensure enterprise-grade security within an Azure-based infrastructure. This is a great opportunity to work on AI/ML integrations and mission-critical applications requiring high performance and reliability.
Key Responsibilities:
Backend Development
- Design and maintain high-performance backend services using Python and FastAPI
- Implement advanced FastAPI features such as dependency injection, middleware, and async programming
- Write comprehensive unit tests using pytest
- Design and maintain Pydantic schemas
High-Concurrency Systems
- Implement asynchronous code for high-volume request processing
- Apply concurrency patterns and atomic operations to ensure efficient system performance
Data & Storage
- Optimize MongoDB operations
- Implement Redis caching strategies (TTL, performance tuning, caching patterns)
Distributed Systems
- Implement rate limiting, retry logic, failover mechanisms, and region routing
- Build microservices and event-driven architectures
- Work with EventHub, Blob Storage, and Databricks
AI/ML Integration
- Integrate OpenAI API, Gemini API, and Claude API
- Manage LLM integrations using LiteLLM
- Optimize AI service usage within the Azure ecosystem
Security
- Implement JWT authentication
- Manage API keys and encryption protocols
- Implement PII masking and data security mechanisms
Collaboration
- Work with cross-functional teams on architecture and system design
- Contribute to engineering best practices and technical improvements
- Mentor junior developers where required
Must-Have Skills & Requirements
Experience
- 7+ years of hands-on Python backend development
- Bachelor’s degree in Computer Science, Engineering, or related field
- Experience building high-traffic, scalable systems
Core Technical Skills
Python
- Advanced knowledge of asynchronous programming, concurrency, and atomic operations
FastAPI
- Expert-level experience with dependency injection, middleware, and async code
Testing
- Strong experience with pytest and Pydantic schemas
Databases
- Hands-on experience with MongoDB and Redis
- Strong understanding of caching patterns, TTL, and performance optimization
Distributed Systems
- Experience with rate limiting, retry logic, failover mechanisms, high concurrency processing, and region routing
Microservices
- Experience building microservices and event-driven systems
- Exposure to EventHub, Blob Storage, and Databricks
Cloud
- Strong experience working in Azure environments
AI Integration
- Familiarity with OpenAI API, Gemini API, Claude API, and LiteLLM
Security
- Implementation experience with JWT authentication, API keys, encryption, and PII masking
Soft Skills
- Strong problem-solving and debugging skills
- Excellent communication and collaboration
- Ability to manage multiple priorities
- Detail-oriented approach to code quality
- Experience mentoring junior developers
Good-to-Have Skills
Containerization
- Docker, Kubernetes (preferably within Azure)
DevOps
- CI/CD pipelines and automated deployment
Monitoring & Observability
- Experience with Grafana, distributed tracing, custom metrics
Industry Experience
- Experience in Insurance, Financial Services, or regulated industries
Advanced AI/ML
- Vector databases
- Similarity search optimization
- LangChain / LangSmith
Data Processing
- Real-time data processing and event streaming
Database Expertise
- PostgreSQL with vector extensions
- Advanced Redis clustering
Multi-Cloud
- Experience with AWS or GCP alongside Azure
Performance Optimization
- Advanced caching strategies
- Backend performance tuning
Who We Are
We're a DevOps and Automation company based in Bengaluru, India. We have successfully delivered over 170 automation projects for 65+ global businesses, including Fortune 500 companies that entrust us with their most critical infrastructure and operations. We're bootstrapped, profitable, and scaling rapidly by consistently solving real, impactful problems.
What We Value
- Ownership: As part of our team, you're responsible for strategy and outcomes, not just completing assigned tasks.
- High Velocity: We move fast, iterate faster, and amplify our impact, always prioritizing quality over speed.
Who we seek
We are looking for a Fullstack Developer Intern to join our Engineering team. You’ll build and improve internal products. This is a hands-on internship focused on learning by shipping. Your ultimate goal will be to build highly responsive and innovative AI based software solutions that meet our business needs.
We're looking for individuals who genuinely care, ship fast, and are driven to make a significant impact.
🌏 Job Location: Bengaluru (Work From Office)
What You Will Be Doing
- Build user-facing features using Next.js and TypeScript.
- Convert designs into responsive UI using Tailwind CSS and reusable components.
- Work with APIs to integrate frontend with backend services.
- Implement common product workflows: authentication, forms, dashboards, tables, and navigation.
- Fix bugs, write clean code, and improve performance.
- Collaborate in a PR-based workflow on GitHub.
- Write and maintain documentation for the features you ship.
- Learn and apply best practices: component structure, state management, error handling, accessibility basics.
What We’re Looking For
- Basic to intermediate experience with JavaScript and NextJS.
- Familiarity with TypeScript basics.
- Comfortable with HTML/CSS and responsive design, Tailwind CSS is a plus.
- Understanding of how APIs work and how to consume them from the frontend.
- Strong Git knowledge.
- Strong learning mindset, ownership, and attention to detail.
Benefits
- Work directly with founders and the leadership team.
- Drive projects that create real business impact, not busywork.
- Gain practical skills that traditional education misses.
- Experience rapid growth as you tackle meaningful challenges.
- Fuel your career journey with continuous learning and advancement paths.
- Thrive in a workplace where collaboration powers innovation daily.
👉 Job Title: Senior Backend Developer
🌟 Experience: 5-7 Years
💡Location: Bangalore
👉 Notice Period :- Immediate Joiners
💡 Work Mode - 5 Days work from Office
( Candidate Serving notice period are preffered)
Role Summary
We are seeking a Senior Backend Developer with strong expertise in Python and FastAPI to build scalable, high-performance backend systems integrated with LLM technologies on Azure. The role involves designing distributed systems, optimizing data pipelines, and ensuring secure, enterprise-grade applications.
Key Responsibilities
- Develop backend services using Python & FastAPI (async, middleware)
- Build high-concurrency, scalable systems and microservices
- Work with Azure services and event-driven architectures
- Optimize MongoDB & Redis for performance
- Integrate LLM APIs (OpenAI, Gemini, Claude)
- Implement security (JWT, encryption, API management)
Mandatory Skills (Top 3)
- Strong Python backend development with FastAPI
- Hands-on experience with Microsoft Azure cloud
- Experience in building scalable distributed/microservices systems
Good to Have
- Docker, Kubernetes, CI/CD
- LLM frameworks (LangChain, vector DBs)
- Monitoring tools and real-time data processing
👉 Job Title: Backend Developer
🌟 Experience: 5-7 Years
💡Location: Bangalore
👉 Notice Period :- Immediate Joiners
💡 Work Mode - 5 Days work from Office
( Candidate Serving notice period are preffered)
Role Summary
We are looking for a Backend Engineer to join the Platform Implementation Team, responsible for building scalable, secure, and high-performance backend systems for a multi-cloud Data & AI platform. You will design microservices, develop REST APIs, and enable seamless data integration across enterprise systems like CRM and ERP.
💫 Key Responsibilities
✅ Design and develop scalable microservices and RESTful APIs
✅ Build event-driven architectures for asynchronous processing
✅ Integrate backend systems with cloud platforms (GCP/Azure)
✅ Ensure secure, reliable, and optimized data handling
✅ Collaborate with cross-functional teams (UI, Data, Platform)
✅ Follow best practices in coding, testing, CI/CD, and containerization
💫 Mandatory Skills (Top 3)
✅ Strong backend programming experience (Python / Node.js / Java)
✅ Expertise in API development & Microservices architecture
✅ Hands-on experience with Cloud platforms (GCP or Azure)


























