50+ DevOps Jobs in India
AI Engineer (Generative AI & Healthcare)
Location: Remote (India)
Experience: 3 - 12 Years
Domain: HealthTech
The Mission
We are looking for a heavy-hitter AI Engineer to bridge the gap between unstructured clinical chaos and structured medical insights. You won’t just be "playing" with prompts; you will build production-grade, HIPAA-compliant systems that handle sensitive EMR/EHR data using state-of-the-art RAG architectures and Agentic workflows.
What You’ll Do
- Architect & Deploy: Build end-to-end GenAI pipelines and Agentic Workflows capable of navigating complex clinical logic.
- Clinical NLP: Transform unstructured doctor’s notes into structured insights using BioBERT, ClinicalBERT, or custom fine-tuned LLMs.
- Data Orchestration: Work seamlessly within HL7/FHIR standards to ensure interoperability.
- Production Excellence: Deploy and scale models in a robust environment using Docker, Kubernetes, and AWS.
- Security First: Architect every solution with a HIPAA-compliant mindset, ensuring the absolute integrity of sensitive patient data.
Your Technical Toolkit
- Generative AI: Expert-level RAG, Fine-tuning, and Prompt Engineering.
- Healthcare Tech: Deep familiarity with EMR/EHR systems and HL7/FHIR protocols.
- Engineering: 3+ years of experience with Python, LangChain/LlamaIndex, and vector databases.
- DevOps: Proven track record of deploying "production-grade" AI (not just notebooks) on AWS/K8s.
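The retrieval step behind the RAG architectures named above can be sketched as follows. This is a conceptual illustration only: the clinical notes are made-up examples, and a bag-of-words cosine similarity stands in for the embeddings and vector database a production system would use (e.g. via LangChain or LlamaIndex).

```python
# Minimal illustration of the retrieval step in a RAG pipeline.
# A real system would embed documents and query a vector database;
# bag-of-words cosine similarity keeps this sketch self-contained.
import math
from collections import Counter

def bow(text):
    """Lowercased bag-of-words vector as a Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query."""
    q = bow(query)
    ranked = sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)
    return ranked[:k]

notes = [
    "patient reports chest pain radiating to left arm",
    "follow-up visit for type 2 diabetes management",
    "chest x-ray shows no acute abnormality",
]
context = retrieve("chest pain", notes)
# The retrieved context would then be injected into the LLM prompt.
```

In a real pipeline the retrieved passages are concatenated into the prompt so the model answers from the clinical record rather than from its parametric memory.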
Interested? Apply with a summary of your most complex production-grade AI project.
Job opportunity for Developer - Python Full Stack with Siemens at Bangalore.
Interview Process:
1st round of interview - F2F (in-person) - Technical
2nd round of interview - F2F/Virtual - Technical
3rd round of interview - Virtual - Technical + HR
Job Title / Designation: Developer - Python Full Stack
Employment Type: Full Time, Permanent
Location: Bangalore
Experience: 3-5 Years
Job Description: Developer - Python Full Stack
We are looking for a Python full-stack expert with proven 5+ years of experience developing automation solutions in Linux-based environments. You should be capable of building Python-based web applications or automation solutions, with excellent knowledge of database handling and decent knowledge of Kubernetes-based deployment environments.
Required Skills:
- Solid experience in Python back-end technology
- Sound experience in web application development
- Decent knowledge and experience in UI development using JavaScript, React/Angular or related tech stack.
- Strong understanding of software design patterns and testing principles
- Ability to learn and adapt to working with multiple programming languages.
- Experience with Docker, ArgoCD, Kubernetes, and Terraform
- Understanding of ETL processes to extract data from different data sources is a plus.
- Proven experience in Linux development environments using Python.
- Excellent knowledge in interacting with database systems (SQL, NoSQL) and webservices (REST)
- Experienced in establishing an optimized CI / CD environment relevant to the project.
- Good knowledge of repository management tools like Git, Bitbucket, etc.
- Excellent debugging skills/strategies.
- Excellent communication skills
- Experienced in working in an Agile environment.
Nice to have
- Good knowledge of the Eclipse IDE; experience developing add-ons/plugins on the Eclipse platform.
- Knowledge of 93K Semiconductor test platforms
- Good know-how of agile management tools like Jira, Azure DevOps.
- Good knowledge of RHEL
- Knowledge of JIRA administration
NOW HIRING · WORLD-CLASS TALENT
Backend Tech Lead (Senior Level Engineering Leadership)
Placed by Recruiting Bond on behalf of a Confidential Digital Platform Leader
📍Location: Bengaluru, India (Hybrid / On-Site)
🏢Sector: Technology, Information & Media
👥Company Size: 500 – 1,000 Employees
💼Employment: Full-Time, Permanent
🎯Experience: 6 – 9 Years (Backend Engineering)
🚀 Level: Tech Lead
ABOUT THIS MANDATE
Recruiting Bond has been exclusively retained by one of India's most well-established digital platform organisations — a company operating at the intersection of Technology, Information, and Media — to identify and place a world-class Backend Tech Lead who can drive a transformational engineering agenda at scale.
This is not an ordinary role. The organisation is executing a high-stakes, large-scale modernisation of its backend infrastructure — migrating from legacy monolithic systems to resilient, cloud-native, AI-augmented distributed architectures that serve millions of concurrent users. The person in this seat will be a core pillar of that transformation.
We are looking exclusively for the top 1% — engineers who think in systems, own outcomes, and lead by example.
THE OPPORTUNITY AT A GLANCE
🏗️ Architecture Ownership
Drive system design decisions across the entire backend platform. Shape the future of distributed, fault-tolerant architecture.
🤖 AI-Augmented Engineering
Embed GenAI and LLM tooling directly into the SDLC. Champion automation-first development practices across squads.
🎓 Engineering Leadership
Mentor and grow the next generation of backend engineers. Lead hiring, reviews, and cross-functional technical alignment.
KEY RESPONSIBILITIES
1. Architecture & Platform Modernisation
- Lead the full migration of legacy monolithic systems to a scalable, cloud-native microservices architecture
- Design and own distributed, fault-tolerant backend systems with sub-millisecond SLO targets
- Architect API-first and event-driven platforms using async messaging patterns (Kafka, Pub/Sub, SQS)
- Resolve systemic performance bottlenecks, concurrency conflicts, and scalability ceilings
- Establish backend design standards, coding guidelines, and architectural review processes
2. Distributed Systems Engineering (Production-Grade)
- Design and implement Webhook reliability frameworks with intelligent retry and exponential backoff strategies
- Build idempotent, versioned APIs with enterprise-grade rate limiting and throttling controls
- Implement circuit breakers, bulkheads, and resilience patterns using Resilience4j / Hystrix or equivalents
- Engineer Dead-Letter Queue (DLQ) strategies and event reprocessing pipelines with guaranteed delivery semantics
- Apply Saga orchestration and choreography patterns for distributed transaction integrity
- Execute zero-downtime deployments and canary release strategies with rollback capability
- Design and enforce multi-region disaster recovery and business continuity protocols
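Two of the resilience patterns listed above — retry with exponential backoff and the circuit breaker — can be sketched in a few lines. This is a minimal illustration, not the organisation's implementation; production systems would typically rely on a library such as Resilience4j on the JVM, and the thresholds and delays here are arbitrary example values.

```python
# Sketch of two resilience patterns: retry with exponential backoff,
# and a minimal circuit breaker. Thresholds/delays are illustrative.
import time

def retry_with_backoff(fn, attempts=4, base_delay=0.01):
    """Call fn, retrying on exception with exponentially growing delays."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # budget exhausted: surface the last error
            time.sleep(base_delay * (2 ** i))  # 0.01, 0.02, 0.04, ...

class CircuitBreaker:
    """Opens after `threshold` consecutive failures; rejects calls while open."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    def call(self, fn):
        if self.failures >= self.threshold:
            raise RuntimeError("circuit open")  # fail fast, protect downstream
        try:
            result = fn()
            self.failures = 0  # any success resets the failure count
            return result
        except Exception:
            self.failures += 1
            raise
```

The backoff prevents a struggling downstream service from being hammered by synchronized retries; the breaker stops sending traffic entirely once failures pass a threshold, which is the behaviour libraries like Resilience4j provide with half-open probing and metrics on top.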
3. AI-Driven Engineering Practices
- Champion LLM and GenAI adoption as first-class tooling across the software development lifecycle
- Apply prompt engineering techniques for automated code generation, review, and documentation workflows
- Utilise AI-assisted debugging, root cause analysis, and predictive performance optimisation
- Build automation-first pipelines that reduce toil and accelerate delivery velocity
- Evaluate and integrate emerging AI developer tools into the engineering ecosystem
4. Engineering Leadership & Culture
- Own backend platforms end-to-end with full accountability across development, stability, and performance
- Actively mentor, coach, and elevate engineers at all levels (L3–L6) through structured 1:1s and code reviews
- Drive and lead technical hiring — from designing assessments to final hire decisions
- Partner with Product, Data, DevOps, and Security stakeholders to align engineering with business objectives
- Represent the engineering org in cross-functional roadmap planning and architecture decision reviews
- Foster a culture of technical excellence, psychological safety, and high-velocity delivery
TECHNOLOGY STACK (HANDS-ON PROFICIENCY REQUIRED)
Languages: Java (primary) · Go · Python · Node.js · PHP · Rust
Cloud: AWS · GCP · Azure (Multi-cloud exposure preferred)
Containers: Docker · Kubernetes · Helm · Service Mesh (Istio / Linkerd)
Databases: PostgreSQL · MySQL · MongoDB · Cassandra · Redis · Elasticsearch
Messaging: Apache Kafka · RabbitMQ · AWS SQS/SNS · Google Pub/Sub
Observability: Datadog · Prometheus · Grafana · OpenTelemetry · Jaeger · ELK Stack
CI/CD & IaC: GitHub Actions · Jenkins · ArgoCD · Terraform · Ansible
AI & GenAI: OpenAI / Claude APIs · LangChain · RAG Pipelines · GitHub Copilot · Cursor
QUALIFICATIONS & CANDIDATE PROFILE
Education
- B.E. / B.Tech or M.E. / M.Tech from a Tier-I or Tier-II Institution — CS, IS, ECE, AI/ML streams strongly preferred
- Exceptional real-world engineering track record may be considered in lieu of institution pedigree
Experience
- 6 to 9 years of progressive backend engineering experience with demonstrable ownership and impact
- Proven track record of shipping and scaling production SaaS / Product systems at significant user load
- Exposure to and success within start-up, mid-size, and large-scale product organisations — the full spectrum
- Strong computer science fundamentals: algorithms, data structures, distributed systems theory, OS internals
- Demonstrated career stability — minimum 2 years average tenure per organisation
The Ideal Candidate Exemplifies
- System-level thinking with an ability to hold context across code, architecture, product, and business
- An ownership mindset — no task is 'not my job'; outcomes and quality are personal commitments
- Strong written and verbal communication skills for asynchronous, cross-functional collaboration
- Intellectual curiosity: actively follows engineering trends, contributes to the community (OSS, blogs, talks)
- Bias for automation, observability, and engineering efficiency at every level
- A mentor's instinct — genuine desire to grow others and raise the capability of the team around them
WHY THIS ROLE STANDS APART
✅ Transformational Scope
Lead platform modernisation at scale. Your architectural choices will define systems serving millions of users for years.
✅ AI-Forward Engineering Culture
Be at the forefront of AI-augmented development. This org invests in tools and practices that make great engineers exceptional.
✅ Established, Stable Platform
Join a company with 500–1,000 employees, proven product-market fit, and the resources to execute on a serious technical vision.
✅ Career-Defining Leadership
Operate with strategic influence, direct access to senior leadership, and a clear path toward Principal / Staff / VP Engineering.
HOW TO APPLY
This search is being managed exclusively by Recruiting Bond
Submit your application with an updated resume
Only shortlisted candidates will be contacted. All applications are treated with the strictest confidentiality.
⚡ We move fast — qualified candidates can expect a response within 48–72 business hours.
Recruiting Bond | Bengaluru, Karnataka, India | 2026
Senior Backend Engineer
The ideal candidate will have a strong background in building scalable applications, a deep understanding of back-end technologies, and experience with cloud infrastructure. As a Back End Engineer, you will be responsible for designing, developing, and maintaining a scalable workflow management system. You will work closely with cross-functional teams to build robust and efficient applications that meet the needs of our users. Your expertise in Scala, Python, AI Agents/APIs, and GCP will be crucial in ensuring our system is reliable, performant, and scalable.
Key Responsibilities:
Back-End Development:
- Build and maintain back-end services and APIs using Scala.
- Implement and optimize the orchestration workflow system, including database queries and operations.
- Build API integrations with third-party APIs and services.
- Ensure robust and scalable server-side logic.
Cloud Integration:
- Deploy, manage, and monitor applications on Google Cloud Platform (GCP).
- Utilize GCP services to enhance application performance and scalability.
- Implement cloud-based solutions for data storage, processing, and analytics.
Collaboration And Communication:
- Work closely with cross-functional teams to define, design, and ship new features.
- Participate in code reviews and contribute to sharing team knowledge.
- Document development processes, coding standards, and project requirements.
Qualifications:
- Educational Background:
- Master's or Bachelor's degree in Computer Science, Engineering, or a related field.
- Technical Skills:
- Proficiency in Scala programming language.
- Strong experience with React (ReactJS).
- Familiarity with Google Cloud Platform (GCP) and its services.
- Knowledge of front-end development tools and best practices.
- Understanding of RESTful API design and implementation.
- Soft Skills:
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration abilities.
- Eagerness to learn and adapt to new technologies and challenges.
Preferred Qualifications:
- Experience with version control systems such as Git.
- Familiarity with CI/CD pipelines and DevOps practices.
- Understanding of workflow management systems and their requirements.
- Experience with containerization technologies like Docker.
Must have Skills
- Scala - 4 Years
- React.js - 1 Year
- RESTful API - 4 Years
- Docker - 2 Years
- Python - 3 Years
- Artificial Intelligence - 2 Years
Key Responsibilities
- Design, build, and maintain highly available and scalable cloud infrastructure on Microsoft Azure.
- Implement and manage CI/CD pipelines using Azure DevOps (YAML-based pipelines preferred).
- Define and monitor SLIs, SLOs, and error budgets to improve service reliability.
- Automate infrastructure provisioning using IaC tools (ARM, Bicep, Terraform).
- Monitor system health using Azure Monitor, Log Analytics, and Application Insights.
- Perform incident response, root cause analysis (RCA), and postmortems.
- Improve system resilience through automation, self-healing, and performance tuning.
- Collaborate with development teams to implement DevOps and reliability best practices.
- Manage containerized workloads using Docker and Kubernetes (AKS preferred).
- Implement security, compliance, and governance controls in CI/CD workflows.
- Optimize cloud costs and resource utilization (FinOps awareness).
- Maintain runbooks, SOPs, and operational documentation.
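The SLI/SLO/error-budget item above boils down to simple arithmetic, sketched below. The 99.9% availability target and the request counts are example values, not figures from this role.

```python
# Illustration of SLO / error-budget arithmetic: an SLO of 99.9%
# availability over 1,000,000 requests permits 1,000 failures;
# each failure consumes part of that budget.

def error_budget(slo, total_requests, failed_requests):
    """Return (allowed_failures, remaining_budget_fraction)."""
    allowed = total_requests * (1 - slo)
    remaining = 1 - (failed_requests / allowed) if allowed else 0.0
    return allowed, remaining

# 250 failures against a 1,000-failure budget leaves 75% of it.
allowed, remaining = error_budget(0.999, 1_000_000, 250)
```

Teams typically alert on the *burn rate* of this budget (how fast `remaining` is shrinking) rather than on raw error counts, which is what tools like Azure Monitor alert rules are configured to track.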
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 3–8 years of experience in SRE, DevOps, or Production Engineering roles.
- Hands-on experience with Microsoft Azure cloud services.
- Strong experience with Azure DevOps pipelines (CI/CD).
- Solid scripting skills (PowerShell, Bash, or Python).
- Experience with Infrastructure as Code (Terraform, ARM, or Bicep).
- Strong understanding of Linux/Windows system administration.
- Experience with monitoring and observability tools.
- Good understanding of networking concepts and cloud security basics.
- Experience with Git and modern branching strategies.
About the role
We’re hiring an IT Systems Administrator for an NBFC to secure endpoints, SaaS, and networks across ~50 branches, ~250+ field staff, and ~50+ office users.
This is primarily an IT Admin + Security role, with secondary exposure to AWS cloud ops + light DevOps + basic DB access management.
If you’re an IT Admin aiming to break into AWS Cloud Ops + DevOps, this role is a strong next step — you’ll own core IT/security and get hands-on exposure to cloud operations and deployments.
Key responsibilities (Primary: IT Admin + Security)
- Manage endpoint security for laptops and mobiles (policies, patching, encryption, antivirus/EDR); drive MDM implementation now/future (e.g., Intune/Jamf).
- Administer Google Workspace (Gmail/Drive/Calendar): users, groups, permissions, SSO, MFA, sharing controls.
- Own joiner–mover–leaver lifecycle: provisioning/deprovisioning, access controls, periodic access reviews.
- Secure branch connectivity: VPN, internal Wi-Fi, internet usage controls; coordinate troubleshooting and standardization across branches.
- Manage HO security stack: firewall operations, rule changes with change control, monitoring/log review (basic but consistent).
- Secure SaaS tools (CRM/HRMS/comms like Slack/Zoom): role-based access, MFA enforcement, offboarding, integration/OAuth controls.
- Maintain IT asset inventory: procurement coordination, issuance/return, audits, warranty/AMC, license renewals; remote lock/wipe for lost devices.
- Handle security incidents: phishing, account compromise, device loss/theft — contain, investigate, recover, and prevent recurrence.
- Run backups and basic DR testing; maintain SOPs/documentation and train staff on cyber hygiene.
- Provide hands-on user support: laptop builds, software installs, Outlook/Excel issues, VPN/Wi-Fi troubleshooting, escalations and vendor coordination.
Secondary responsibilities (AWS + DevOps + DB ops support)
- Support AWS administration: IAM users/roles/policies, MFA, access key hygiene, basic log review (e.g., CloudTrail).
- Manage AWS access controls: security groups/firewall rules, IP allowlists/whitelisting (admin tools, databases, vendor access).
- Assist engineering with DevOps operations:
- CI/CD support (deployment coordination, rollbacks, environment configuration)
- Secrets/credentials management and rotation (no shared creds)
- DNS + SSL/TLS certificates, basic monitoring/alerting coordination
- Bonus: Docker/Kubernetes and Terraform exposure
- Basic database operations (admin-lite):
- DB user creation, roles/permissions, least-privilege access
- IP allowlisting/whitelisting for DB access via VPN/approved sources
- Backup/restore verification coordination and basic monitoring signals (connections/storage)
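The least-privilege and IAM-hygiene items above often start with a simple review: scanning policy documents for over-broad grants. The sketch below checks an IAM-style policy JSON for wildcard actions or resources; the policy document itself is a made-up example, not from any real account.

```python
# Sketch of a least-privilege check: flag Allow statements whose
# actions or resources are wildcards. Example policy is fictitious.
import json

def find_wildcards(policy):
    """Return Sids of statements that allow '*' actions or '*' resources."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if isinstance(actions, str):       # IAM allows string or list
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            findings.append(stmt.get("Sid", "<no Sid>"))
    return findings

policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Sid": "ReadLogs", "Effect": "Allow",
     "Action": ["logs:GetLogEvents"], "Resource": ["arn:aws:logs:*:*:*"]},
    {"Sid": "TooBroad", "Effect": "Allow",
     "Action": "s3:*", "Resource": "*"}
  ]
}
""")
flagged = find_wildcards(policy)  # ["TooBroad"]
```

A real review would pull policies via the AWS CLI or IAM Access Analyzer rather than inline JSON, but the rule being checked is the same.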
Requirements
- 3+ years in IT security / systems administration (BFSI or branch-heavy org preferred).
- Hands-on with Google Workspace administration.
- Strong endpoint/security fundamentals: encryption, patching, AV/EDR, remote support, device compliance.
- Comfortable with networks: VPN/Wi-Fi/LAN troubleshooting; firewall basics and change discipline.
- Strong operational discipline: asset tracking, vendor management, documentation, ticketing, user communication.
- Practical AWS familiarity (IAM, access controls, logging) and ability to support DevOps workflows.
Nice to have
- Experience implementing MDM at scale (Intune/Jamf/SureMDM).
- Exposure to SOC2 / ISO27001 evidence, controls, and audit workflows.
- Scripting for automation (PowerShell/Bash/Python).
- Familiarity with managed databases and secure access patterns.
Hiring: AWS DevOps Developer
📍 Location: Bangalore
🧑💻 Experience: 4–7 Years
📌 Job Summary
We are looking for a skilled AWS DevOps Developer with strong experience in AWS cloud infrastructure, CI/CD automation, containerization, and Infrastructure as Code. The ideal candidate should have hands-on experience building scalable and secure cloud environments.
🛠 Required Technical Skills
☁️ AWS Services
- Amazon EC2
- Amazon S3
- IAM
- VPC
- Amazon EKS
- RDS
- Route 53
- CloudWatch
- AWS Lambda
🔄 DevOps & CI/CD
- Jenkins (Pipelines, Shared Libraries)
- Git / GitHub
- Maven / Build tools
- CI/CD pipeline design & implementation
🐳 Containers & Orchestration
- Docker
- Kubernetes (EKS preferred)
- Helm
🏗 Infrastructure as Code
- Terraform
- Ansible
📊 Monitoring & Logging
- CloudWatch
- Prometheus
- Grafana
📋 Roles & Responsibilities
- Design and implement scalable AWS infrastructure
- Build and maintain CI/CD pipelines
- Deploy containerized applications using Docker & Kubernetes
- Automate infrastructure provisioning using Terraform
- Implement monitoring and alerting solutions
- Ensure security, compliance, and cost optimization
- Troubleshoot production issues and improve system reliability
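The "automate infrastructure provisioning using Terraform" responsibility above is typically expressed as declarative HCL like the sketch below. All names, the AMI ID, and the region are placeholders, not details of this employer's environment.

```hcl
# Illustrative Terraform sketch; resource names, AMI, and region
# are placeholders for whatever the actual environment requires.
provider "aws" {
  region = "ap-south-1"
}

resource "aws_s3_bucket" "artifacts" {
  bucket = "example-ci-artifacts" # placeholder bucket name
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name        = "app-server"
    Environment = "dev"
  }
}
```

Running `terraform plan` against such a file shows the proposed changes before `terraform apply` provisions them, which is what makes IaC reviewable in the same way as application code.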
➕ Good to Have
- AWS Certification (Solutions Architect / DevOps Engineer)
- Experience with Microservices architecture
- Knowledge of DevSecOps practices
- Experience in Agile methodology
Dot Net Full Stack Developer
Job Overview
We are seeking a skilled .NET Developer who can design, develop, and maintain both conventional .NET applications and modern cloud-ready solutions. The ideal candidate will have expertise in Microsoft SQL Server, Azure DevOps CI/CD, Azure AD-based SSO, and integration with enterprise applications using MuleSoft APIs. The role also involves modernizing legacy applications, migrating to Azure Cloud, and building responsive web applications using Razor Pages, Bootstrap, and jQuery, as well as modern alternatives like Blazor, Tailwind CSS, and React/Angular.
Responsibilities:
- Develop and maintain .NET Framework (4.x) and .NET 9 applications.
- Build responsive web applications using Razor Pages, Bootstrap v5.3.3, and jQuery 3.7.1.
- Document functionality through reverse engineering and communication with other developers; draw architecture diagrams and maintain application documentation.
- Design and optimize SQL Server schemas, stored procedures, and queries.
- Integrate .NET applications with enterprise systems via MuleSoft APIs.
- Implement Single Sign-On (SSO) using Azure Active Directory.
- Design and maintain CI/CD pipelines using Azure DevOps.
- Migrate legacy .NET Framework apps to .NET 9 and deploy to Azure.
- Implement containerization using Docker and orchestration with Kubernetes.
- Ensure application security, scalability, and performance optimization.
- Collaborate with architects, QA, and business teams in agile environments.
- Develop and enhance software products primarily used in Europe; the ability to provide support during CET timezone hours is a must.
Required Framework & Technologies
- .NET Framework (4.x) and .NET 9
- C# programming language
- ASP.NET MVC, ASP.NET Core, Razor Pages
- Bootstrap CSS Framework v5.3.3
- jQuery 3.7.1
- Modern Alternatives: Blazor (Server/WebAssembly), Tailwind CSS, React, Angular
- Entity Framework Core, LINQ, Dapper
- Microsoft SQL Server (T-SQL, Stored Procedures, Performance Tuning)
- MuleSoft API Integration
- Azure Active Directory (SSO, OAuth, JWT)
- Azure DevOps (CI/CD pipelines, Release Management)
- Git, YAML pipelines
- Azure App Services, Azure Functions, Azure Kubernetes Service (AKS)
- Docker, Kubernetes
- Application Insights, Azure Monitor
Preferred Qualifications
- Bachelor's degree in Computer Science, Information Technology, or related field.
- Strong proficiency in C# and .NET technologies including .NET 9.
- Experience with Razor Pages, Bootstrap, and jQuery for front-end development.
- Familiarity with modern alternatives like Blazor, Tailwind CSS, and React/Angular.
- Hands-on experience with Azure DevOps and CI/CD pipelines.
- Knowledge of Azure AD authentication and SSO implementation.
- Experience in integrating applications using MuleSoft APIs.
- Familiarity with cloud migration strategies and Azure services.
Experience
- 3+ years of experience in .NET application development.
- 2+ years of experience in Azure Cloud ecosystem and DevOps.
- Experience in migrating legacy applications to modern .NET platforms.
- Experience in containerization and orchestration (Docker, Kubernetes).
What is in it for you?
- Opportunity to work on a technically challenging and impactful product.
- Joining a values-driven, employee-centric organization that prioritizes well-being.
- Being part of a growing start-up setting new standards for employee experience while delivering breakthrough digital products.
- Exposure to an international distributed work environment with industry-leading clients.
- A hybrid-first setup, giving you the flexibility to work from anywhere 40 percent of the week.
- First-hand experience working directly with large client organizations, solving meaningful challenges (not in an “outsourced” model).
- Collaborative and supportive team environment that values empathy and camaraderie.
- Professional development and continuous learning opportunities.
- Competitive salary package and a strong emphasis on work-life balance.
Must have Skills
- .Net - 3 Years
- DevOps - 2 Years
- C# - 3 Years
- .NET 9 - 3 Years
- Razor Pages - 2 Years
- ASP.NET Core - 2 Years
- ASP.NET MVC - 2 Years
- Blazor - 3 Years
- Azure DevOps - 2 Years
- Docker - 2 Years
- Kubernetes - 2 Years
- Microsoft SQL Server - 2 Years
- YAML - 3 Years
- Azure Monitor - 2 Years
- CI/CD pipeline - 2 Years
A DevOps Engineer plays a crucial role in enhancing and streamlining IT operations, particularly in the context of cloud computing and agile software development.
Responsible for managing the cloud infrastructure and CI/CD pipelines.
Works as an integral member of the software delivery team, delivering world-class, scalable, and robust solutions.
Major Functions/Responsibilities:
- Design, implement, and manage CI/CD pipelines on Microsoft Azure using Azure DevOps Services
- Automate deployment processes and infrastructure management using tools like Terraform, ARM templates or Azure CLI
- Create, configure, and execute on-going or newly proposed processes for multiple projects.
- Manage version control systems and ensure proper branching, merging and release workflows.
- Collaborate with development teams to design and implement cloud-native applications and scalable solutions.
- Manage Azure services like Azure Functions, App Services, Azure Kubernetes Service (AKS), and Azure Container Registry (ACR)
- Implement monitoring and logging strategies using tools like Azure Monitor, Azure Log Analytics, and Application Insights.
- Identify areas for improvement within processes and practices.
- Optimize Azure resources to minimize costs while ensuring performance and scalability
- Strong problem-solving skills and the ability to troubleshoot complex issues.
- Create system documentation for training and reference.
- Implement security best practices including RBAC, Key Vault, Network Security, identity & access management.
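The pipeline responsibilities above are usually defined in an `azure-pipelines.yml` file. The fragment below is a minimal, hedged sketch: the pool image, build commands, app name, and the `my-service-connection` service connection are placeholders, not details of this employer's setup.

```yaml
# Minimal azure-pipelines.yml sketch; names and paths are placeholders.
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

steps:
  - script: |
      python -m pip install -r requirements.txt
      python -m pytest
    displayName: Build and test

  - task: AzureCLI@2
    displayName: Deploy to App Service
    inputs:
      azureSubscription: my-service-connection   # placeholder connection
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: az webapp up --name example-app
```

Keeping the pipeline definition in the repository alongside the code is what makes branching, review, and rollback of the pipeline itself possible, which is why YAML pipelines are generally preferred over classic GUI-defined ones.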
Recommended Education/Experience/ Skills:
- 6+ years of DevOps experience in improving efficiency and achieving Continuous Integration, Continuous Testing and Continuous Deployment.
- 2+ years of experience with infrastructure automation tools (such as Terraform, Ansible, Chef).
- Knowledge of AI in the context of NLP or other specialized AI fields and experience in AI based applications
- Experience with Docker, Azure Kubernetes Service for container management.
- Experience with Windows and Linux system administration with knowledge of installations, performance tuning, security, and shell scripting.
- Proven experience in building DevOps infrastructure and creating multiple environments.
- Experience with or strong understanding of modern service-oriented architecture.
- Experience with scripting languages such as PowerShell, Python or Bash
- Knowledge of networking load balancers.
- Understanding of cloud security principles.
- Experience with SDLC Management software and solutions and knowledge of Agile and Scrum methodologies
- Experience in integration of automated testing and deployment for cloud-based applications with Continuous Integration tools.
- Experience in any modern language (C#, HTML, CSS, Java, etc).
- Collaboration and communication skills to work with cross-functional teams.
- Expertise in version control systems like Azure DevOps, Git managing repositories.
- Relevant DevOps-related certifications, such as Microsoft's DevOps Engineer Expert or Azure AI certifications.
Description
SRE Engineer
Role Overview
As a Site Reliability Engineer, you will play a critical role in ensuring the availability and performance of our customer-facing platform. You will work closely with DevOps, DBA, and Development teams to provision and maintain infrastructure, deploy and monitor our applications, and automate workflows. Your contributions will have a direct impact on customer satisfaction and overall experience.
Responsibilities and Deliverables
• Manage, monitor, and maintain highly available systems (Windows and Linux)
• Analyze metrics and trends to ensure rapid scalability.
• Address routine service requests while identifying ways to automate and simplify.
• Create infrastructure as code using Terraform, ARM Templates, or CloudFormation.
• Maintain data backups and disaster recovery plans.
• Design and deploy CI/CD pipelines using GitHub Actions, Octopus, Ansible, Jenkins, Azure DevOps.
• Adhere to security best practices through all stages of the software development lifecycle
• Follow and champion ITIL best practices and standards.
• Become a resource for emerging and existing cloud technologies with a focus on AWS.
Organizational Alignment
• Reports to the Senior SRE Manager
• This role involves close collaboration with DevOps, DBA, and security teams.
Technical Proficiencies
• Hands-on experience with AWS is a must-have.
• Proficiency in analyzing application, IIS, system, and security logs, as well as CloudTrail events
• Practical experience with CI/CD tools such as GitHub Actions, Jenkins, Octopus
• Experience with observability tools such as New Relic, Application Insights, AppDynamics, or DataDog.
• Experience maintaining and administering Windows, Linux, and Kubernetes.
• Experience in automation using scripting languages such as Bash, PowerShell, or Python.
• Configuration management experience using Ansible, Terraform, Azure Automation Runbooks, or similar.
• Experience with SQL Server database maintenance and administration is preferred.
• Good understanding of networking (VNET, subnet, private link, VNET peering).
• Familiarity with cloud concepts including certificates, OAuth, Azure AD, ASE, ASP, AKS, Azure App Services, Load Balancers, Application Gateway, Firewall, API Management, SQL Server, and databases on Azure.
Experience
• 7+ years of experience in SRE or System Administration role
• Demonstrated ability building and supporting high availability Windows/Linux servers, with emphasis on the WISA stack (Windows/IIS/SQL Server/ASP.net)
• 3+ years of experience working with cloud technologies including AWS, Azure.
• 1+ years of experience working with container technology including Docker and Kubernetes.
• Comfortable using Scrum, Kanban, or Lean methodologies.
Education
• Bachelor’s Degree or College Diploma in Computer Science, Information Systems, or equivalent experience.
Additional Job Details:
• Working hours: 2:00 PM / 3:00 PM to 11:30 PM IST
• Interview process: 3 technical rounds
• Work model: 3 days per week from office
We are looking for a full-time DevOps Engineer to support and advance application delivery, infrastructure, and operational excellence initiatives. This role enables secure, reliable, and scalable releases across Clarity portals, integrations, and internal platforms, supporting modernization and consolidation efforts within the Microsoft ecosystem.
Job Responsibilities:
DevOps, CI/CD & Release Management:
- Design, build, and maintain CI/CD pipelines using Azure DevOps.
- Own and improve the release management process in partnership with engineering leadership.
- Standardize branching strategies, build definitions, and deployment patterns.
- Support controlled releases with approvals, rollback strategies, and audit trails.
SharePoint & DevOps Integrations
- Integrate SharePoint-based solutions with Azure DevOps pipelines and workflows.
- Align SharePoint customization and deployment practices with DevOps standards.
- Enable traceability between development work items, documentation, and releases.
Infrastructure & Linux Deployments
- Support and manage Linux server deployments.
- Automate infrastructure provisioning using Infrastructure as Code.
- Ensure secure, stable, and scalable environments across all stages.
Incident Management & Operational Support
- Collaborate with development teams and non-technical stakeholders during incidents.
- Communicate status, impact, and resolution clearly to leadership.
- Participate in root cause analysis and preventive improvements.
Security, Compliance & Governance
- Embed security and compliance controls into pipelines.
- Support HIPAA and benefits data protection requirements.
- Maintain documentation and audit readiness.
Required Skills:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 5+ years in DevOps or Infrastructure Engineering.
- Strong experience with Azure DevOps.
- Experience integrating SharePoint with DevOps workflows.
- Hands-on experience with Linux server deployments.
- Experience working in regulated or compliance-driven environments.
Build production-grade cloud infrastructure that powers enterprise applications with cutting-edge DevOps practices.
What you'll do:
- Design CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI)
- Containerize apps with Docker, deploy on Kubernetes clusters
- Manage infrastructure as code (Terraform, CloudFormation)
- Set up monitoring (Prometheus, Grafana, ELK Stack)
- Cloud migrations (e.g., AWS EC2, EKS, and RDS to their GCP equivalents)
- Optimize costs and performance for live production systems
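A minimal CI/CD pipeline of the kind described can be sketched with GitHub Actions; the registry path, image name, and build scripts below are placeholders:

```yaml
# .github/workflows/ci.yml - build-and-push sketch (image name is a placeholder)
name: ci
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t ghcr.io/example/app:${{ github.sha }} .
      - name: Push image
        run: |
          echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
          docker push ghcr.io/example/app:${{ github.sha }}
```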
What we need:
- Basic Python/Bash scripting
- Docker basics, Git workflows
- Cloud exposure (AWS/GCP/Azure free tier projects)
- Problem-solving mindset, eagerness to learn
Real impact:
- Deploy apps used by 1000+ daily users
- Work with senior DevOps engineers on client deliverables
- Build portfolio for FAANG-level interviews

Consumer Internet, Technology & Travel and Tourism Platform
Job Details
- Job Title: Lead DevOps Engineer
- Industry: Consumer Internet, Technology & Travel and Tourism Platform
- Function - IT
- Experience Required: 7-10 years
- Employment Type: Full Time
- Job Location: Bengaluru
- CTC Range: Best in Industry
Criteria:
- Strong Lead DevOps / Infrastructure Engineer Profiles.
- Must have 7+ years of hands-on experience working as a DevOps / Infrastructure Engineer.
- Candidate’s current title must be Lead DevOps Engineer (or equivalent Lead role) in their current organization.
- Must have minimum 2+ years of team management / technical leadership experience, including mentoring engineers, driving infrastructure decisions, or leading DevOps initiatives.
- Must have strong hands-on experience with Kubernetes (container orchestration) including deployment, scaling, and cluster management.
- Must have experience with Infrastructure as Code (IaC) tools such as Terraform, Ansible, Chef, or Puppet.
- Must have strong scripting and automation experience using Python, Go, Bash, or similar scripting languages.
- Must have working experience with distributed databases or data systems such as MongoDB, Redis, Cassandra, Elasticsearch, or Kafka.
- Must have strong hands-on experience in Observability & Monitoring, CI/CD architecture, and Networking concepts in production environments.
- (Company) – Must be from B2C Product Companies only.
- (Education) – B.E/ B.Tech
Preferred
- Experience working in microservices architecture and event-driven systems.
- Exposure to cloud infrastructure, scalability, reliability, and cost optimization practices.
- (Skills) – Understanding of programming languages such as Go, Python, or Java.
- (Environment) – Experience working in high-growth startup or large-scale production environments.
Job Description
As a DevOps Engineer, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.
Job Responsibilities:
- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
- Codify our infrastructure
- Do what it takes to keep the uptime above 99.99%
- Understand the bigger picture and sail through the ambiguities
- Scale technology considering cost and observability and manage end-to-end processes
- Understand DevOps philosophy and evangelize the principles across the organization
- Apply strong communication and collaboration skills to break down silos
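The 99.99% uptime target above translates into a concrete error budget. A small illustrative sketch of the arithmetic (not part of the posting):

```python
from datetime import timedelta

def error_budget(slo: float, window_days: int = 30) -> timedelta:
    """Allowed downtime for a given availability SLO over a rolling window."""
    return timedelta(days=window_days) * (1 - slo)

# 99.99% over a 30-day window leaves roughly 4.3 minutes of downtime
budget = error_budget(0.9999)
print(round(budget.total_seconds() / 60, 1))  # → 4.3
```

Framing uptime this way makes the target operational: every incident consumes minutes from a fixed budget rather than being judged against a vague "high availability" goal.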

Consumer Internet, Technology & Travel and Tourism Platform
Job Details
- Job Title: Senior DevOps Engineer
- Industry: Consumer Internet, Technology & Travel and Tourism Platform
- Function - IT
- Experience Required: 4-7 years
- Employment Type: Full Time
- Job Location: Bengaluru
- CTC Range: Best in Industry
Criteria:
- Strong DevOps / Infrastructure Engineer Profiles.
- Must have 4+ years of hands-on experience working as a DevOps Engineer / Infrastructure Engineer / SRE / DevOps Consultant.
- Must have hands-on experience with Kubernetes and Docker, including deployment, scaling, or containerized application management.
- Must have experience with Infrastructure as Code (IaC) or configuration management tools such as Terraform, Ansible, Chef, or Puppet.
- Must have strong automation and scripting experience using Python, Go, Bash, Shell, or similar scripting languages.
- Must have working experience with distributed databases or data systems such as MongoDB, Redis, Cassandra, Elasticsearch, or Kafka.
- Candidate must demonstrate strong expertise in at least one of the following areas: Databases / Distributed Data Systems, Observability & Monitoring, CI/CD Pipelines, Networking Concepts, Kubernetes / Container Platforms
- Candidates must be from B2C Product-based companies only.
- (Education) – BE / B.Tech or equivalent
Preferred
- Experience working with microservices or event-driven architectures.
- Exposure to cloud infrastructure, monitoring, reliability, and scalability practices.
- (Skills) – Understanding of programming languages such as Go, Python, or Java.
- Preferred (Environment) – Experience working in high-scale production or fast-growing product startups.
Job Description
As a DevOps Engineer, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.
Job Responsibilities:
- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
- Codify our infrastructure
- Do what it takes to keep the uptime above 99.99%
- Understand the bigger picture and sail through the ambiguities
- Scale technology considering cost and observability and manage end-to-end processes
- Understand DevOps philosophy and evangelize the principles across the organization
- Apply strong communication and collaboration skills to break down silos
Ideal Candidate
Strong DevOps / Infrastructure Engineer Profiles
Mandatory Requirements:
Experience 1:
Must have 4+ years of hands-on experience working as a DevOps Engineer / Infrastructure Engineer / SRE / DevOps Consultant.
Experience 2:
Must have hands-on experience with Kubernetes and Docker, including deployment, scaling, or containerized application management.
Experience 3:
Must have experience with Infrastructure as Code (IaC) or configuration management tools such as Terraform, Ansible, Chef, or Puppet.
Experience 4:
Must have strong automation and scripting experience using Python, Go, Bash, Shell, or similar scripting languages.
Experience 5:
Must have working experience with distributed databases or data systems such as MongoDB, Redis, Cassandra, Elasticsearch, or Kafka.
Experience 6:
Must demonstrate strong expertise in at least one of the following areas:
- Databases / Distributed Data Systems
- Observability & Monitoring
- CI/CD Pipelines
- Networking Concepts
- Kubernetes / Container Platforms
Company Background:
Candidates must be from B2C product-based companies only.
Education:
BE / B.Tech or equivalent.
Preferred:
- Experience working with microservices or event-driven architectures.
- Exposure to cloud infrastructure, monitoring, reliability, and scalability practices.
- Understanding of programming languages such as Go, Python, or Java.
- Experience working in high-scale production or fast-growing product startups.
Perks, Benefits and Work Culture
We take our work seriously and are proud of the associations we have built along the way. But we also know how to have fun. With a seamless communication structure and a “no cubicle culture,” the people here are extremely approachable. You will have several opportunities to exercise your potential. We break the regular office monotony and believe in a free-flowing work culture. It’s a great place to be, and we are confident you will enjoy working here.
At BigThinkCode, our technology solves complex problems. We are looking for a talented Cloud DevOps Engineer to join our Cloud Infrastructure team in Chennai.
Please find our job description below; if interested, apply or reply with your profile to connect and discuss.
Company: BigThinkCode Technologies
URL: https://www.bigthinkcode.com/
Role: DevOps Engineer
Experience required: 2–3 years
Work location: Chennai
Joining time: Immediate – 4 weeks
Work Mode: Work from office (Hybrid)
About the Role:
We are looking for a DevOps Engineer with 2+ years of hands-on experience to support our infrastructure, CI/CD pipelines, and cloud environments. The candidate will work closely with senior DevOps engineers and development teams to ensure reliable deployments, system stability, and operational efficiency. The ideal candidate should be eager to learn new tools and technologies as required by the project.
Key Responsibilities:
· Assist in designing, implementing, and maintaining CI/CD pipelines using Jenkins, GitHub Actions, or similar tools
· Deploy, manage, and monitor applications on AWS cloud environments
· Manage and maintain Linux servers (Ubuntu/CentOS)
· Support Docker-based application deployments and container lifecycle management
· Work closely with developers to troubleshoot build, deployment, and runtime issues
· Assist in implementing security best practices, including IAM, secrets management, and basic system hardening
· Document processes, runbooks, and standard operating procedures
Core Requirements:
· 2–3 years of experience in a DevOps / Cloud / Infrastructure role
· Core understanding of DevOps principles and cloud computing
· Strong understanding of Linux fundamentals
· Hands-on experience with Docker for application deployment and container management
· Working knowledge of CI/CD tools, especially Jenkins and GitHub Actions
· Experience with AWS services, including EC2, IAM, S3, VPC, RDS, and other basic services.
· Good understanding of networking concepts, including:
o DNS, ports, firewalls, security groups
o Load balancing basics
· Experience working with web servers such as Nginx or Apache
· Understanding of SSL/TLS certificates and HTTPS configuration
· Basic knowledge of databases (PostgreSQL/MySQL) from an operational perspective
· Ability to troubleshoot deployment, server, and application-level issues
· Willingness and ability to learn new tools and technologies as required for project needs
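One routine troubleshooting task implied by the SSL/TLS requirement above is checking certificate expiry. A small illustrative Python helper; the timestamp format matches what `ssl.SSLSocket.getpeercert()` reports as `notAfter`, and the dates below are made up:

```python
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> int:
    """Parse an OpenSSL-style notAfter string (as returned by getpeercert())
    and return the number of whole days before the certificate expires."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)  # notAfter is reported in GMT
    return (expires - now).days

# hypothetical certificate expiring on 2 March 2025, checked on 1 January 2025
now = datetime(2025, 1, 1, tzinfo=timezone.utc)
print(days_until_expiry("Mar  2 12:00:00 2025 GMT", now))  # → 60
```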
Nice to Have
· Experience with monitoring and logging tools such as Prometheus, Grafana, CloudWatch, ELK.
· Basic experience or understanding of Kubernetes.
· Basic Python or shell scripting to automate routine operational tasks.
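As an example of the kind of routine automation the last item refers to, a tiny Python check that flags high disk usage; in practice the numbers would come from `shutil.disk_usage`, and the 80% threshold here is an arbitrary illustration:

```python
def disk_alert(used_bytes: int, total_bytes: int, threshold: float = 0.8) -> str:
    """Return a WARN line when usage crosses the threshold, else OK."""
    pct = used_bytes / total_bytes
    status = "WARN" if pct >= threshold else "OK"
    return f"{status}: {pct:.0%} used"

print(disk_alert(85, 100))  # → WARN: 85% used
print(disk_alert(40, 100))  # → OK: 40% used
```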
Why Join Us:
· Collaborative work environment.
· Exposure to modern tools and scalable application architectures.
· Medical cover for employee and eligible dependents.
· Tax beneficial salary structure.
· Comprehensive leave policy
· Competency development training programs.
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are looking for an Engineering Manager who combines technical depth with leadership strength. This role involves leading one or more product engineering pods, driving architecture decisions, ensuring delivery excellence, and working closely with stakeholders to build scalable web and mobile technology solutions. As a key part of our leadership team, you’ll play a pivotal role in mentoring engineers, improving processes, and fostering a culture of ownership, innovation, and continuous learning.
Roles and Responsibilities:
● Team Management: Lead, coach, and grow a team of 15-20 software engineers, tech leads, and QA engineers
● Technical Leadership: Guide the team in building scalable, high-performance web and mobile applications using modern frameworks and technologies
● Architecture Ownership: Architect robust, secure, and maintainable technology solutions aligned with product goals
● Project Execution: Ensure timely and high-quality delivery of projects by driving engineering best practices, agile processes, and cross-functional collaboration
● Stakeholder Collaboration: Act as a bridge between business stakeholders, product managers, and engineering teams to translate requirements into technology plans
● Culture & Growth: Help build and nurture a culture of technical excellence, accountability, and continuous improvement
● Hiring & Onboarding: Contribute to recruitment efforts, onboarding, and learning & development of team members.
Requirements:
● 8+ years of software development experience, with 2+ years in a technical leadership or engineering manager role
● Proven experience in architecting and building web and mobile applications at scale
● Hands-on knowledge of technologies such as JavaScript/TypeScript, Angular, React, Node.js, .NET, Java, Python, or similar stacks
● Solid understanding of cloud platforms (AWS/Azure/GCP) and DevOps practices
● Strong interpersonal skills with a proven ability to manage stakeholders and lead diverse teams
● Excellent problem-solving, communication, and organizational skills
● Nice to have:
- Prior experience in working with startups or product-based companies
- Experience mentoring tech leads and helping shape engineering culture
- Exposure to AI/ML, data engineering, or platform thinking
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture.
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
Customer currently uses ELK stack, and the goal is to standardize and modernize logs, metrics, and traces using OpenTelemetry, while improving visibility, reliability, and operational intelligence.
Observability Architecture & Modernization
· Assess the existing ELK-based observability setup and define a modern observability architecture
· Design and implement standardized logging, metrics, and distributed tracing using OpenTelemetry
· Define observability best practices for cloud-native and Azure-based applications
· Ensure consistent telemetry collection across microservices, APIs, and infrastructure
Logging, Metrics & Tracing
· Instrument applications using OpenTelemetry SDKs (SpringBoot, .NET, Python, Javascript – as applicable)
· Support Kubernetes and container-based workloads (if applicable)
· Configure and optimize log pipelines, trace exporters, and metric collectors
· Integrate OpenTelemetry with ELK / OpenSearch / Azure Monitor / other backends
· Define SLIs, SLOs, and alerting strategies
· Knowledge of integrating GitHub and Jira metrics as DORA metrics into observability.
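A standardized OpenTelemetry pipeline of the shape described above is typically expressed as a Collector config. A hedged sketch only: the Elasticsearch endpoint is a placeholder, and the `elasticsearch` exporter assumes the contrib Collector distribution:

```yaml
# otel-collector config sketch: receive OTLP, batch, export to an ELK backend
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  elasticsearch:
    endpoints: ["https://elastic.example.internal:9200"]   # placeholder endpoint

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [elasticsearch]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [elasticsearch]
```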
Operational Excellence
· Improve observability performance, cost efficiency, and data retention strategies
· Create dashboards, runbooks, and documentation
AI-based Anomaly Detection & Triage (Good to Have)
· Design or integrate AI/ML-based anomaly detection for logs, metrics, and traces
· Experience with AIOps capabilities for automated incident triage and insights
Required Technical Skills
Core Observability
· Strong hands-on experience with ELK Stack (Elasticsearch, Logstash, Kibana)
· Deep understanding of logs, metrics, traces, and distributed systems
· Practical experience with OpenTelemetry (Collectors, SDKs, exporters, receivers)
Cloud & Platforms
· Strong experience integrating Microsoft Azure with the observability platform
· Experience integrating Kubernetes / AKS with the observability platform
· Knowledge of Azure monitoring tools (Azure Monitor, Log Analytics, Application Insights)
Soft Skills
· Strong architecture and problem-solving skills
· Clear communication and documentation skills
· Hands-on mindset with an architect-level view
Good to Have / Preferred Skills
· Experience with AIOps / anomaly detection platforms
· Exposure to tools like Prometheus, Grafana, Jaeger, OpenSearch, Datadog, Dynatrace, New Relic (any)
· Experience with incident management, SRE practices, and reliability engineering
Proper Job Description
This is a Senior CSM ServiceNow Implementer position requiring a technical background with experience in Customer Service Management (CSM). Applicants should have 7–10 years of IT experience, including at least 5 years in ServiceNow development and a minimum of 2 years focused on CSM implementations. This role involves delivering enterprise-level solutions within an Agile framework.
Responsibilities include configuring and customizing features within the ServiceNow CSM module, such as case management, transform maps, access control lists (ACLs), Agent Workspace, Flow Designer, business rules, UI policies, UI scripts, and scheduled jobs.
The role also requires designing and implementing integrations with internal and external systems using REST and SOAP APIs. Experience working with IntegrationHub and MID Server is essential.
The candidate will develop scalable, maintainable solutions following ServiceNow best practices and organizational DevOps guidelines, utilizing up-to-date platform features to maintain code quality.
Collaboration with architects, product owners, and QA teams is expected throughout the delivery cycle. The candidate may also provide technical guidance and mentorship to junior developers to support consistency and standards adherence.
Experience
o 7–10 years of overall IT/software development experience
o Minimum 5 years of ServiceNow development experience
o At least 2 years of experience delivering CSM implementation projects
Technical Skills
o Configuring and customizing features within the ServiceNow CSM module
o Transform maps, access control lists (ACLs), Agent Workspace, Flow Designer, business rules, UI policies, UI scripts, and scheduled jobs.
o Strong expertise in building and consuming REST and SOAP APIs
o Experience working with Integration Hub and MID Server
o Deep understanding of ServiceNow data models, including CMDB, case tables, and SLAs
o Familiar with ServiceNow Agent Workspace, Virtual Agent, and Configurable Workflows
Certifications (Mandatory)
o Certified System Administrator (CSA)
o Certified Implementation Specialist – CSM
Soft Skills
o Strong analytical and problem-solving skills
o Effective communication and stakeholder engagement abilities
o Ability to work independently and deliver in a fast-paced environment
Location: PAN India (any CG location; flexible)
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are looking for a skilled and proactive DevOps Engineer to join our growing engineering team. The ideal candidate will have hands-on experience in building, automating, and managing scalable infrastructure and CI/CD pipelines. You will work closely with development, QA, and product teams to ensure reliable deployments, performance, and system security.
Roles and Responsibilities:
● Design, implement, and manage CI/CD pipelines for multiple environments
● Automate infrastructure provisioning using Infrastructure as Code tools
● Manage and optimize cloud infrastructure on AWS, Azure, or GCP
● Monitor system performance, availability, and security
● Implement logging, monitoring, and alerting solutions
● Collaborate with development teams to streamline release processes
● Troubleshoot production issues and ensure high availability
● Implement containerization and orchestration solutions such as Docker and Kubernetes
● Enforce DevOps best practices across the engineering lifecycle
● Ensure security compliance and data protection standards are maintained
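For the containerization and orchestration item above, a minimal Kubernetes Deployment sketch; the image, port, probe path, and resource values are placeholders:

```yaml
# deployment.yaml - Deployment sketch with a readiness probe and resource limits
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet: { path: /healthz, port: 8080 }
          resources:
            requests: { cpu: 100m, memory: 128Mi }
            limits: { cpu: 500m, memory: 256Mi }
```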
Requirements:
● 4 to 7 years of experience in DevOps or Site Reliability Engineering
● Strong experience with cloud platforms such as AWS, Azure, or GCP (relevant certifications are a strong advantage)
● Hands-on experience with CI/CD tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps
● Experience working in microservices architecture
● Exposure to DevSecOps practices
● Experience in cost optimization and performance tuning in cloud environments
● Experience with Infrastructure as Code tools such as Terraform, CloudFormation, or ARM
● Strong knowledge of containerization using Docker
● Experience with Kubernetes in production environments
● Good understanding of Linux systems and shell scripting
● Experience with monitoring tools such as Prometheus, Grafana, ELK, or Datadog
● Strong troubleshooting and debugging skills
● Understanding of networking concepts and security best practices
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
JOB DETAILS:
* Job Title: Java Lead – Java (Core & Enterprise), Spring/Micronaut, Kafka – Trivandrum
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 9 to 12 years
* Location: Trivandrum, Thiruvananthapuram
Job Description
Experience
- 9+ years of experience in Java-based backend application development
- Proven experience building and maintaining enterprise-grade, scalable applications
- Hands-on experience working with microservices and event-driven architectures
- Experience working in Agile and DevOps-driven development environments
Mandatory Skills
- Advanced proficiency in core Java and enterprise Java concepts
- Strong hands-on experience with Spring Framework and/or Micronaut for building scalable backend applications
- Strong expertise in SQL, including database design, query optimization, and performance tuning
- Hands-on experience with PostgreSQL or other relational database management systems
- Strong experience with Kafka or similar event-driven messaging and streaming platforms
- Practical knowledge of CI/CD pipelines using GitLab
- Experience with Jenkins for build automation and deployment processes
- Strong understanding of GitLab for source code management and DevOps workflows
Responsibilities
- Design, develop, and maintain robust, scalable, and high-performance backend solutions
- Develop and deploy microservices using Spring or Micronaut frameworks
- Implement and integrate event-driven systems using Kafka
- Optimize SQL queries and manage PostgreSQL databases for performance and reliability
- Build, implement, and maintain CI/CD pipelines using GitLab and Jenkins
- Collaborate with cross-functional teams including product, QA, and DevOps to deliver high-quality software solutions
- Ensure code quality through best practices, reviews, and automated testing
Good-to-Have Skills
- Strong problem-solving and analytical abilities
- Experience working with Agile development methodologies such as Scrum or Kanban
- Exposure to cloud platforms such as AWS, Azure, or GCP
- Familiarity with containerization and orchestration tools such as Docker or Kubernetes
Skills: Java, Spring Boot, Kafka, CI/CD, PostgreSQL, GitLab
Must-Haves
Java Backend (9+ years), Spring Framework/Micronaut, SQL/PostgreSQL, Kafka, CI/CD (GitLab/Jenkins)
Advanced proficiency in core Java and enterprise Java concepts
Strong hands-on experience with Spring Framework and/or Micronaut for building scalable backend applications
Strong expertise in SQL, including database design, query optimization, and performance tuning
Hands-on experience with PostgreSQL or other relational database management systems
Strong experience with Kafka or similar event-driven messaging and streaming platforms
Practical knowledge of CI/CD pipelines using GitLab
Experience with Jenkins for build automation and deployment processes
Strong understanding of GitLab for source code management and DevOps workflows
*******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: only Trivandrum
F2F Interview on 21st Feb 2026
About the role:
We are looking for a Staff Site Reliability Engineer who can operate at a staff level across multiple teams and clients. If you care about designing reliable platforms, influencing system architecture, and raising reliability standards across teams, you’ll enjoy working at One2N.
At One2N, you will work with our startups and enterprise clients, solving One-to-N scale problems where the proof of concept is already established and the focus is on scalability, maintainability, and long-term reliability. In this role, you will drive reliability, observability, and infrastructure architecture across systems, influencing design decisions, defining best practices, and guiding teams to build resilient, production-grade systems.
Key responsibilities:
- Own and drive reliability and infrastructure strategy across multiple products or client engagements
- Design and evolve platform engineering and self-serve infrastructure patterns used by product engineering teams
- Lead architecture discussions around observability, scalability, availability, and cost efficiency.
- Define and standardize monitoring, alerting, SLOs/SLIs, and incident management practices.
- Build and review production-grade CI/CD and IaC systems used across teams
- Act as an escalation point for complex production issues and incident retrospectives.
- Partner closely with engineering leads, product teams, and clients to influence system design decisions early.
- Mentor junior engineers through design reviews, technical guidance, and best practices.
- Improve Developer Experience (DX) by reducing cognitive load, toil, and operational friction.
- Help teams mature their on-call processes, reliability culture, and operational ownership.
- Stay ahead of trends in cloud-native infrastructure, observability, and platform engineering, and bring relevant ideas into practice
About you:
- 9+ years of experience in SRE, DevOps, or software engineering roles
- Strong experience designing and operating Kubernetes-based systems on AWS at scale
- Deep hands-on expertise in observability and telemetry, including tools like OpenTelemetry, Datadog, Grafana, Prometheus, ELK, Honeycomb, or similar.
- Proven experience with infrastructure as code (Terraform, Pulumi) and cloud architecture design.
- Strong understanding of distributed systems, microservices, and containerized workloads.
- Ability to write and review production-quality code (Golang, Python, Java, or similar)
- Solid Linux fundamentals and experience debugging complex system-level issues
- Experience driving cross-team technical initiatives.
- Excellent analytical and problem-solving skills, keen attention to detail, and a passion for continuous improvement.
- Strong written, communication, and collaboration skills, with the ability to work effectively in a fast-paced, agile environment.
Nice to have:
- Experience working in consulting or multi-client environments.
- Exposure to cost optimization, or large-scale AWS account management
- Experience building internal platforms or shared infrastructure used by multiple teams.
- Prior experience influencing or defining engineering standards across organizations.
Engineering Manager – FOY (FoyForYou.com)
Function: Software Engineering → Leadership, Product Engineering, Architecture
Skills: Team Leadership, System Design, Mobile + Web Architecture, DevOps, CI/CD, Backend (Node JS), Cloud (AWS/GCP), Data-driven Decision Making
About FOY
FOY (FoyForYou.com) is one of India’s fastest-growing beauty & wellness destinations. We offer customers a curated selection of 100% authentic products, trusted brands, and a seamless shopping experience. Our mission is to make beauty effortless, personal, and accessible for every Indian.
As we scale aggressively across mobile, web, logistics, personalization, and omnichannel commerce, our engineering team is the core engine behind FOY’s growth. We’re looking for a technical leader who is excited to build the future of beauty commerce.
Job Description
We’re hiring an Engineering Manager (7–12 years) who can lead a high-performing engineering team, drive product delivery, architect scalable systems, and cultivate a culture of excellence.
This role is ideal for someone who has been a strong individual contributor and is now passionate about leading people, driving execution, and shaping FOY’s technology roadmap.
Responsibilities
1. People & Team Leadership
- Lead, mentor, and grow a team of backend, frontend, mobile, and DevOps engineers.
- Drive high performance through code quality, engineering discipline, and structured reviews.
- Build a culture of ownership, speed, and continuous improvement.
- Hire top talent and help scale the engineering org.
2. Technical Leadership & Architecture
- Work with product, design, data, and business to define the technical direction for FOY.
- Architect robust and scalable systems for:
- Mobile commerce
- High-scale APIs
- Personalization & recommendations
- Product/catalog systems
- Payments, checkout, and logistics
- Ensure security, reliability, and high availability across all systems.
3. Execution & Delivery
- Own end-to-end delivery of key product features and platform capabilities.
- Drive sprint planning, execution, tracking, and timely delivery.
- Build and improve engineering processes—CI/CD pipelines, testing automation, release cycles.
- Reduce tech debt and ensure long-term maintainability.
4. Quality, Performance & Observability
- Drive standards across:
- Code quality
- System performance
- Monitoring & alerting
- Incident management
- Work closely with QA, mobile, backend, and data teams to ensure smooth releases.
5. Cross-Functional Collaboration
- Partner with product managers to translate business goals into technical outcomes.
- Work with designers, marketing, operations, and supply chain to deliver user-centric solutions.
- Balance trade-offs between speed and engineering excellence.
Requirements
- 7–12 years of hands-on engineering experience with at least 2–4 years in a leadership or tech lead role.
- Strong experience in backend development (Node/Python preferred).
- Good exposure to mobile (native Android/iOS) or web engineering.
- Strong understanding of system design, APIs, microservices, caching, databases, and cloud infrastructure (AWS/GCP).
- Experience running Agile/Scrum teams with measurable delivery outputs.
- Ability to dive deep into code when required while enabling the team to succeed independently.
- Strong communication and stakeholder management skills.
- Passion for building user-focused products and solving meaningful problems.
Bonus Points
- Experience in e-commerce, high-scale consumer apps, or marketplaces.
- Prior experience in a fast-growing startup environment.
- Strong DevOps/Cloud knowledge (CI/CD, Kubernetes, SRE principles).
- Data-driven approach to engineering decisions.
- A GitHub/portfolio showcasing real projects or open-source contributions.
Why Build Your Engineering Career at FOY?
At FOY, engineering drives business. We move fast, build big, and aim high.
We look for:
1. Rockstar Team Players
Your leadership directly impacts product, growth, and customer experience.
2. Owners With Passion
You’ll be trusted with large problem spaces and will operate with autonomy and accountability.
3. Big Dreamers
We’re scaling quickly. If you dream big and thrive in ambitious environments, you’ll feel at home at FOY.
Join Us
If building and scaling world-class engineering teams excites you, we’d love to meet you.
Apply now and help FOY redefine beauty commerce in India.
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
- Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
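The large-file preparation step above can be sketched in plain Python. This is a minimal illustration under stated assumptions, not part of any official tooling: `find_large_files` is a hypothetical helper that flags files over GitHub's 100 MB per-file limit so they can be handed to `git lfs track` before the import runs.

```python
import os

# GitHub rejects individual files larger than 100 MB unless they are
# tracked via Git LFS, so flag them before running the migration.
GITHUB_LIMIT = 100 * 1024 * 1024

def find_large_files(root, limit=GITHUB_LIMIT):
    """Walk a working tree and return (path, size) pairs above `limit`,
    largest first."""
    offenders = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune VCS metadata directories; they are not migrated content.
        dirnames[:] = [d for d in dirnames if d not in (".git", ".p4root")]
        for name in filenames:
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            if size > limit:
                offenders.append((path, size))
    return sorted(offenders, key=lambda pair: pair[1], reverse=True)
```

Each flagged path would then get a `git lfs track` pattern before the history import, so the migrated repository stays within GitHub's limits.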
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: Github, Kubernetes, Perforce, Perforce (Helix Core), Devops Tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
JOB DETAILS:
* Job Title: Lead I - Azure, Terraform, GitLab CI
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 3-5 years
* Location: Trivandrum/Pune
Job Description
Job Title: DevOps Engineer
Experience: 4–8 Years
Location: Trivandrum & Pune
Job Type: Full-Time
Mandatory skills: Azure, Terraform, GitLab CI, Splunk
Job Description
We are looking for an experienced and driven DevOps Engineer with 4 to 8 years of experience to join our team in Trivandrum or Pune. The ideal candidate will take ownership of automating cloud infrastructure, maintaining CI/CD pipelines, and implementing monitoring solutions to support scalable and reliable software delivery in a cloud-first environment.
Key Responsibilities
- Design, manage, and automate Azure cloud infrastructure using Terraform.
- Develop scalable, reusable, and version-controlled Infrastructure as Code (IaC) modules.
- Implement monitoring and logging solutions using Splunk, Azure Monitor, and Dynatrace.
- Build and maintain secure and efficient CI/CD pipelines using GitLab CI or Harness.
- Collaborate with cross-functional teams to enable smooth deployment workflows and infrastructure updates.
- Analyze system logs and performance metrics to troubleshoot and optimize performance.
- Ensure infrastructure security, compliance, and scalability best practices are followed.
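The log-analysis responsibility above can be illustrated with a small Python sketch. The `service level message` line format is an assumption for demonstration only, not an actual Splunk export layout; real events would be parsed from their configured field extractions.

```python
from collections import defaultdict

def error_rates(log_lines):
    """Compute a per-service error rate from 'service level message' lines.

    The line format here is assumed for illustration; malformed lines are
    skipped rather than failing the whole report.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for line in log_lines:
        parts = line.split(maxsplit=2)
        if len(parts) < 2:
            continue  # skip malformed lines
        service, level = parts[0], parts[1].upper()
        totals[service] += 1
        if level == "ERROR":
            errors[service] += 1
    return {svc: errors[svc] / totals[svc] for svc in totals}
```

A summary like this is the kind of signal that would feed alert thresholds in Splunk or Azure Monitor.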
Mandatory Skills
Candidates must have hands-on experience with the following technologies:
- Azure – Cloud infrastructure management and deployment
- Terraform – Infrastructure as Code for scalable provisioning
- GitLab CI – Pipeline development, automation, and integration
- Splunk – Monitoring, logging, and troubleshooting production systems
Preferred Skills
- Experience with Harness (for CI/CD)
- Familiarity with Azure Monitor and Dynatrace
- Scripting proficiency in Python, Bash, or PowerShell
- Understanding of DevOps best practices, containerization, and microservices architecture
- Exposure to Agile and collaborative development environments
Skills Summary
Mandatory: Azure, Terraform, GitLab CI, Splunk. Additional: Harness, Azure Monitor, Dynatrace, Python, Bash, PowerShell
Skills: Azure, Splunk, Terraform, Gitlab Ci
******
Notice period - 0 to 15days only
Job stability is mandatory
Location: Trivandrum/Pune
JOB DETAILS:
* Job Title: Head of Engineering/Senior Product Manager
* Industry: Digital transformation excellence provider
* Salary: Best in Industry
* Experience: 12-20 years
* Location: Mumbai
Job Description
Role Overview
The VP / Head of Technology will lead the company’s technology function across engineering, product development, cloud infrastructure, security, and AI-led initiatives. This role focuses on delivering scalable, high-quality technology solutions across the company’s core verticals, including eCommerce, Procurement & e-Sourcing, ERP integrations, Sustainability/ESG, and Business Services.
This leader will drive execution, ensure technical excellence, modernize platforms, and collaborate closely with business and delivery teams.
Roles and Responsibilities:
Technology Execution & Architecture Leadership
· Own and execute the technology roadmap aligned with business goals.
· Build and maintain scalable architecture supporting multiple verticals.
· Enforce engineering best practices, code quality, performance, and security.
· Lead platform modernization including microservices, cloud-native architecture, API-first systems, and integration frameworks.
Product & Engineering Delivery
· Manage multi-product engineering teams across eCommerce platforms, procurement systems, ERP integrations, analytics, and ESG solutions.
· Own the full SDLC — requirements, design, development, testing, deployment, support.
· Implement Agile, DevOps, CI/CD for faster releases and improved reliability.
· Oversee product/platform interoperability across all company systems.
Vertical-Specific Technology Leadership
Procurement Tech:
· Lead architecture and enhancements of procurement and indirect spend platforms.
· Ensure interoperability with SAP Ariba, Coupa, Oracle, MS Dynamics, etc.
eCommerce:
· Drive development of scalable B2B/B2C commerce platforms, headless commerce, marketplace integrations, and personalization capabilities.
Sustainability/ESG:
· Support development of GHG tracking, reporting systems, and sustainability analytics platforms.
Business Services:
· Enhance operational platforms with automation, workflow management, dashboards, and AI-driven efficiency tools.
Data, Cloud, Security & Infrastructure
· Own cloud infrastructure strategy (Azure/AWS/GCP).
· Ensure adherence to compliance standards (SOC2, ISO 27001, GDPR).
· Lead cybersecurity policies, monitoring, threat detection, and recovery planning.
· Drive observability, cost optimization, and system scalability.
AI, Automation & Innovation
· Integrate AI/ML, analytics, and automation into product platforms and service delivery.
· Build frameworks for workflow automation, supplier analytics, personalization, and operational efficiency.
· Lead R&D for emerging tech aligned to business needs.
Leadership & Team Management
· Lead and mentor engineering managers, architects, developers, QA, and DevOps.
· Drive a culture of ownership, innovation, continuous learning, and performance accountability.
· Build capability development frameworks and internal talent pipelines.
Stakeholder Collaboration
· Partner with Sales, Delivery, Product, and Business Teams to align technology outcomes with customer needs.
· Ensure transparent reporting on project status, risks, and technology KPIs.
· Manage vendor relationships, technology partnerships, and external consultants.
Education, Training, Skills, and Experience Requirements:
Experience & Background
· 16+ years in technology execution roles, including 5–7 years in senior leadership.
· Strong background in multi-product engineering for B2B platforms or enterprise systems.
· Proven delivery experience across: eCommerce, ERP integrations, procurement platforms, ESG solutions, and automation.
Technical Skills
· Expertise in cloud platforms (Azure/AWS/GCP), microservices architecture, API frameworks.
· Strong grasp of procurement tech, ERP integrations, eCommerce platforms, and enterprise-scale systems.
· Hands-on exposure to AI/ML, automation tools, data engineering, and analytics stacks.
· Strong understanding of security, compliance, scalability, performance engineering.
Leadership Competencies
· Execution-focused technology leadership.
· Strong communication and stakeholder management skills.
· Ability to lead distributed teams, manage complexity, and drive measurable outcomes.
· Innovation mindset with practical implementation capability.
Education
· Bachelor’s or Master’s in Computer Science/Engineering or equivalent.
· Additional leadership education (MBA or similar) is a plus, not mandatory.
Travel Requirements
· Occasional travel for client meetings, technology reviews, or global delivery coordination.
Must-Haves
· 10+ years of technology experience, with at least 6 years leading large (50-100+ person) multi-product engineering teams.
· Must have worked on B2B platforms, with experience in Procurement Tech or Supply Chain.
· Min. 10+ Years of Expertise in Cloud-Native Architecture, Expert-level design in Azure, AWS, or GCP using Microservices, Kubernetes (K8s), and Docker.
· Min. 8+ Years of Expertise in Modern Engineering Practices, Advanced DevOps, CI/CD pipelines, and automated testing frameworks (Selenium, Cypress, etc.).
· Hands-on leadership experience in Security & Compliance.
· Min. 3+ Years of Expertise in AI & Data Engineering, Practical implementation of LLMs, Predictive Analytics, or AI-driven automation
· Strong technology execution leadership, with ownership of end-to-end technology roadmaps aligned to business outcomes.
· Min. 6+ Years of Expertise in B2B eCommerce: architecture of headless commerce, marketplace integrations, and complex B2B catalog management.
· Strong product management exposure
· Proven experience in leading end-to-end team operations
· Relevant experience in product-driven organizations or platforms
· Strong Subject Matter Expertise (SME)
Education: Master’s degree.
**************
Joining time / Notice Period: Immediate to 45 days.
Location: Andheri
5 days working (3 days from office, 2 days remote)
Role & Responsibilities
We are looking for a hands-on AWS Cloud Engineer to support day-to-day cloud operations, automation, and reliability of AWS environments. This role works closely with the Cloud Operations Lead, DevOps, Security, and Application teams to ensure stable, secure, and cost-effective cloud platforms.
Key Responsibilities-
- Operate and support AWS production environments across multiple accounts
- Manage infrastructure using Terraform and support CI/CD pipelines
- Support Amazon EKS clusters, upgrades, scaling, and troubleshooting
- Build and manage Docker images and push to Amazon ECR
- Monitor systems using CloudWatch and third-party tools; respond to incidents
- Support AWS networking (VPCs, NAT, Transit Gateway, VPN/DX)
- Assist with cost optimization, tagging, and governance standards
- Automate operational tasks using Python, Lambda, and Systems Manager
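The tagging-and-governance automation above can be sketched as a small Lambda handler. The event shape (`{"resources": [{"arn": ..., "tags": {...}}]}`) and the required-tag set are hypothetical choices for illustration; a real deployment would read tags through the AWS Resource Groups Tagging API.

```python
# Assumed mandatory governance tags -- illustrative, not a prescribed policy.
REQUIRED_TAGS = {"Owner", "CostCenter", "Environment"}

def lambda_handler(event, context):
    """Report resources missing mandatory governance tags.

    The event payload shape here is hypothetical; in production the
    resource/tag data would come from the Resource Groups Tagging API.
    """
    violations = []
    for resource in event.get("resources", []):
        missing = REQUIRED_TAGS - set(resource.get("tags", {}))
        if missing:
            violations.append({"arn": resource["arn"],
                               "missing": sorted(missing)})
    return {"compliant": not violations, "violations": violations}
```

The returned report could drive an SNS notification or a remediation runbook in Systems Manager.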
Ideal Candidate
- Strong hands-on AWS experience (EC2, VPC, IAM, S3, ALB, CloudWatch)
- Experience with Terraform and Git-based workflows
- Hands-on experience with Kubernetes / EKS
- Experience with CI/CD tools (GitHub Actions, Jenkins, etc.)
- Scripting experience in Python or Bash
- Understanding of monitoring, incident management, and cloud security basics
Nice to Have-
- AWS Associate-level certifications
- Experience with Karpenter, Prometheus, New Relic
- Exposure to FinOps and cost optimization practices
Summary
We are seeking a skilled and experienced Business Analyst cum Project Manager to lead the delivery of digital solutions across diverse projects. This hybrid role requires a professional who can effectively bridge business needs with technical execution, manage project timelines, and ensure successful outcomes through structured methodologies.
Key Responsibilities
- Lead end-to-end project delivery, from initiation to closure, ensuring alignment with business goals.
- Gather, analyze, and document business requirements and translate them into clear user stories and functional specifications.
- Manage project plans, timelines, risks, and deliverables using Agile, Waterfall, or hybrid methodologies.
- Facilitate stakeholder meetings, workshops, sprint planning, and retrospectives.
- Collaborate with cross-functional teams including developers, testers, and business stakeholders.
- Conduct user acceptance testing (UAT) and ensure solutions meet business expectations.
- Coordinate and support testing activities, including functional, integration, and compliance-related testing.
- Maintain comprehensive documentation including business process models, user guides, and project reports.
- Monitor project progress and provide regular updates to stakeholders.
Required Skills & Experience
- Minimum 5 years of experience in a combined Business Analyst and Project Manager role.
- Strong understanding of SDLC, Agile, Waterfall, and hybrid project delivery frameworks.
- Proficiency in tools such as Jira, Azure DevOps, Microsoft Power Platform, Visio, PowerPoint, and Word.
- Experience with UML modeling (Use Case, Sequence, Activity, and Class diagrams).
- Hands-on experience in testing, including ISO standards–related projects and other compliance or regulatory-driven initiatives.
- Strong analytical, problem-solving, and conceptual thinking skills.
- Excellent communication and stakeholder management abilities.
- Ability to manage multiple projects and priorities in a fast-paced environment.
- Familiarity with UX principles and customer journey mapping concepts.
Preferred Qualifications
- Bachelor’s degree in Business, Information Systems, Computer Science, or a related field.
- Experience working with remote and cross-functional teams.
- Exposure to data-driven decision-making and digital product development.
- Prior involvement in quality assurance, audit preparation, or ISO certification support projects is an added advantage.
Note
This is an immediate requirement. The position will be initially offered as a contract role and is expected to be converted to a permanent position based on performance and business requirements.
Job Details
- Job Title: SDE-3
- Industry: Technology
- Domain - Information technology (IT)
- Experience Required: 5-8 years
- Employment Type: Full Time
- Job Location: Bengaluru
- CTC Range: Best in Industry
Role & Responsibilities
As a Software Development Engineer - 3, Backend Engineer at company, you will play a critical role in architecting, designing, and delivering robust backend systems that power our platform. You will lead by example, driving technical excellence and mentoring peers while solving complex engineering problems. This position offers the opportunity to work with a highly motivated team in a fast-paced and innovative environment.
Key Responsibilities:
Technical Leadership-
- Design and develop highly scalable, fault-tolerant, and maintainable backend systems using Java and related frameworks.
- Provide technical guidance and mentorship to junior developers, fostering a culture of learning and growth.
- Review code and ensure adherence to best practices, coding standards, and security guidelines.
System Architecture and Design-
- Collaborate with cross-functional teams, including product managers and frontend engineers, to translate business requirements into efficient technical solutions.
- Own the architecture of core modules and contribute to overall platform scalability and reliability.
- Advocate for and implement microservices architecture, ensuring modularity and reusability.
Problem Solving and Optimization-
- Analyze and resolve complex system issues, ensuring high availability and performance of the platform.
- Optimize database queries and design scalable data storage solutions.
- Implement robust logging, monitoring, and alerting systems to proactively identify and mitigate issues.
Innovation and Continuous Improvement-
- Stay updated on emerging backend technologies and incorporate relevant advancements into our systems.
- Identify and drive initiatives to improve codebase quality, deployment processes, and team productivity.
- Contribute to and advocate for a DevOps culture, supporting CI/CD pipelines and automated testing.
Collaboration and Communication-
- Act as a liaison between the backend team and other technical and non-technical teams, ensuring smooth communication and alignment.
- Document system designs, APIs, and workflows to maintain clarity and knowledge transfer across the team.
Ideal Candidate
- Strong Java Backend Engineer.
- Must have 5+ years of backend development with strong focus on Java (Spring / Spring Boot)
- Must have been SDE-2 for at least 2.5 years
- Hands-on experience with RESTful APIs and microservices architecture
- Strong understanding of distributed systems, multithreading, and async programming
- Experience with relational and NoSQL databases
- Exposure to Kafka/RabbitMQ and Redis/Memcached
- Experience with AWS / GCP / Azure, Docker, and Kubernetes
- Familiar with CI/CD pipelines and modern DevOps practices
- Background in product companies (B2B SaaS preferred)
- Have stayed at least 2 years with each previous company
- Education: B.Tech in Computer Science from Tier 1 or Tier 2 colleges
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience in AWS cloud, including in recent roles
- (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth
Preferred:
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria:
- CV Attachment is mandatory
- Please provide CTC Breakup (Fixed + Variable)?
- Are you okay for F2F round?
- Has the candidate filled the Google form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
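The data-drift monitoring responsibility above can be sketched with a population stability index (PSI) in plain Python. The bucketing scheme, the 1e-6 floor for empty buckets, and the 0.2 alert threshold are illustrative conventions, not a prescribed implementation; production systems would typically compute this per feature inside the Airflow DAG and emit the value to CloudWatch or Prometheus.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline (training) sample and a production sample.

    Equal-width bucketing over the combined range is an illustrative
    choice; as a rough convention, PSI > 0.2 is often treated as
    significant drift.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # Floor empty buckets so the log ratio stays defined.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

An identical pair of samples scores near zero; a shifted production distribution pushes the index well past the conventional alert threshold.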
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
JOB DETAILS:
* Job Title: DevOps Engineer (Azure)
* Industry: Technology
* Salary: Best in Industry
* Experience: 2-5 years
* Location: Bengaluru, Koramangala
Review Criteria
- Strong Azure DevOps Engineer Profiles.
- Must have minimum 2+ years of hands-on experience as an Azure DevOps Engineer with strong exposure to Azure DevOps Services (Repos, Pipelines, Boards, Artifacts).
- Must have strong experience in designing and maintaining YAML-based CI/CD pipelines, including end-to-end automation of build, test, and deployment workflows.
- Must have hands-on scripting and automation experience using Bash, Python, and/or PowerShell
- Must have working knowledge of databases such as Microsoft SQL Server, PostgreSQL, or Oracle Database
- Must have experience with monitoring, alerting, and incident management using tools like Grafana, Prometheus, Datadog, or CloudWatch, including troubleshooting and root cause analysis
Preferred
- Knowledge of containerisation and orchestration tools such as Docker and Kubernetes.
- Knowledge of Infrastructure as Code and configuration management tools such as Terraform and Ansible.
- Preferred (Education) – BE/BTech / ME/MTech in Computer Science or related discipline
Role & Responsibilities
- Build and maintain Azure DevOps YAML-based CI/CD pipelines for build, test, and deployments.
- Manage Azure DevOps Repos, Pipelines, Boards, and Artifacts.
- Implement Git branching strategies and automate release workflows.
- Develop scripts using Bash, Python, or PowerShell for DevOps automation.
- Monitor systems using Grafana, Prometheus, Datadog, or CloudWatch and handle incidents.
- Collaborate with dev and QA teams in an Agile/Scrum environment.
- Maintain documentation, runbooks, and participate in root cause analysis.
Ideal Candidate
- 2–5 years of experience as an Azure DevOps Engineer.
- Strong hands-on experience with Azure DevOps CI/CD (YAML) and Git.
- Experience with Microsoft Azure (OCI/AWS exposure is a plus).
- Working knowledge of SQL Server, PostgreSQL, or Oracle.
- Good scripting, troubleshooting, and communication skills.
- Bonus: Docker, Kubernetes, Terraform, Ansible experience.
- Comfortable with WFO (Koramangala, Bangalore).
JOB DETAILS:
* Job Title: Associate III - Azure Data Engineer
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4 -6 years
* Location: Trivandrum, Kochi
Job Description: Azure Data Engineer (4–6 Years Experience)
Job Type: Full-time
Locations: Kochi, Trivandrum
Must-Have Skills
Azure & Data Engineering
- Azure Data Factory (ADF)
- Azure Databricks (PySpark)
- Azure Synapse Analytics
- Azure Data Lake Storage Gen2
- Azure SQL Database
Programming & Querying
- Python (PySpark)
- SQL / Spark SQL
Data Modelling
- Star & Snowflake schema
- Dimensional modelling
Source Systems
- SQL Server
- Oracle
- SAP
- REST APIs
- Flat files (CSV, JSON, XML)
CI/CD & Version Control
- Git
- Azure DevOps / GitHub Actions
Monitoring & Scheduling
- ADF triggers
- Databricks jobs
- Log Analytics
Security
- Managed Identity
- Azure Key Vault
- Azure RBAC / Access Control
Soft Skills
- Strong analytical & problem-solving skills
- Good communication and collaboration
- Ability to work in Agile/Scrum environments
- Self-driven and proactive
Good-to-Have Skills
- Power BI basics
- Delta Live Tables
- Synapse Pipelines
- Real-time processing (Event Hub / Stream Analytics)
- Infrastructure as Code (Terraform / ARM templates)
- Data governance tools like Azure Purview
- Azure Data Engineer Associate (DP-203) certification
Educational Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
Skills: Azure Data Factory, Azure Databricks, Azure Synapse, Azure Data Lake Storage
Must-Haves
Azure Data Factory (4-6 years), Azure Databricks/PySpark (4-6 years), Azure Synapse Analytics (4-6 years), SQL/Spark SQL (4-6 years), Git/Azure DevOps (4-6 years)
Skills: Azure, Azure data factory, Python, Pyspark, Sql, Rest Api, Azure Devops
Relevant experience: 4-6 years
Python is mandatory
******
Notice period - 0 to 15 days only (Feb joiners’ profiles only)
Location: Kochi
F2F Interview 7th Feb
JOB DETAILS:
* Job Title: Lead I - Web API, C# .NET, .NET Core, AWS (Mandatory)
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 6 -9 years
* Location: Hyderabad
Job Description
Role Overview
We are looking for a highly skilled Senior .NET Developer who has strong experience in building scalable, high‑performance backend services using .NET Core and C#, with hands‑on expertise in AWS cloud services. The ideal candidate should be capable of working in an Agile environment, collaborating with cross‑functional teams, and contributing to both design and development. Experience with React and Datadog monitoring tools will be an added advantage.
Key Responsibilities
- Design, develop, and maintain backend services and APIs using .NET Core and C#.
- Work with AWS services (Lambda, S3, ECS/EKS, API Gateway, RDS, etc.) to build cloud‑native applications.
- Collaborate with architects and senior engineers on solution design and implementation.
- Write clean, scalable, and well‑documented code.
- Use Postman to build and test RESTful APIs.
- Participate in code reviews and provide technical guidance to junior developers.
- Troubleshoot and optimize application performance.
- Work closely with QA, DevOps, and Product teams in an Agile setup.
- (Optional) Contribute to frontend development using React.
- (Optional) Use Datadog for monitoring, logging, and performance metrics.
Required Skills & Experience
- 6+ years of experience in backend development.
- Strong proficiency in C# and .NET Core.
- Experience building RESTful services and microservices.
- Hands‑on experience with AWS cloud platform.
- Solid understanding of API testing using Postman.
- Knowledge of relational databases (SQL Server, PostgreSQL, etc.).
- Strong problem‑solving and debugging skills.
- Experience working in Agile/Scrum teams.
Good to Have
- Experience with React for frontend development.
- Exposure to Datadog for monitoring and logging.
- Knowledge of CI/CD tools (GitHub Actions, Jenkins, AWS CodePipeline, etc.).
- Containerization experience (Docker, Kubernetes).
Soft Skills
- Strong communication and collaboration abilities.
- Ability to work in a fast‑paced environment.
- Ownership mindset with a focus on delivering high‑quality solutions.
Skills
.NET Core, C#, AWS, Postman
Notice period - 0 to 15 days only
Location: Hyderabad
Virtual Interview: 7th Feb 2026
First round will be Virtual
2nd round will be F2F
JOB DETAILS:
* Job Title: Tester III - Software Testing - Playwright + API Testing
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4 -10 years
* Location: Hyderabad
Job Description
Responsibilities:
- Design, develop, and maintain automated test scripts for web applications using Playwright.
- Perform API testing using industry-standard tools and frameworks.
- Collaborate with developers, product owners, and QA teams to ensure high-quality releases.
- Analyze test results, identify defects, and track them to closure.
- Participate in requirement reviews, test planning, and test strategy discussions.
- Ensure automation coverage, maintain reusable test frameworks, and optimize execution pipelines.
Required Experience:
- Strong hands-on experience in Automation Testing for web-based applications.
- Proven expertise in Playwright (JavaScript, TypeScript, or Python-based scripting).
- Solid experience in API testing (Postman, REST Assured, or similar tools).
- Good understanding of software QA methodologies, tools, and processes.
- Ability to write clear, concise test cases and automation scripts.
- Experience with CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps) is an added advantage.
Good to Have:
- Knowledge of cloud environments (AWS/Azure)
- Experience with version control tools like Git
- Familiarity with Agile/Scrum methodologies
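Playwright scripts like those described above are easier to maintain when the flow logic is decoupled from the browser. The sketch below (the selectors and `/login` route are invented, not from the post) drives any Playwright-style `page` object, so the same function can run against a real `chromium` page or a recording double in unit tests:

```python
def login_flow(page, base_url):
    """Drive a login flow through a Playwright-style `page` object."""
    page.goto(f"{base_url}/login")       # hypothetical route
    page.fill("#username", "qa_user")    # hypothetical selectors
    page.fill("#password", "s3cret")
    page.click("button[type=submit]")

class RecordingPage:
    """Test double that records calls instead of driving a browser."""
    def __init__(self):
        self.calls = []
    def goto(self, url):
        self.calls.append(("goto", url))
    def fill(self, selector, value):
        self.calls.append(("fill", selector))
    def click(self, selector):
        self.calls.append(("click", selector))
```

With real Playwright this would be called as `login_flow(browser.new_page(), base_url)`; the double keeps the flow logic unit-testable without a browser install.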
Skills: Automation Testing, SQL, API Testing, SoapUI Testing, Playwright
JOB DETAILS:
- Job Title: Lead II - Software Engineering - React Native (Mobile App Architecture, Performance Optimization & Scalability)
- Industry: Global digital transformation solutions provider
- Experience: 7-9 years
- Working Days: 5 days/week
- Job Location: Mumbai
- CTC Range: Best in Industry
Job Description
Job Title
Lead React Native Developer (6–8 Years Experience)
Position Overview
We are looking for a Lead React Native Developer to provide technical leadership for our mobile applications. This role involves owning architectural decisions, setting development standards, mentoring teams, and driving scalable, high-performance mobile solutions aligned with business goals.
Must-Have Skills
- 6–8 years of experience in mobile application development
- Extensive hands-on experience leading React Native projects
- Expert-level understanding of React Native architecture and internals
- Strong knowledge of mobile app architecture patterns
- Proven experience with performance optimization and scalability
- Experience in technical leadership, team management, and mentorship
- Strong problem-solving and analytical skills
- Excellent communication and collaboration abilities
- Proficiency in modern React Native development practices
- Experience with Expo toolkit and libraries
- Strong understanding of custom hooks development
- Focus on writing clean, maintainable, and scalable code
- Understanding of mobile app lifecycle
- Knowledge of cross-platform design consistency
Good-to-Have Skills
- Experience with microservices architecture
- Knowledge of cloud platforms such as AWS, Firebase, etc.
- Understanding of DevOps practices and CI/CD pipelines
- Experience with A/B testing and feature flag implementation
- Familiarity with machine learning integration in mobile applications
- Exposure to innovation-driven technical decision-making
Skills: React Native, mobile app development, DevOps, machine learning
******
Notice period - 0 to 15 days only (Need Feb Joiners)
Location: Navi Mumbai, Belapur
We are hiring a Senior DevOps Engineer (5–10 years experience) with strong hands-on expertise in AWS, CI/CD, Docker, Kubernetes, and Linux. The role involves designing, automating, and managing scalable cloud infrastructure and deployment pipelines. Experience with Terraform/Ansible, monitoring tools, and security best practices is required. Immediate joiners preferred.
Job Description: Python Automation Engineer
Location: Bangalore (Office-based)
Experience: 1–2 Years
Joining: Immediate to 30 Days
Role Overview
We are looking for a Python Automation Engineer who combines strong programming skills with hands-on automation expertise. This role involves developing automation scripts, designing automation frameworks, and contributing independently to automation solutions, with leads delegating tasks and solution directions. The ideal candidate is not a novice: they have solid real-world Python experience and are comfortable working across API automation, automation tooling, and CI/CD-driven environments.
Key Responsibilities
- Design, develop, and maintain automation scripts and reusable automation frameworks using Python
- Build and enhance API automation for REST-based services and common backend frameworks
- Independently own automation tasks and deliver solutions with minimal supervision
- Collaborate with leads and engineering teams to understand automation requirements
- Maintain clean, modular, and scalable automation code
- Occasionally review automation code written by other team members
- Integrate automation suites with CI/CD pipelines
- Package and ship automation tools/frameworks using containerization
Required Skills & Qualifications
Python (Core Requirement) - strong, in-depth hands-on experience in Python, including:
- Object-Oriented Programming (OOP) and modular design
- Writing reusable libraries and frameworks
- Exception handling, logging, and debugging
- Asynchronous concepts and performance-aware coding
- Unit testing and test automation practices
- Code quality, readability, and maintainability
API Automation
- Strong experience automating REST APIs
- Hands-on with common Python API libraries (e.g., requests, httpx, or equivalent)
- Understanding of API request/response handling, validations, and workflows
- Familiarity with different backend frameworks and FastAPI
DevOps & Engineering Practices (Must-Have)
- Strong knowledge of Git
- Experience with CI/CD tools (Jenkins, GitHub Actions, GitLab, or similar)
- Ability to integrate automation suites into pipelines
- Hands-on experience with Docker for shipping automation tools/frameworks
Good-to-Have Skills
- UI automation using Selenium (Page Object Model, cross-browser testing, headless execution)
- Exposure to Playwright for UI automation
- Basic working knowledge of Java and/or JavaScript (reading, writing small scripts, debugging)
- Understanding of API authentication, retries, mocking, and related best practices
Domain Exposure
- Experience or interest in SaaS platforms
- Exposure to AI/ML-based platforms is a plus
What We’re Looking For
- A strong engineering mindset, not just tool usage
- Someone who can build automation systems, not only execute test cases
- Comfortable working independently while aligning with technical leads
- Passion for clean code, scalable automation, and continuous improvement
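The retry best practice mentioned for API automation can be sketched as a small stdlib-only decorator with exponential backoff (the delays are shortened to near-zero here purely for illustration):

```python
import functools
import time

def retry(attempts=3, base_delay=0.01):
    """Retry a flaky callable, doubling the delay after each failure."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts - 1:
                        raise  # out of attempts: surface the real error
                    time.sleep(base_delay * (2 ** attempt))
        return wrapper
    return decorator

calls = {"n": 0}

@retry(attempts=3)
def flaky_request():
    """Stand-in for an API call that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"
```

In a real suite the decorated function would wrap a `requests`/`httpx` call; catching a narrower exception type than `Exception` is usually the better production choice.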
Job Title: Manager - Talent Acquisition
Function: Human Resource Dept.
Reports to: HR Head
Span of control: Team - Talent Acquisition Specialists
Principal Purpose
- Collaborate with department managers regularly and proactively identify future hiring needs. Attract and recruit candidates at the right cost, time, and quality. Explore and optimize all channels of sourcing, internal and external. Build a talent pipeline for future hiring needs.
- Drive excellence, experience design, and data-driven decision making.
Key Responsibilities
- Identify talent needs and translate them into an agreed recruitment plan, aimed at fulfilling those needs within time, budget, and quality constraints.
- Develop an in-depth knowledge of the job specifications, including the experience, skills, and behavioral competencies needed for success in each role.
- Conduct in-depth vacancy intake discussions leading to agreement with the hiring manager on a proposed recruitment plan.
- Partner with stakeholders to understand business requirements, educate them on market dynamics, and constantly evolve the recruitment process.
- Create a hiring plan with deliverables, timelines, and a formal tracking process.
- Coordinate, schedule, and interview candidates within the framework of the position specification; screen, interview, and prepare a candidate slate within an appropriate and consistent timeline.
- Conduct in-depth interviews of potential candidates, demonstrating the ability to anticipate hiring manager preferences.
- Build and maintain a network of potential candidates through proactive sourcing/research and ongoing relationship management.
- Recommend ideas and strategies related to recruitment that will contribute to the growth of the company; implement new processes and fine-tune standard recruiting processes that fit the organization's mission to deliver high-value results to our customers.
- Participate in special projects/initiatives, including assessment of best practices in interviewing techniques, leveraging of internal sources of talent, and identification of top performers for senior-level openings.
- Build an "Employer Brand" in the talent market and drive improvements in the talent acquisition process.
- Collaborate with marketing and communications teams for integrated branding campaigns.
- Monitor and improve onboarding satisfaction scores and early attrition rates by tracking feedback from new recruits.
- Coordinate with HR operations, IT, medical admin, and business functions to ensure Day 1 readiness (system access, ID cards, induction slotting, etc.).
- Ensure fast TAT, high-quality selection, and seamless onboarding process management.
- Develop KPI dashboards (time-to-fill, cost-per-hire, quality-of-hire, interview-to-offer ratio) and present insights to leadership.
- Mentor and develop a high-performing recruitment team; manage performance and succession planning.
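KPI dashboards such as time-to-fill reduce to simple aggregations over requisition records. A toy sketch over hypothetical data (the record shape is invented for illustration):

```python
from datetime import date

def time_to_fill_days(requisitions):
    """Average days from requisition open to fill; None if nothing is filled yet."""
    days = [(r["filled"] - r["opened"]).days for r in requisitions if r.get("filled")]
    return sum(days) / len(days) if days else None

# Hypothetical requisition records
reqs = [
    {"opened": date(2025, 1, 1), "filled": date(2025, 1, 31)},
    {"opened": date(2025, 2, 1), "filled": date(2025, 2, 21)},
    {"opened": date(2025, 3, 1), "filled": None},  # still open, excluded
]
```

Here the two filled requisitions took 30 and 20 days, so the metric is 25.0; in practice the records would come from the ATS rather than be hard-coded.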
Desired Skills
- Strategic thinker with an analytical mindset.
- Change agent able to scale processes across multiple teams or geographies.
- Project management and process optimization abilities.
- Strong employer branding and candidate experience focus.
Desired Experience & Qualification
- 10+ years of experience in HR with major exposure to Talent Acquisition, preferably in the IT industry.
- Bachelor's or Master's degree in Human Resources (Mandatory)
🚀 We’re Hiring: Senior Full Stack Developer (Python FastAPI & React.js)
📍 Location: Chennai, Tamil Nadu (On-site)
🧠 Experience: 5+ Years
🕒 Employment Type: Full-time
About Lumera Software Solutions
Lumera Software Solutions is a product development–focused organization building technology solutions for world-class supply chain leaders. Our products help global enterprises optimize, automate, and gain real-time visibility across complex supply chain operations.
We design and build scalable, high-performance software used by industry-leading supply chain organizations worldwide, collaborating closely with global stakeholders to solve real, high-impact problems.
Our teams work closely across engineering, design, and product to deliver reliable, well-architected solutions, while collaborating with a globally distributed team across regions and time zones.
🔥 Why This Role Is Different
This role is designed for senior engineers and architect-leaning developers who want to take technical ownership and influence how complex, enterprise-grade systems are built.
You will be building mission-critical product features used by world-class supply chain leaders, working on problems that demand strong system design, scalability, and long-term architectural thinking.
This is a core product development role, not a maintenance or support-driven position.
💼 What You’ll Do
- Act as a senior individual contributor, owning complex, backend-heavy features end-to-end
- Design and develop scalable, production-grade backend services and APIs using Python & FastAPI
- Build and maintain front-end components using React.js, with primary focus on backend-driven functionality
- Participate in system design, architecture reviews, and technical decision-making
- Apply AI-assisted development tools to improve development speed, testing, and code quality
- Collaborate with a globally distributed engineering and product team while working from our Chennai office
- Work closely with databases to design efficient schemas and optimize queries
- Apply strong understanding of DevOps concepts including CI/CD, containerization, deployment, monitoring, and observability
- Mentor engineers through code reviews and design discussions
- Drive improvements in performance, reliability, security, and scalability
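The schema-design and query-optimization work described above can be illustrated with the stdlib `sqlite3` module (the `shipments` table and its columns are invented): adding an index on the filtered column turns a full table scan into an index search, which `EXPLAIN QUERY PLAN` makes visible.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shipments (id INTEGER PRIMARY KEY, carrier TEXT, eta TEXT)")
conn.executemany(
    "INSERT INTO shipments (carrier, eta) VALUES (?, ?)",
    [("dhl" if i % 2 else "fedex", f"2025-01-{i % 28 + 1:02d}") for i in range(100)],
)

query = "SELECT * FROM shipments WHERE carrier = 'dhl'"

# Before indexing: the planner walks the whole table
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]

conn.execute("CREATE INDEX idx_carrier ON shipments (carrier)")

# After indexing: the planner searches via idx_carrier instead
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]
```

The same scan-vs-index reasoning carries over to PostgreSQL's `EXPLAIN`, which this role would more likely use at scale.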
🛠️ What We’re Looking For
- 5+ years of hands-on experience in full stack roles with a backend-heavy focus
- Strong proficiency in Python with FastAPI, including API design and performance considerations
- Solid experience building and consuming RESTful APIs at scale
- Good working experience with React.js, JavaScript (ES6+), HTML, and CSS
- Strong understanding of backend architecture, system design, and data modeling
- Good understanding of DevOps concepts such as CI/CD pipelines, Docker, cloud deployments, and monitoring
- Ability to make sound technical trade-offs with long-term maintainability in mind
- Experience mentoring developers or leading by technical example
- Comfortable working in a focused, work-from-office (WFO) product development environment
- Educational Qualification: B.E / B.Tech (Computer Science or related fields)
➕ Nice to Have (Not Mandatory)
- Experience with cloud platforms (AWS / Azure / GCP)
- Familiarity with Docker and containerization
- Experience setting up or working with CI/CD pipelines
- Exposure to product development teams or building in-house products
- Exposure to AI/ML concepts, data pipelines, or integrating AI services into products
- Experience using AI developer tools to enhance productivity
🎯 What We Offer
- Competitive salary based on skills and impact
- High ownership and visibility of your work
- Fast learning curve with real technical challenges
- A collaborative, no-nonsense engineering culture
- Long-term growth as the company scales
✨ Lumera Software Solutions is an equal opportunity employer. We value talent, ownership, and diversity.
About the Role
We are looking for a Senior Program Manager (DevX) to lead enterprise-scale initiatives that improve developer productivity, engineering workflows, and platform efficiency. This role is central to our software transformation journey, ensuring developers have the right tools, platforms, and processes to deliver at scale.
What You’ll Do
- Lead end-to-end DevX and Developer Productivity programs
- Drive engineering efficiency across SDLC, CI/CD, and platform tooling
- Partner with Engineering, Platform, and Product teams on tooling and transformation roadmaps
- Standardize and optimize developer, database, and SDLC tools
- Reduce friction from code → build → test → deploy
- Establish program governance, metrics, and executive reporting
- Manage risks, dependencies, and large-scale change initiatives
What We’re Looking For
- 10+ years in Technical Program Management / Engineering Program Management
- Proven experience driving DevX, Platform, or Engineering Transformation programs
- Strong understanding of Agile, DevOps, and modern SDLC practices
- Experience working with large, cross-functional engineering teams
- Excellent stakeholder and executive communication skills
Tools & Platforms
- Developer Tools: GitHub, VS Code, IntelliJ, Postman
- CI/CD & SDLC: Azure DevOps, Jenkins, GitHub Actions, JIRA, Confluence, SonarQube
- Platforms: Docker, Kubernetes
- Database Tools: SSMS, DBeaver, DataGrip, Azure Data Studio, pgAdmin
Nice to Have
- Platform or Cloud Engineering background
- Experience in large-scale software or engineering transformations
- Ex-Engineer or Technical background
Role: DevOps Engineer
Experience: 7+ Years
Location: Pune / Trivandrum
Work Mode: Hybrid
Key Responsibilities:
- Drive CI/CD pipelines for microservices and cloud architectures
- Design and operate cloud-native platforms (AWS/Azure)
- Manage Kubernetes/OpenShift clusters and containerized applications
- Develop automated pipelines and infrastructure scripts
- Collaborate with cross-functional teams on DevOps best practices
- Mentor development teams on continuous delivery and reliability
- Handle incident management, troubleshooting, and root cause analysis
Mandatory Skills:
- 7+ years in DevOps/SRE roles
- Strong experience with AWS or Azure
- Hands-on with Docker, Kubernetes, and/or OpenShift
- Proficiency in Jenkins, Git, Maven, JIRA
- Strong scripting skills (Shell, Python, Perl, Ruby, JavaScript)
- Solid networking knowledge and troubleshooting skills
- Excellent communication and collaboration abilities
Preferred Skills:
- Experience with Helm, monitoring tools (Splunk, Grafana, New Relic, Datadog)
- Knowledge of Microservices and SOA architectures
- Familiarity with database technologies
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience in AWS cloud, including at recent companies
- Company: product companies preferred; exceptions for service-company candidates with strong MLOps + AWS depth
Preferred:
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria:
- CV Attachment is mandatory
- Please provide CTC Breakup (Fixed + Variable)?
- Are you open to an F2F round?
- Has the candidate filled the Google form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
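The drift-monitoring responsibility above can be prototyped before any CloudWatch/Grafana wiring exists: compare a live feature's distribution against its training baseline and alert past a threshold. A stdlib-only mean-shift sketch (the z-score threshold and sample values are illustrative choices, not from the post):

```python
import statistics

def drift_alert(baseline, live, z_threshold=3.0):
    """Flag drift when the live mean sits > z_threshold baseline stdevs away."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    z = abs(statistics.mean(live) - mu) / sigma
    return z > z_threshold

# Hypothetical feature values: training baseline vs. two live windows
baseline = [10.0, 11.0, 9.0, 10.5, 9.5, 10.0, 10.2, 9.8]
stable = [10.1, 9.9, 10.3]    # close to baseline -> no alert
shifted = [40.0, 41.0, 39.5]  # far from baseline -> alert
```

Production systems would typically use a distributional test (PSI, KS) per feature and emit the score as a metric for the alerting stack, but the control flow is the same.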
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Skills - MLOps Pipeline Development | CI/CD (Jenkins) | Automation Scripting | Model Deployment & Monitoring | ML Lifecycle Management | Version Control & Governance | Docker & Kubernetes | Performance Optimization | Troubleshooting | Security & Compliance
Responsibilities:
1. Design, develop, and implement MLOps pipelines for the continuous deployment and integration of machine learning models
2. Collaborate with data scientists and engineers to understand model requirements and optimize deployment processes
3. Automate the training, testing, and deployment processes for machine learning models
4. Continuously monitor and maintain models in production, ensuring optimal performance, accuracy, and reliability
5. Implement best practices for version control, model reproducibility, and governance
6. Optimize machine learning pipelines for scalability, efficiency, and cost-effectiveness
7. Troubleshoot and resolve issues related to model deployment and performance
8. Ensure compliance with security and data privacy standards in all MLOps activities
9. Keep up to date with the latest MLOps tools, technologies, and trends
10. Provide support and guidance to other team members on MLOps practices
Required skills and experience:
• 3-10 years of experience in MLOps, DevOps, or a related field
• Bachelor’s degree in Computer Science, Data Science, or a related field
• Strong understanding of machine learning principles and model lifecycle management
• Experience in Jenkins pipeline development
• Experience in automation scripting

US-based large biotech company with worldwide operations.
Senior Cloud Engineer Job Description
Position Title: Senior Cloud Engineer - AWS [LONG-TERM CONTRACT POSITION]
Location: Remote [REQUIRES WORKING IN CST TIME ZONE]
Position Overview
The Senior Cloud Engineer will play a critical role in designing, deploying, and managing scalable, secure, and highly available cloud infrastructure across multiple platforms (AWS, Azure, Google Cloud). This role requires deep technical expertise, leadership in cloud strategy, and hands-on experience with automation, DevOps practices, and cloud-native technologies. The ideal candidate will work collaboratively with cross-functional teams to deliver robust cloud solutions, drive best practices, and support business objectives through innovative cloud engineering.
Key Responsibilities
Design, implement, and maintain cloud infrastructure and services, ensuring high availability, performance, and security across multi-cloud environments (AWS, Azure, GCP)
Develop and manage Infrastructure as Code (IaC) using tools such as Terraform, CloudFormation, and Ansible for automated provisioning and configuration
Lead the adoption and optimization of DevOps methodologies, including CI/CD pipelines, automated testing, and deployment processes
Collaborate with software engineers, architects, and stakeholders to architect cloud-native solutions that meet business and technical requirements
Monitor, troubleshoot, and optimize cloud systems for cost, performance, and reliability, using cloud monitoring and logging tools
Ensure cloud environments adhere to security best practices, compliance standards, and governance policies, including identity and access management, encryption, and vulnerability management
Mentor and guide junior engineers, sharing knowledge and fostering a culture of continuous improvement and innovation
Participate in on-call rotation and provide escalation support for critical cloud infrastructure issues
Document cloud architectures, processes, and procedures to ensure knowledge transfer and operational excellence
Stay current with emerging cloud technologies, trends, and best practices.
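The IaC responsibility above ultimately means emitting declarative templates. A hedged Python sketch that builds a minimal CloudFormation-shaped document as plain JSON (the bucket name and the choice of properties are illustrative; real stacks would be authored in Terraform or CloudFormation directly):

```python
import json

def s3_bucket_template(bucket_logical_id, versioned=True):
    """Build a minimal CloudFormation-style template as a dict."""
    props = {}
    if versioned:
        # Versioning guards against accidental overwrites/deletes
        props["VersioningConfiguration"] = {"Status": "Enabled"}
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            bucket_logical_id: {"Type": "AWS::S3::Bucket", "Properties": props},
        },
    }

# Render the template body that a deploy step would upload
template_json = json.dumps(s3_bucket_template("ArtifactBucket"), indent=2)
```

Generating templates programmatically like this is essentially what tools such as the AWS CDK or Troposphere do at much larger scale.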
Required Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field, or equivalent work experience
- 6–10 years of experience in cloud engineering or related roles, with a proven track record in large-scale cloud environments
- Deep expertise in at least one major cloud platform (AWS, Azure, Google Cloud) and experience in multi-cloud environments
- Strong programming and scripting skills (Python, Bash, PowerShell, etc.) for automation and cloud service integration
- Proficiency with DevOps tools and practices, including CI/CD (Jenkins, GitLab CI), containerization (Docker, Kubernetes), and configuration management (Ansible, Chef)
- Solid understanding of networking concepts (VPC, VPN, DNS, firewalls, load balancers), system administration (Linux/Windows), and cloud storage solutions
- Experience with cloud security, governance, and compliance frameworks
- Excellent analytical, troubleshooting, and root cause analysis skills
- Strong communication and collaboration abilities, with experience working in agile, interdisciplinary teams
- Ability to work independently, manage multiple priorities, and lead complex projects to completion
Preferred Qualifications
- Relevant cloud certifications (e.g., AWS Certified Solutions Architect, AWS DevOps Engineer, Microsoft AZ-300/400/500, Google Professional Cloud Architect)
- Experience with cloud cost optimization and FinOps practices
- Familiarity with monitoring/logging tools (CloudWatch, Kibana, Logstash, Datadog, etc.)
- Exposure to cloud database technologies (SQL, NoSQL, managed database services)
- Knowledge of cloud migration strategies and hybrid cloud architectures
REVIEW CRITERIA:
MANDATORY:
- Strong Hands-On AWS Cloud Engineering / DevOps Profile
- Mandatory (Experience 1): Must have 12+ years of experience in AWS Cloud Engineering / Cloud Operations / Application Support
- Mandatory (Experience 2): Must have strong hands-on experience supporting AWS production environments (EC2, VPC, IAM, S3, ALB, CloudWatch)
- Mandatory (Infrastructure as Code): Must have hands-on Infrastructure as Code experience using Terraform in production environments
- Mandatory (AWS Networking): Strong understanding of AWS networking and connectivity (VPC design, routing, NAT, load balancers, hybrid connectivity basics)
- Mandatory (Cost Optimization): Exposure to cost optimization and usage tracking in AWS environments
- Mandatory (Core Skills): Experience handling monitoring, alerts, incident management, and root cause analysis
- Mandatory (Soft Skills): Strong communication skills and stakeholder coordination skills
ROLE & RESPONSIBILITIES:
We are looking for a hands-on AWS Cloud Engineer to support day-to-day cloud operations, automation, and reliability of AWS environments. This role works closely with the Cloud Operations Lead, DevOps, Security, and Application teams to ensure stable, secure, and cost-effective cloud platforms.
KEY RESPONSIBILITIES:
- Operate and support AWS production environments across multiple accounts
- Manage infrastructure using Terraform and support CI/CD pipelines
- Support Amazon EKS clusters, upgrades, scaling, and troubleshooting
- Build and manage Docker images and push to Amazon ECR
- Monitor systems using CloudWatch and third-party tools; respond to incidents
- Support AWS networking (VPCs, NAT, Transit Gateway, VPN/DX)
- Assist with cost optimization, tagging, and governance standards
- Automate operational tasks using Python, Lambda, and Systems Manager
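The tagging and governance duty above is commonly automated with a small audit helper. The sketch below checks resource tag dicts against a required-tag policy; the tag keys and the simplified `{"Id": ..., "Tags": {...}}` shape are illustrative (real AWS APIs return tags as key/value lists):

```python
REQUIRED_TAGS = {"Owner", "CostCenter", "Environment"}  # illustrative policy

def untagged_resources(resources, required=REQUIRED_TAGS):
    """Return (resource id, sorted missing tag keys) for non-compliant resources."""
    offenders = []
    for res in resources:
        missing = required - set(res.get("Tags", {}))
        if missing:
            offenders.append((res["Id"], sorted(missing)))
    return offenders

# Hypothetical inventory, e.g. flattened from a describe-instances call
fleet = [
    {"Id": "i-0aaa", "Tags": {"Owner": "data-eng", "CostCenter": "42", "Environment": "prod"}},
    {"Id": "i-0bbb", "Tags": {"Owner": "data-eng"}},
]
```

Wired into a scheduled Lambda, the offender list would feed cost-allocation reports or auto-remediation.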
IDEAL CANDIDATE:
- Strong hands-on AWS experience (EC2, VPC, IAM, S3, ALB, CloudWatch)
- Experience with Terraform and Git-based workflows
- Hands-on experience with Kubernetes / EKS
- Experience with CI/CD tools (GitHub Actions, Jenkins, etc.)
- Scripting experience in Python or Bash
- Understanding of monitoring, incident management, and cloud security basics
NICE TO HAVE:
- AWS Associate-level certifications
- Experience with Karpenter, Prometheus, New Relic
- Exposure to FinOps and cost optimization practices

Global digital transformation solutions provider.
JOB DETAILS:
Job Role: Lead I - .Net Developer - .NET, Azure, Software Engineering
Industry: Global digital transformation solutions provider
Work Mode: Hybrid
Salary: Best in Industry
Experience: 6-8 years
Location: Hyderabad
Job Description:
• Experience in Microsoft web development technologies such as Web API and SOAP XML
• C#/.NET/.NET Core and ASP.NET web application experience
• Cloud-based development experience in AWS or Azure
• Knowledge of cloud architecture and technologies
• Support/incident management experience in a 24/7 environment
• SQL Server and SSIS experience
• DevOps experience with GitHub and Jenkins CI/CD pipelines, or similar
• Windows Server 2016/2019+ and SQL Server 2019+ experience
• Experience of the full software development lifecycle
• You will write clean, scalable code, with a view towards design patterns and security best practices
• Understanding of Agile methodologies, working within the Scrum framework
• AWS knowledge
Must-Haves
C#/.NET/.NET Core (experienced), ASP.NET web applications (experienced), SQL Server/SSIS (experienced), DevOps (GitHub/Jenkins CI/CD), cloud architecture (AWS or Azure)
.NET (Senior level), Azure (Very good knowledge), Stakeholder Management (Good)
Mandatory skills: .NET Core with Azure or AWS experience
Notice period - 0 to 15 days only
Location: Hyderabad
Virtual Drive - 17th Jan
Job Title: Python Developer (5–8+ Years Experience)
Location: Mumbai (Onsite)
Experience: 5–8+ Years
Salary: ₹9,00,000 – ₹12,00,000 per Annum (depending on experience & skill set)
Employment Type: Full-time
Job Description
We are looking for an experienced Python Developer to join our growing team in Mumbai. The ideal candidate will have strong hands-on experience in Python development, building scalable backend systems, and working with databases and APIs.
Key Responsibilities
- Design, develop, test, and maintain Python-based applications
- Build and integrate RESTful APIs
- Work with frameworks such as Django / Flask / FastAPI
- Write clean, reusable, and efficient code
- Collaborate with frontend developers, QA, and project managers
- Optimize application performance and scalability
- Debug, troubleshoot, and resolve technical issues
- Participate in code reviews and follow best coding practices
- Work with databases and ensure data security and integrity
- Deploy and maintain applications in staging/production environments
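In practice, the API work above would use one of the frameworks named in this posting (Django/Flask/FastAPI). As a framework-free illustration of the same idea, here is a minimal JSON endpoint built with only the standard library; the `/items` route and in-memory store are hypothetical stand-ins for a real database-backed resource:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Hypothetical in-memory store standing in for a real database table.
ITEMS = {1: {"id": 1, "name": "sample"}}

class ItemHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route: GET /items/<id> returns one record as JSON, else 404.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "items" and parts[1].isdigit():
            item = ITEMS.get(int(parts[1]))
            if item is not None:
                body = json.dumps(item).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_response(404)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging in this example

def serve(port=0):
    # Port 0 asks the OS for any free port; the server runs in a
    # daemon thread so the caller can keep working.
    server = ThreadingHTTPServer(("127.0.0.1", port), ItemHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A real deployment would add authentication, validation, and persistence, which is exactly where Django/Flask/FastAPI earn their keep.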
Required Skills & Qualifications
- 5–8+ years of hands-on experience in Python development
- Strong experience with Django / Flask / FastAPI
- Good understanding of REST APIs
- Experience with MySQL / PostgreSQL / MongoDB
- Familiarity with Git and version control workflows
- Knowledge of OOP concepts and design principles
- Experience with Linux-based environments
- Understanding of basic security and performance optimization
- AI tool integration: GitHub Copilot, Windsurf, Cursor, AIDE, etc.
- Ability to work independently as well as in a team
Good to Have (Preferred Skills)
- Experience with AWS / cloud services
- Knowledge of Docker / CI-CD pipelines
- Good understanding of prompt engineering
- Exposure to Microservices Architecture
- Basic frontend knowledge (HTML, CSS, JavaScript)
- Experience working in an Agile/Scrum environment
- Experience working with AI APIs such as the OpenAI (ChatGPT), Gemini, or Claude APIs
- Integrating AI APIs into web applications
- Experience using AI for automation, content generation, data processing, or workflow optimization
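Integrating AI APIs into production web applications, as described above, usually means dealing with rate limits and transient failures. One common pattern is a retry-with-exponential-backoff wrapper; this is a sketch only, where `call` stands in for any real SDK request (an OpenAI, Gemini, or Claude chat-completion call), with the vendor-specific details deliberately omitted:

```python
import time

def with_retries(call, attempts=3, base_delay=0.5):
    """Run a zero-argument API call, retrying on failure.

    `call` is a placeholder for a real AI API request; in production
    you would also catch only the SDK's retryable exception types
    (rate-limit and timeout errors) rather than bare Exception.
    """
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the original error
            # Exponential backoff between attempts: 0.5s, 1s, 2s, ...
            time.sleep(base_delay * (2 ** attempt))
```

Usage: wrap the SDK call in a lambda, e.g. `with_retries(lambda: client.chat.completions.create(...))`, where `client` is whatever SDK object your vendor provides.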
Experience:
- Total: 5+ years (Required)
- Python: 5 years (Required)
JOB DETAILS:
- Job Title: Senior DevOps Engineer 2
- Industry: Ride-hailing
- Experience: 5-7 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidate must be from a product-based or scalable app-based start-up, with experience handling large-scale production traffic.
2. Minimum 5 yrs of experience working as a DevOps/Infrastructure Consultant
3. Own end-to-end infrastructure right from non-prod to prod environments, including self-managed DBs
4. Candidate must have experience in database migration from scratch
5. Must have a firm hold on the container orchestration tool Kubernetes
6. Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet
7. Understanding of programming languages like Go, Python, and Java
8. Working experience with databases like Mongo, Redis, Cassandra, Elasticsearch, and Kafka
9. Working experience on Cloud platform - AWS
10. Candidate should have a minimum of 1.5 years' stability per organization and a clear reason for relocation.
Description
Job Summary:
As a DevOps Engineer at the company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, this is the role for you.
Job Responsibilities:
● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
● Codify our infrastructure
● Do what it takes to keep the uptime above 99.99%
● Understand the bigger picture and sail through the ambiguities
● Scale technology considering cost and observability and manage end-to-end processes
● Understand DevOps philosophy and evangelize the principles across the organization
● Strong communication and collaboration skills to break down the silos
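The 99.99% uptime target in the responsibilities above implies a concrete error budget, and the arithmetic is worth internalizing. The helper below is purely illustrative:

```python
def error_budget_minutes(availability, days=365):
    """Minutes of allowed downtime over `days` at a given availability target."""
    total_minutes = days * 24 * 60
    return total_minutes * (1.0 - availability)

# Four nines (99.99%) over a year leaves roughly 52.6 minutes of
# total downtime; over a 30-day month, about 4.3 minutes.
```

That budget is what makes automation non-negotiable: a single manual incident response can consume a month's allowance.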
Job Requirements:
● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience
● Minimum 5 yrs of experience working as a DevOps/Infrastructure Consultant
● Must have a firm hold on the container orchestration tool Kubernetes
● Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet
● Strong problem-solving skills, and ability to write scripts using any scripting language
● Understanding of programming languages like Go, Python, and Java
● Comfortable working with databases like Mongo, Redis, Cassandra, Elasticsearch, and Kafka
What’s there for you?
The company's team handles everything: infra, tooling, and a number of self-managed databases. At a glance:
● 150+ microservices with event-driven architecture across different tech stacks (Golang/Java/Node)
● More than 100,000 requests per second on our edge gateways
● ~20,000 events per second on self-managed Kafka
● 100s of TB of data in self-managed databases
● 100s of real-time continuous deployments to production
● Self-managed infrastructure, 100% OSS