50+ Terraform Jobs in India
Company Description
Rootflo is an innovative AI-driven company transforming finance back offices through advanced revenue intelligence and workflow automation solutions. Specializing in empowering financial institutions, Rootflo partners with leading banks, lending platforms, and insurance companies to revolutionize their operations. Leveraging the power of artificial intelligence, Rootflo provides efficient and intelligent systems that drive business outcomes. The company is dedicated to delivering impactful advancements for modern financial processes.
About the Role
We are looking for a motivated and technically sound Junior Cloud / Infrastructure Engineer with approximately 1 year of hands-on experience in designing, deploying, and managing cloud infrastructure on AWS, GCP, or Azure using Kubernetes. The candidate will work closely with senior engineers to support and maintain scalable, secure, and highly available cloud environments while gaining exposure across the full cloud stack.
Key Responsibilities
Cloud Infrastructure
- Assist in provisioning and managing cloud resources (VMs, storage, networking, databases) on AWS / GCP / Azure with Kubernetes
- Support infrastructure-as-code (IaC) implementation using Terraform
- Monitor cloud resource usage, costs, and performance; raise alerts for anomalies
- Set up log monitoring and related observability tooling
- Implement cost optimizations and efficient resource utilization (see the cost-monitoring sketch after this list)
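As a concrete illustration of the cost-monitoring work above, here is a minimal Python sketch that pulls per-service AWS spend via boto3's Cost Explorer API and flags overruns. The date range and alert threshold are illustrative, and AWS credentials are assumed to be configured.

```python
# Minimal sketch: pull last month's AWS spend per service with boto3's
# Cost Explorer client and flag anything above a naive threshold.
import boto3

ce = boto3.client("ce")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-01-01", "End": "2025-02-01"},  # illustrative range
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

ALERT_THRESHOLD_USD = 500.0  # hypothetical per-service budget

for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount > ALERT_THRESHOLD_USD:
        print(f"ALERT: {service} spent ${amount:,.2f} last month")
```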
Networking & Security
- Configure VPCs, subnets, security groups, IAM roles, and access policies
- Assist in implementing firewalls, SSL/TLS certificates, and VPN connectivity
- Support compliance and security best practices (CIS benchmarks, least privilege)
CI/CD & DevOps Support
- Work with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI, or similar)
- Assist in containerization and orchestration using Docker and Kubernetes (EKS / GKE / AKS)
- Support deployment automation and version control workflows
Monitoring & Incident Response
- Use monitoring tools such as CloudWatch, Stackdriver, Azure Monitor, Grafana, or Datadog
- Respond to infrastructure alerts and assist in root cause analysis (RCA)
- Participate in on-call rotations and incident triage as required
Collaboration
- Work closely with development, QA, and product teams to support release cycles
- Maintain documentation for infrastructure, runbooks, and architecture diagrams
- Participate in code reviews for infrastructure scripts and configurations
Required Skills & Qualifications
Cloud Platforms (Any One Primary)
- AWS: EC2, S3, RDS, Lambda, VPC, IAM, ECS/EKS, CloudWatch, Route 53
- GCP: Compute Engine, GCS, Cloud SQL, GKE, Cloud Functions, VPC, IAM, Stackdriver
- Azure: VMs, Azure Storage, Azure SQL, AKS, Azure Functions, VNet, AAD, Azure Monitor
Infrastructure & Automation
- Hands-on with Terraform and/or CloudFormation / ARM Templates / GCP Deployment Manager
- Proficiency in Linux/Unix system administration
- Scripting skills in Bash, Python, or PowerShell
Containers & DevOps
- Working knowledge of Docker (build, push, run, Compose)
- Basic understanding of Kubernetes concepts (pods, deployments, services, ingress)
- Familiarity with at least one CI/CD tool
Networking Fundamentals
- Understanding of DNS, HTTP/HTTPS, TCP/IP, load balancers, CDNs
- Experience with cloud-native networking (VPCs, peering, NAT gateways)
Soft Skills
- Strong problem-solving ability and attention to detail
- Good written and verbal communication
- Eagerness to learn and adapt in a fast-paced environment
Good to Have
- Cloud certification: AWS Solutions Architect Associate / GCP ACE / Azure AZ-900 or AZ-104
- Experience with GitOps tools like ArgoCD or Flux
- Exposure to service mesh (Istio, Linkerd)
- Knowledge of Ansible or Chef/Puppet for configuration management
- Familiarity with observability tools: Prometheus, Grafana, ELK Stack
- Experience with cost optimization and FinOps practices
Senior Data Engineer (Databricks, BigQuery, Snowflake)
Experience: 8+ Years in Data Engineering
Location: Remote | Onsite (Noida, Gurgaon, Pune, Nagpur, Jaipur, Gandhinagar)
Budget: Open / Competitive
Job Summary:
We are seeking a highly skilled Senior Data Engineer to design, build, and optimize scalable data solutions that support advanced analytics and machine learning initiatives. You will lead the development of reliable, high-performance data systems and collaborate closely with data scientists to enable data-driven decision-making.
In this role, we expect a forward-thinking professional who utilizes AI-augmented development tools (such as Cursor, Windsurf, or GitHub Copilot) to increase engineering velocity and maintain high code standards in a modern enterprise environment.
Key Responsibilities:
- Scalable Pipelines: Design, develop, and optimize end-to-end data pipelines using SQL, Python, and PySpark (see the PySpark sketch after this list).
- ETL/ELT Workflows: Build and maintain workflows to transform raw data into structured, analytics-ready datasets.
- ML Integration: Partner with data scientists to deploy and integrate machine learning models into production environments.
- Cloud Infrastructure: Manage and scale data infrastructure within AWS and Azure ecosystems.
- Data Warehousing: Utilize Databricks and Snowflake for big data processing and enterprise warehousing.
- Automation & IaC: Implement workflow orchestration using Apache Airflow and manage infrastructure as code via Terraform.
- Performance Tuning: Optimize data storage, retrieval, and system performance across data warehouse platforms.
- Governance & Compliance: Ensure data quality and security using tools like Unity Catalog or Hive Metastore.
- AI-Augmented Development: Integrate AI tools and LLM APIs into data pipelines and use AI IDEs to streamline debugging and documentation.
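To make the pipeline work concrete, here is a minimal PySpark sketch of the kind of ELT transform described above: read raw events, clean them, and write an analytics-ready, partitioned table. The S3 paths and column names are hypothetical.

```python
# Minimal sketch of an ELT-style transform with PySpark: read raw events,
# normalize them, and write an analytics-ready table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw_to_analytics").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/events/")  # hypothetical path

clean = (
    raw.filter(F.col("event_type").isNotNull())      # drop malformed events
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])                 # idempotent re-runs
)

(clean.write
      .mode("overwrite")
      .partitionBy("event_date")                     # partition for cheap scans
      .parquet("s3://example-bucket/analytics/events/"))
```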
Technical Requirements:
- Experience: 8+ years of core Data Engineering experience in large-scale enterprise or consulting environments.
- Languages: Expert proficiency in SQL and Python for complex data processing.
- Big Data: Hands-on experience with PySpark and large-scale distributed computing.
- Architecture: Strong understanding of ETL frameworks, data pipeline architecture, and data warehousing best practices.
- Cloud Platforms: Deep working knowledge of AWS and Azure.
- Modern Tooling: Proven experience with Databricks, Snowflake, and Apache Airflow.
- Infrastructure: Experience with Terraform or similar IaC tools for scalable deployments.
- AI Competency: Proficiency in using AI IDEs (Cursor/Windsurf) and integrating AI/ML models into production data flows.
Preferred Qualifications:
- Exposure to data governance and cataloging tools (e.g., Unity Catalog).
- Knowledge of performance tuning for massive-scale big data systems.
- Familiarity with real-time data processing frameworks.
- Experience in digital transformation and sustainability-focused data projects.
About BootLabs:
BootLabs is a boutique tech consulting and digital engineering company that partners with leading enterprises to build and support scalable, cloud-native, and data-driven platforms. We specialise in delivering high-impact solutions across cloud, AI/GenAI, platform engineering, and enterprise application support. Our teams work closely with global clients across highly regulated industries, including BFSI, Healthcare, E-commerce, and other domains, ensuring reliability, security, and operational excellence for mission-critical systems.
JD:
- Extensive experience in deploying and supporting enterprise workloads across Azure IaaS, PaaS, and SaaS environments.
- Experience managing enterprise Landing Zones & Azure Policies.
- Exposure to Azure Data Lake and Azure Databricks (basic to intermediate).
- Knowledge of Terraform and any DevOps CI/CD pipelines.
- Hybrid connectivity experience (VPN Gateway / ExpressRoute).
- Strong networking fundamentals (NSG, UDR, Peering).
- OS-level troubleshooting (Windows & Linux).
- Experience with enterprise security tools such as Prisma / Cortex or similar vulnerability & endpoint security tools.
- Experience in production support, governance, and security guardrails.
- Experience handling production incidents, RCA preparation, and performance tuning, including change management & CAB coordination.
- Hands-on experience with native Azure security services such as Azure Firewall and Azure Application Gateway with WAF.
- Cost optimization, monitoring, and alerting implementation.
Job Title : DevOps Engineer
Experience : 3+ Years
Location : Indiranagar, Bengaluru (Work From Office – 5 Days)
Employment Type : Full-Time
Work Timings : 11:00 AM to 7:00 PM IST
Notice Period : Immediate Joiners Preferred
Role Overview :
We are seeking a skilled DevOps Engineer with 3+ years of experience in building and managing scalable cloud-native infrastructure.
The ideal candidate will have strong expertise in Kubernetes and Helm, along with hands-on experience in deploying and maintaining production-grade systems on cloud platforms.
This role offers an opportunity to work in a high-growth startup environment, contributing to both existing systems and new infrastructure development.
Key Responsibilities :
- Design, deploy, and manage scalable infrastructure using Kubernetes.
- Build and maintain CI/CD pipelines for efficient and automated deployments.
- Manage and optimize cloud environments (preferably GCP).
- Implement Infrastructure as Code using Helm/Terraform.
- Monitor system performance and ensure high availability and reliability (see the pod-health sketch after this list).
- Handle bug fixes, system improvements, and performance optimization.
- Collaborate with engineering teams to design scalable microservices architecture.
- Implement logging, monitoring, and alerting solutions.
- Ensure security best practices including IAM, secrets management, and network policies.
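As a small illustration of the monitoring responsibility above, the sketch below uses the official Kubernetes Python client to flag pods that are not healthy. It assumes a local kubeconfig, and the namespace is a placeholder.

```python
# Minimal sketch: list pods that are not Running/Succeeded using the
# official Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # in-cluster code would use load_incluster_config()
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="production").items:  # placeholder namespace
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"unhealthy: {pod.metadata.name} is {phase}")
```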
Mandatory Skills :
- Strong hands-on experience with Kubernetes.
- Expertise in Helm Charts.
- Experience with Google Cloud Platform (GCP).
- Hands-on experience with ArgoCD or similar CI/CD tools.
- Knowledge of CI/CD tools like Jenkins, GitHub Actions, GitLab CI.
- Experience in database hosting and scaling.
Nice to Have :
- Exposure to other cloud platforms (AWS/Azure).
- Experience with modern DevOps and automation tools.
- Ability to quickly learn and adapt to new technologies.
Team & Work Scope :
- No dedicated DevOps team currently – high ownership role.
- Work on both existing systems (maintenance & improvements) and new system builds (greenfield projects).
- Opportunity to shape DevOps practices and infrastructure from scratch.
Preferred Candidate Profile :
- 3+ years of relevant DevOps experience.
- Strong problem-solving and debugging skills.
- Experience working in fast-paced startup environments.
- Understanding of scalability, security, and performance optimization.
- Good communication and collaboration skills.
Hiring Process :
- Profile Screening
- GT Assessment
- Technical Interview – Round 1
- Technical Interview – Round 2
- Final Round (if required with US team)
Must have skills:
● Experience: 6+ years of hands-on experience in Cloud Platform Engineering, DevOps, or Site Reliability Engineering (SRE).
● Multi-Cloud Infrastructure: Proficiency in architecting, deploying, and maintaining cloud infrastructure across GCP and Azure (VPC, IAM, Cloud Storage/Blob, Cloud Run/Functions, Pub/Sub, GKE/AKS, Cloud SQL).
● Container Orchestration: Extensive experience with Kubernetes (GKE or AKS) and Docker for managing and scaling containerized applications.
● Infrastructure as Code (IaC) & Automation: Strong proficiency using Terraform along with Python and Bash/Shell scripting for infrastructure automation.
● CI/CD Automation: Experience building and managing CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI, or ArgoCD.
● Observability & Monitoring: Experience using tools such as Datadog, Prometheus, Grafana, or Splunk for monitoring, logging, and alerting.
● Secrets & Security Management: Experience managing sensitive credentials using HashiCorp Vault, GCP Secret Manager, or Azure Key Vault.
● Architecture & Networking: Understanding of microservices architecture, service-oriented architecture, event-driven systems (Pub/Sub), and cloud networking principles (a Pub/Sub sketch follows this list).
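For a flavor of the event-driven work mentioned above, here is a minimal sketch that publishes an event to a GCP Pub/Sub topic with the google-cloud-pubsub client. The project and topic IDs are placeholders, and application-default credentials are assumed.

```python
# Minimal sketch: publish an event to a GCP Pub/Sub topic.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("example-project", "deploy-events")  # placeholders

# Payloads are bytes; attributes could carry metadata instead.
future = publisher.publish(topic_path, b"service=api version=1.2.3")
print(f"published message id: {future.result()}")
```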
Good to have skills:
● AI/ML Infrastructure: Familiarity with infrastructure for ML workloads such as Vertex AI, Azure Machine Learning, GPU node pools, or Vector Databases.
● Advanced Kubernetes: Working knowledge of Kyverno for policy management, Karpenter for cluster autoscaling, or building Kubernetes operators using Go.
● Multi-Cloud Management: Familiarity with Crossplane for managing multi-cloud environments and building cloud-native platforms.
● Cloud Reliability & FinOps: Understanding of disaster recovery, fault tolerance, and cost allocation practices through resource tagging.
● Domain & Compliance: Experience working in regulated environments such as BFSI or Insurance.
The DevOps Engineer will play a critical role in operationalizing artificial intelligence across Bell Techlogix client environments. This role focuses on building and supporting cloud infrastructure, CI/CD pipelines, and automation frameworks that power AI and machine learning workloads. The ideal candidate has experience supporting AI platforms such as Azure AI, Azure Machine Learning, Azure OpenAI, and ServiceNow or conversational AI platforms, and understands the operational requirements of production AI systems, including reliability, scalability, and security.
Key Responsibilities
•Design, build, and operate cloud infrastructure and platform services that support AI and machine learning workloads in production, SLA-driven managed services environments
•Implement CI/CD and MLOps pipelines to enable automated training, testing, deployment, and rollback of AI and ML models
•Develop and maintain Infrastructure as Code to provision AI-ready environments consistently across dev/test/prod
•Support AI platform operations including monitoring model health, pipeline execution, compute utilization, and data dependencies
•Partner with Machine Learning Engineers and Data Engineers to standardize deployment patterns for AI services and LLM-based solutions
•Enable secure and scalable AI integrations using APIs, messaging, and event-driven architectures
•Implement observability solutions for AI platforms, including logging, metrics, alerting, and drift detection integrations (see the metrics sketch after this list)
•Troubleshoot AI platform incidents, perform root cause analysis, and implement remediation to improve reliability and automation coverage
•Apply security best practices for AI environments including secrets management, identity and access controls, network isolation, and policy enforcement
•Support AI-driven automation use cases across platforms such as Microsoft Copilot, ServiceNow, and conversational AI tools
•Collaborate with service desk, security, and architecture teams to continuously improve AI service delivery and operational maturity
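To illustrate the observability bullet above, here is a minimal sketch that exposes model-health metrics for Prometheus scraping with the prometheus_client library. The metric names and values are illustrative; a real pipeline would feed them from the serving layer.

```python
# Minimal sketch: expose model-health metrics on an HTTP endpoint that
# Prometheus can scrape. Values here are random stand-ins.
import random
import time

from prometheus_client import Gauge, start_http_server

drift_score = Gauge("model_drift_score", "Population drift vs. training data")
latency_ms = Gauge("model_inference_latency_ms", "Last inference latency")

start_http_server(9100)  # Prometheus scrapes http://host:9100/metrics

while True:
    # In a real pipeline these would come from the serving layer.
    drift_score.set(random.random())
    latency_ms.set(random.uniform(5, 50))
    time.sleep(15)
```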
Required Qualifications
•Bachelor’s degree in Computer Science, Engineering, or equivalent practical experience
•5+ years of experience in DevOps, cloud engineering, or platform operations, with exposure to AI or data workloads
•Hands-on experience with Microsoft Azure, including compute, networking, storage, and monitoring services
•Experience building CI/CD pipelines using Azure DevOps, GitHub Actions, or similar tools
•Working knowledge of Infrastructure as Code (Terraform and/or Bicep/ARM)
•Scripting experience using PowerShell and/or Python
•Experience supporting production platforms with incident management, change control, and root cause analysis
•Understanding of cloud security fundamentals and enterprise governance requirements
Preferred Qualifications
•Experience with Azure Machine Learning, Azure AI Services, Azure OpenAI, or MLOps frameworks
•Exposure to containerization and orchestration technologies (Docker, Kubernetes, AKS)
•Experience supporting data pipelines or feature stores used by machine learning systems
•Familiarity with ServiceNow, AI-driven ITSM workflows, or automation platforms
•Experience with observability tools
•Knowledge of Responsible AI, data governance, and compliance considerations for AI systems
•Relevant certifications (Microsoft Azure Administrator, Azure DevOps Engineer, Azure AI Engineer)
Company Overview:
Planview has one mission: to build the future of connected work with market-leading portfolio management and work management solutions. Planview is a recognized innovator and industry leader; our solutions enable organizations to connect the business from ideas to impact, empowering companies to accelerate the achievement of what matters most. Our solutions span every class of work, resource, and organization to address the varying needs of diverse and distributed teams, departments, and enterprises.
As a Sr CloudOps Engineer II, you will oversee teams of Engineers and be a champion for configuration management, technologies in the cloud, and continuous improvement. You will work closely with global leaders to ensure that our applications, infrastructure, and processes are scalable, secure, and supportable. By leveraging your production experience and development skills you will work hand in hand with Engineers (Dev, DevOps, DBOps) to design and implement solutions that improve delivery of value to customers, reduce costs, and eliminate toil.
Responsibilities (What you will do):
- Guide the professional development of Engineers and support the teams to accomplish business goals
- Work closely with leaders in Israel to align on priorities and to architect, deliver, and manage our products
- Build systems that are secure, scalable, and self-healing.
- Manage and improve deployment pipelines.
- Triage and remediate production issues.
- Participate in on-call rotations for escalations.
Qualifications (What you will bring):
- Bachelor's degree in CS or equivalent experience in a related field.
- 2+ years managing Engineering teams.
- 8+ years of experience as a site reliability or platform engineer, preferably in a fast-scaling environment
- 5+ years administering Linux and Windows environments.
- 3+ years programming / scripting experience (e.g., Python, JavaScript, PowerShell)
- Strong technical knowledge of operating systems (Linux and Windows), virtualization, storage systems, networking, and firewall implementations
- Experience maintaining production environments on-premise (90%) and in the cloud (10%) (e.g., AWS, Google Cloud, Azure)
- Solid understanding of networking principles and how they apply to data flow and security.
- Experience automating deployments of cloud-based services (e.g., AWS EC2 / RDS, Docker, Kubernetes)
- Experience managing CI/CD infrastructures, with strong proficiency in platforms like Bitbucket and Jenkins to streamline deployment pipelines and ensure efficient software delivery.
- Management of resources using Infrastructure as Code tools (e.g., CloudFormation, Terraform, Chef)
- Knowledge of observability tools such as LogicMonitor, New Relic, Prometheus, and Coralogix, as well as their implementation.
- Worked within Agile and Lean software development teams.
- Experience working in globally distributed teams.
- Ability to see the big picture and manage risks.
About the role
We are looking for talented Senior Backend Engineers (5+ years of experience) to join our team and take ownership of different parts of our stack. You will be working alongside a team of Engineers locally and directly with the U.S. Engineering team on all aspects of product/application development. You will leverage your experiences and abilities to inform decisions across product development and technology. You will help us build the foundation of our 2nd Headquarters in Pune: its culture, its processes, and its practices. There are a ton of interesting problems to solve, so come hungry. If your colleagues describe you as curious, driven, kind, and creative you are a culture fit.
What Success Looks Like
- You write, review and ship code in production. Your employer or client's success depends on the software you build
- You use Generative AI tools on a daily basis to enhance the quality and efficacy of your software and non-software deliverables
- You are a self-starter and enjoy working with minimal supervision
- You evaluate and make technical architecture decisions with a long-term view, optimizing for speed, quality, and safety
- You take pride in the product you create and the code that you write
- Your team can rely on you to get them out of a sticky situation in production
- You can work well on a team of sales executives, designers and engineers in an in-person environment
- You are passionate about the enterprise software development lifecycle and feel strongly about improving it
- You are a first principles engineer who exercises curiosity about the technologies you work with
- You can learn quickly about technologies, software and code that you are not familiar with, often from rudimentary documentation
- You take ownership of the code that you write, and you help the team operate with everything that you build, throughout its lifecycle
- You communicate openly and solicit feedback on important decisions, keeping the team aligned on your rationale
- You exercise an optimistic mindset and are willing to go the extra mile to make things work
Areas of Ownership
Our hiring process is designed for you to demonstrate a generalist set of capabilities, with a specialization in Backend Technologies.
Required Technical Experience (MUST HAVE):
- Expertise in Python
- Deep hands-on experience with Terraform
- Proficiency in Kubernetes
- Experience with cloud platforms (GCP strongly preferred, AWS/Azure acceptable)
Additional experience with some of the following:
- Backend Frameworks and Technologies (Node.js, NuxtJS, Express.js)
- Programming languages (JavaScript, TypeScript, Java, C++, Go)
- RPCs (REST, gRPC or GraphQL)
- Databases (SQL, NoSQL, Postgres, MongoDB, or Firebase)
- CI/CD (Jenkins, CircleCI, GitLab or similar)
- Source code versioning tools such as Git or Perforce
- Microservices architecture
Ways to stand out
- Familiarity with AI Platforms
- Extensive experience with building enterprise-scale applications with >99% SLAs
- Deep expertise across the full required stack: Python, Terraform, Kubernetes, and GCP
You'll Get...
- Competitive Salary
- Medical Insurance Benefits
- Employer Provident Fund contributions with Gratuity after 5 years of service
- Company-sponsored US onsite trips for high performers, based on business requirements
- Potential international transfer support for top performers, based on business requirements
- Technology equipment and/or allowance (hardware, software, training, etc.)
- The opportunity to re-shape an entire industry
- Beautiful office environment
- Meal allowance and/or food provision on site
Culture
Who we are: Our Co-Founder and CTO is a Serial Gen AI Inventor who grew up in Pune, India, is a BITS Pilani graduate, and worked at NVIDIA's Pune office for 6 years. There, he was promoted 5 times in 6 years and was transferred to the NVIDIA Headquarters in Santa Clara, California. After making significant contributions to NVIDIA, he proceeded to attend Harvard for his dual Masters in Engineering and MBA from HBS. Our other Co-Founder/CEO is a successful Serial Entrepreneur who has built multiple companies. As a team, we work very hard, have a curious mindset, and believe in a low-ego, high-output approach.
Virtual Hiring Drive Site Reliability Engineer (SRE)
Date: 25th April 2026, Saturday (Single-Day Drive)
Mode: 100% Virtual - All interview rounds on the same day
Experience: 3 to 7 Years
Note : We are looking for quick joiners who can join us within 30 days.
About the Role
We are looking for a Site Reliability Engineer who understands the realities of running production systems at scale. If building reliable, scalable, and observable systems excites you, you'll enjoy working with us.
At One2N, we solve One-to-N problems where proof of concept is already built and the real challenge lies in scalability, maintainability, performance, and reliability.
You will work closely with startups and mid-sized clients, helping them architect production-grade infrastructure and observability systems.
Key Responsibilities
- Design and build platform engineering solutions with a self-serve model
- Architect and optimize observability systems (metrics, logs, traces)
- Implement monitoring, logging, alerting & dashboards
- Build and optimize CI/CD pipelines
- Automate repetitive operational and infrastructure tasks (IaC-first approach; see the Terraform drift-check sketch after this list)
- Improve Developer Experience (DX)
- Guide teams on SRE best practices & on-call processes
- Participate in code reviews and mentor engineers
- Contribute to cloud-native and platform engineering initiatives
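As one example of the IaC-first automation mentioned above, the sketch below runs `terraform plan` across environments and reports drift using the plan command's detailed exit code. The directory layout and variable files are hypothetical.

```python
# Minimal sketch: run `terraform plan` per environment and fail loudly on
# drift, using -detailed-exitcode (0 = clean, 1 = error, 2 = pending changes).
import subprocess
import sys

ENVIRONMENTS = ["dev", "staging", "prod"]  # hypothetical layout: infra/<env>/

for env in ENVIRONMENTS:
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", f"-var-file={env}.tfvars"],
        cwd=f"infra/{env}",
        capture_output=True,
        text=True,
    )
    if result.returncode == 2:
        print(f"{env}: drift detected, plan has pending changes")
    elif result.returncode != 0:
        sys.exit(f"{env}: terraform plan failed:\n{result.stderr}")
    else:
        print(f"{env}: in sync")
```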
Must-Have Skills
- 3–7 years of experience in DevOps / SRE / Platform Engineering
- Strong hands-on with Kubernetes on AWS
- Expertise in observability tools like Datadog / Honeycomb / ELK / Grafana / Prometheus
- Experience with Docker & Microservices architecture
- Infrastructure as Code using Terraform / Pulumi
- Strong Linux troubleshooting skills
- Programming knowledge in Golang / Python / Java
- Automation & scripting expertise
Must-Have Skills:
- Hands-on experience with air-gapped Kubernetes clusters, ideally in regulated industries (finance, healthcare, etc.).
- Strong expertise in CI/CD pipelines, programmable infrastructure, and automation.
- Proficiency in Linux troubleshooting, observability (Prometheus, Grafana, ELK), and multi-region disaster recovery.
- Security & compliance knowledge for regulated industries.
- Preferred: Experience with GKE, RKE, Rook-Ceph and certifications like CKA, CKAD.
Who You Are
- A Kubernetes expert who thrives on scalability, automation, and security.
- Passionate about optimizing infrastructure, CI/CD, and high-availability systems.
- Comfortable troubleshooting Linux, improving observability, and ensuring disaster recovery readiness.
- A problem solver who simplifies complexity and drives cloud-native adoption.
What You’ll Do
- Architect & automate Kubernetes solutions for air-gapped and multi-region clusters.
- Optimize CI/CD pipelines & cloud-native deployments.
- Work with open-source projects, selecting the right tools for the job.
- Educate & guide teams on modern cloud-native infrastructure best practices.
- Solve real-world scaling, security, and infrastructure automation challenges.
Why Join Us?
- Work on high-impact Kubernetes projects in regulated industries.
- Solve real-world automation & infrastructure challenges with cutting-edge tools.
- Grow in a team that values learning, open-source contributions, and innovation.
Job Title : AWS Data Engineer
Experience : 4+ Years
Location : Bengaluru (HSR – Hybrid, 3 Days WFO)
Notice Period : Immediate Joiner
💡 Role Overview :
We are looking for a skilled AWS Data Engineer to design, build, and scale modern data platforms. The role involves working with AWS-native services, Python, Spark, and DBT to deliver secure, scalable, and high-performance data solutions in an Agile environment.
🔥 Mandatory Skills :
Python, SQL, Spark, AWS (S3, Glue, EMR, Redshift, Athena, Lambda), DBT, ETL/ELT pipeline development, Airflow/Step Functions, Data Lake (Parquet/ORC/Iceberg), Terraform & CI/CD, Data Governance & Security
🚀 Key Responsibilities :
- Design, build, and optimize ETL/ELT pipelines using Python, DBT, and AWS services
- Develop and manage scalable data lakes on S3 using formats like Parquet, ORC, and Iceberg
- Build end-to-end data solutions using Glue, EMR, Lambda, Redshift, and Athena (see the Athena sketch after this list)
- Implement data governance, security, and metadata management using Glue Data Catalog, Lake Formation, IAM, and KMS
- Orchestrate workflows using Airflow, Step Functions, or AWS-native tools
- Ensure reliability and automation via CloudWatch, CloudTrail, CodePipeline, and Terraform
- Collaborate with data analysts and data scientists to deliver actionable insights
- Work in an Agile environment to deliver high-quality data solutions
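As a small example of the Athena work above, here is a minimal boto3 sketch that starts a query over an S3 data lake and polls until it finishes. The database, query, and result bucket are made-up names.

```python
# Minimal sketch: kick off an Athena query and poll for completion.
import time

import boto3

athena = boto3.client("athena")

qid = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) FROM events GROUP BY 1",  # illustrative
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print(f"query {qid} finished with state {state}")
```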
✅ Mandatory Skills :
- Strong Python (including AWS SDKs), SQL, Spark
- Hands-on experience with AWS data stack (S3, Glue, EMR, Redshift, Athena, Lambda)
- Experience with DBT and ETL/ELT pipeline development
- Workflow orchestration using Airflow / Step Functions
- Knowledge of data lake formats (Parquet, ORC, Iceberg)
- Exposure to DevOps practices (Terraform, CI/CD)
- Strong understanding of data governance and security best practices
- 4–7 years of experience in Data Engineering (minimum 3 years on AWS)
➕ Good to Have :
- Understanding of Data Mesh architecture
- Experience with platforms like Data.World
- Exposure to Hadoop / HDFS ecosystems
🤝 What We’re Looking For :
- Strong problem-solving and analytical skills
- Ability to work in a collaborative, cross-functional environment
- Good communication and stakeholder management skills
- Self-driven and adaptable to fast-paced environments
📝 Interview Process :
- Online Assessment
- Technical Interview
- Fitment Round
- Client Round
Required Skills
- 8+ years of DevOps / Cloud Engineering experience
- Strong hands-on experience with AWS services (EC2, S3, RDS, IAM, VPC, etc.)
- Expertise in Kubernetes (deployment, scaling, cluster management)
- Strong experience in PostgreSQL and AWS RDS administration
- Proficiency in Terraform for infrastructure automation
- Experience building and maintaining CI/CD pipelines (Jenkins, GitLab CI, etc.)
- Strong knowledge of Java (mandatory) and application deployment lifecycle
- Experience with Docker and containerization
- Solid understanding of networking, security, and system architecture
- Strong troubleshooting and problem-solving skills

Job Title: Consultant – Enterprise Application Development
Location: Bengaluru (Hybrid / On-site)
Engagement: Full-Time
Experience: 10 – 15 years preferred
About Us: Introducing VTT, a comprehensive mobility service provider catering to diverse multinational sectors like IT/ITES, KPO/BPO, Financial, Pharma, and more across Indian cities. Our “Managed Mobility Program” includes Fleet Management, Technology, Resource Management, Car Rentals, Logistics, and Special Services (Ambulance and PWD vehicles). Trusted by Fortune companies such as Cisco, Morgan Stanley, Wells Fargo, Google, PWC, and others, we pride ourselves on leveraging expertise and cutting-edge technology for safe, efficient, and uninterrupted service delivery. With a commitment to excellence, we ensure best-in-class standards for all our clients. Trip to school is now timely, comfy and secure! Our well-maintained fleet is here to enrich your child’s commute, keeping students punctual and safe thanks to GPS tracking paired with well-trained drivers. Our routes are carefully planned, our drivers attentive, and everything hassle-free.
Role Overview
We are looking for a seasoned Consultant with comprehensive expertise in enterprise-level application development across backend, frontend, mobile, DevOps, and cloud. The role demands a strong architectural mindset combined with hands-on execution. The Consultant will also play a critical role in understanding the current system architecture end-to-end, driving technical improvements, building the tech team foundation, and establishing structured technical documentation.
Key Responsibilities
• Understand the complete architecture of the existing systems, including web, mobile, backend services, and cloud environment.
• Provide hands-on leadership across backend, frontend, mobile, DevOps, and cloud infrastructure.
• Architect and optimize enterprise-grade applications for scalability, security, performance, and reliability.
• Conduct technical due diligence on current systems and propose improvements or refactoring plans.
• Build the foundation for the internal engineering team including hiring support, role definitions, and best-practice processes.
• Drive engineering workflows including coding standards, branching strategy, CI/CD, monitoring, and release management.
• Create comprehensive technical documentation covering system architecture, API specs, deployment playbooks, and SOPs.
• Review code and provide mentorship to engineering resources.
• Coordinate with product and business teams to translate requirements into technical design and actionable development roadmap.
• Troubleshoot and resolve deep-stack issues during development or production.
Technical Expertise Required
Backend
• Java / Spring Boot
• Node.js
• Microservices architecture
• REST / GraphQL
Frontend
• React.js
• Responsive UI, component-based architecture, state management
Mobile
• Flutter
• React Native
Cloud & DevOps
• AWS (ECS / EKS / EC2 / RDS / Lambda / S3 / IAM / CloudWatch etc.)
• CI/CD pipelines (GitHub Actions / Jenkins / GitLab CI or equivalent)
• Docker / Kubernetes
• Infrastructure-as-code (Terraform / CloudFormation)
Database
• MongoDB
• Knowledge of PostgreSQL / MySQL is an added advantage
Professional Attributes
• Strong architectural thinking with the ability to simplify complex systems.
• Excellent communication and stakeholder management skills.
• Ability to work independently without constant supervision.
• Capability to mentor, lead, and build an engineering team from scratch.
• Process-driven mindset with a focus on best practices and documentation.
Deliverables
• Architectural understanding and documentation of current systems.
• Recommendations and implementation plan for system upgrades or restructuring.
• Establishment of core engineering processes and standards.
• Hiring support and technical evaluation of developers.
About Us
We believe the future of software development is AI-native — where engineers operate at a higher level of abstraction and quality remains non-negotiable.
Incubyte is a software craft consultancy where the “how” of building software matters as much as the “what”.
We partner with companies of all sizes, from helping enterprises build, scale, and modernize to helping early-stage founders bring their ideas to life.
Our engineers operate in an AI-native development model, using AI as a collaborator across the SDLC to accelerate development while upholding the discipline of software craftsmanship. Guided by Software Craftsmanship and Extreme Programming practices, we build reliable, maintainable, and scalable systems with speed, without compromising quality. If this way of building software resonates with you, we’d like to talk.
Our Guiding Principles
These principles define how we work at Incubyte. They are non-negotiable.
Relentless Pursuit of Quality with Pragmatism
We build high-quality systems without losing sight of delivery.
Extreme Ownership
We take responsibility end-to-end for decisions, execution, and outcomes.
Proactive Collaboration
We collaborate closely, challenge each other, and solve problems together.
Active Pursuit of Mastery
We continuously improve our craft and raise our bar.
Invite, Give, and Act on Feedback
We seek, give, and act on feedback to get better every day.
Ensuring Client Success
We act as trusted partners and focus on real outcomes, not just output.
Job Description
This is a remote position.
Experience Level
This role is ideal for engineers with 3–15 years of experience and a strong background in building secure, scalable platforms.
We are looking for hands-on DevOps and Backend Engineers with real-world experience in handling production incidents, distributed systems, and modern infrastructure challenges.
What You’ll Do as a Software Craftsperson
- Design and document real-world DevOps and backend scenarios based on production incidents such as outages, scaling challenges, and secure deployments
- Translate real engineering experiences into benchmark tasks that contribute to training next-generation AI systems
- Contribute to building secure, scalable, Kubernetes-native architectures across modern infrastructure environments
- Work across critical engineering domains including CI/CD pipelines, observability, identity & access management, infrastructure-as-code, and backend services
- Collaborate with internal teams to design and simulate realistic engineering workflows and system behaviors
- Apply practical engineering judgment to model distributed systems challenges and improve system resilience and reliability
Requirements
What You’ll Bring
5–15 years of experience in DevOps and Backend Engineering with a strong foundation in building secure, scalable systems.
Strong hands-on expertise in DevOps and backend technologies including:
- Kubernetes, Terraform, and CI/CD pipelines
- Tools such as k9s, k3s (GitLab CI preferred)
- Backend technologies such as Go, Python, or Java
- Experience with Docker, gRPC, and Kubernetes-native services
Demonstrated experience working with secure, offline or air-gapped deployments (highly preferred)
Familiarity with distributed systems and backend architecture, with exposure to ML or distributed pipelines being a plus.
Hands-on experience across multiple core functional areas, with exposure to at least five of the following:
- Identity & Access Management
- Observability (Prometheus + Grafana)
- CI/CD Pipelines
- Keycloak
- GitLab CI
- Terraform OSS
- Kubernetes ecosystem tools
Strong problem-solving ability with real-world experience in handling production systems, incidents, and infrastructure challenges
Ability to work across multiple layers of the stack, from infrastructure to backend services, while ensuring scalability, reliability, and security
Benefits
Life at Incubyte
We are a remote-first company with structured flexibility. Teams commit to shared rhythms during core hours, ensuring smooth collaboration while maintaining autonomy. Twice a year, we come together in person for a co-working sprint and once a year for a retreat - with all travel expenses covered.
Our environment is built for crafters: experimenting with real-world systems, solving complex infrastructure challenges, and contributing to cutting-edge AI initiatives. We are all lifelong learners, and our work is our passion.
Perks
Dedicated learning & development budget
Sponsorship for conference talks
Comprehensive medical & term insurance
Employee-friendly leave policies
Home Office fund
Senior Data (Platform) Engineer
Location: Hyderabad | Department: Technology, Data
About the Role
Are you passionate about building reliable, scalable data platforms that make analytics and AI development easier? As a Senior Data Platform Engineer, you will be hands-on in building, operating, and improving our core data platform and AI/LLM enablement tooling.
You’ll focus on infrastructure, orchestration, CI/CD, and reusable frameworks that support analytics engineering and AI-driven use cases. You’ll work closely with Analytics Engineering and Insights teams and support other departments as they integrate with our data systems.
What You'll Do
Data Platform & Infrastructure
- Build, deploy, and operate cloud infrastructure for data and AI workloads using Infrastructure as Code (Terraform).
- Provision and manage cloud resources across development, staging, and production environments.
- Develop and maintain CI/CD pipelines for data transformations, orchestration workflows, and platform services.
- Operate and scale containerized workloads on Kubernetes, including Airflow, internal APIs, and AI/LLM services.
- Troubleshoot and resolve infrastructure, pipeline, and orchestration failures to ensure platform reliability.
- Maintain and support existing ML services and pipelines to ensure stability and reliability (No expectation to design or develop new ML models or training pipelines).
- Continuously monitor and optimize platform performance and cost.
Framework, Tooling and Enablement
- Build and maintain reusable frameworks and patterns for dbt, Airflow, cloud data warehouses (Snowflake, BigQuery, Redshift, Databricks, etc.), and internal data and AI APIs (see the Airflow sketch after this list)
- Build and support infrastructure and pipelines for AI/LLM-based use cases, including orchestration, integration, and serving.
- Improve developer experience for Analytics Engineering and Insights teams by reducing friction in local development, deployments, and production workflows.
- Create and maintain technical documentation and examples to support self-service analytics and data development.
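To ground the Airflow framework work above, here is a minimal sketch of a daily DAG that runs a dbt build via a BashOperator (assuming Airflow 2.4+ for the `schedule` argument). The project path and dbt target are placeholders.

```python
# Minimal sketch: a daily Airflow DAG that runs a dbt build.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_daily_build",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    dbt_build = BashOperator(
        task_id="dbt_build",
        bash_command="cd /opt/dbt/project && dbt build --target prod",  # placeholder path/target
    )
```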
What You’ll Need
Technical Skills & Experience
- 5+ years of experience in data engineering, platform engineering, or similar hands-on roles.
- Strong programming skills in Python and SQL.
- Hands-on experience with:
  - Terraform
  - Airflow
  - dbt
  - Kubernetes
  - Cloud platforms (AWS, Google Cloud, or Microsoft Azure)
  - CI/CD pipelines (GitHub Actions, GitLab CI, CircleCI, etc.)
  - Cloud data warehouses (Snowflake, BigQuery, Redshift, Databricks, etc.)
- Strong understanding of analytical data models and how analytics teams consume data.
- Experience integrating and operating LLM-based pipelines and services (not model training).
Soft Skills & Collaboration
- Strong problem-solving skills and ability to debug complex platform issues.
- Strong preference for declarative development, with the ability to clearly separate what a system should do from how it is implemented.
- Clear communicator who can work effectively with both technical and non-technical stakeholders.
- Pragmatic, ownership-driven mindset with a focus on reliability and simplicity.
Why Join Us?
We welcome people from all backgrounds who seek the opportunity to help build a future where we connect the dots for international property payments. If you have the curiosity, passion, and collaborative spirit, work with us, and let’s move the world of PropTech forward, together.
Redpin, Currencies Direct and TorFX are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, colour, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.
Key Responsibilities:
- ☁️ Manage cloud infrastructure and automation on AWS, Google Cloud (GCP), and Azure.
- 🖥️ Deploy and maintain Windows Server environments, including Internet Information Services (IIS).
- 🐧 Administer Linux servers and ensure their security and performance.
- 🚀 Deploy .NET applications (ASP.Net, MVC, Web API, WCF, etc.) using Jenkins CI/CD pipelines.
- 🔗 Manage source code repositories using GitLab or GitHub.
- 📊 Monitor and troubleshoot cloud and on-premises server performance and availability.
- 🤝 Collaborate with development teams to support application deployments and maintenance.
- 🔒 Implement security best practices across cloud and server environments.
Required Skills:
- ☁️ Hands-on experience with AWS, Google Cloud (GCP), and Azure cloud services.
- 🖥️ Strong understanding of Windows Server administration and IIS.
- 🐧 Proficiency in Linux server management.
- 🚀 Experience in deploying .NET applications and working with Jenkins for CI/CD automation.
- 🔗 Knowledge of version control systems such as GitLab or GitHub.
- 🛠️ Good troubleshooting skills and ability to resolve system issues efficiently.
- 📝 Strong documentation and communication skills.
Preferred Skills:
- 🖥️ Experience with scripting languages (PowerShell, Bash, or Python) for automation.
- 📦 Knowledge of containerization technologies (Docker, Kubernetes) is a plus.
- 🔒 Understanding of networking concepts, firewalls, and security best practices.
We are looking for a DevOps Engineer with hands-on experience in managing production infrastructure using AWS, Kubernetes, and Terraform. The ideal candidate will have exposure to CI/CD tools and queueing systems, along with a strong ability to automate and optimize workflows.
Responsibilities:
* Manage and optimize production infrastructure on AWS, ensuring scalability and reliability.
* Deploy and orchestrate containerized applications using Kubernetes.
* Implement and maintain infrastructure as code (IaC) using Terraform.
* Set up and manage CI/CD pipelines using tools like Jenkins or Chef to streamline deployment processes.
* Troubleshoot and resolve infrastructure issues to ensure high availability and performance.
* Collaborate with cross-functional teams to define technical requirements and deliver solutions.
* Nice-to-have: Manage queueing systems like Amazon SQS, Kafka, or RabbitMQ (see the SQS sketch below).
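Since queue management comes up above, here is a minimal boto3 sketch of the basic SQS consume loop: long-poll, process, delete. The queue URL is a placeholder.

```python
# Minimal sketch: long-poll an SQS queue and delete messages once processed.
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

resp = sqs.receive_message(
    QueueUrl=QUEUE_URL,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,  # long polling cuts down on empty receives
)

for msg in resp.get("Messages", []):
    print(f"processing: {msg['Body']}")
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```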
Requirements:
* 4+ years of experience with AWS, including practical exposure to its services in production environments.
* Demonstrated expertise in Kubernetes for container orchestration.
* Proficiency in using Terraform for managing infrastructure as code.
* Exposure to at least one CI/CD tool, such as Jenkins or Chef.
* Nice-to-have: Experience managing queueing systems like SQS, Kafka, or RabbitMQ.
🚨 We’re Building a “Top 1% Engineering Org”
We’re building a high-talent-density, AI-first R&D organization from scratch — inside a publicly listed company undergoing a full-scale transformation.
Think:
→ Rewriting legacy systems into AI-native architectures
→ Embedding LLMs + Agentic AI into core workflows
→ Reimagining platforms, infra, and data systems for the next decade
This is the kind of shift you’d expect from Google, Microsoft, or Meta —
Except you get to build it from day 0 → scale it globally.
About the Role / Team
We are building a next-generation AI-first R&D organization in Bengaluru, focused on solving complex problems across LLMs, Agentic AI systems, distributed computing, and enterprise-scale architectures.
This initiative is part of a publicly listed global company investing heavily in AI-driven transformation, re-architecting its platforms into intelligent, autonomous systems powered by large language models, workflows, and decision engines.
You will be working on:
- Agentic AI systems & LLM-powered workflows
- Distributed, scalable backend systems
- Enterprise-grade AI platforms
- Automation-first engineering environments
🚀 The Mandate
Own and evolve the technical backbone of an AI-first enterprise platform.
You will define architecture across LLM-powered systems, distributed services, and data platforms — and lead critical transformations from legacy → AI-native systems.
🧩 What You’ll Do
- Architect large-scale distributed systems powering AI-driven workflows
- Lead 0→1 and 1→N platform builds (LLM integrations, agentic systems, orchestration layers)
- Redesign legacy systems into scalable, modular, AI-native architectures
- Drive system design excellence across teams (APIs, infra, observability, reliability)
- Make high-stakes decisions on trade-offs (latency, cost, scalability, model performance)
- Mentor senior engineers and influence engineering culture/org standards
- Partner with product, data, and leadership on long-term technical strategy
🧠 What We’re Looking For
- Proven track record building high-scale backend or platform systems
- Deep expertise in distributed systems, microservices, cloud (AWS/GCP/Azure)
- Strong exposure to data systems, infrastructure, and real-time architectures
- Experience or strong interest in LLMs, GenAI, or AI system design
- Exceptional system design, abstraction, and problem-solving ability
- High ownership mindset — you think in terms of systems, not tickets
- Strong coding skills in Python / Java / Go / Node.js
- Solid understanding of data structures, system design basics, and backend architecture
- Experience building scalable APIs and services
- Familiarity or curiosity around AI/LLMs, async systems, or event-driven design
- Strong debugging, problem-solving, and ownership mindset
- Solve hard system problems (latency, scale, reliability)
- Drive cross-team technical decisions and standards
- Mentor senior engineers and provide technical leadership across the org
- Design large-scale distributed systems and backend platforms
- Expertise in system design, scalability, and performance optimization
Nice to Have
- Experience integrating LLMs, vector databases, or AI pipelines
- Contributions to architecture at scale
- Experience with Agentic AI / LLM orchestration frameworks
- Background in product engineering or platform companies
- Exposure to global-scale systems (millions of users / high throughput)
🔥 What Sets You Apart
- Built platforms used by millions of users / high-throughput systems
- Experience with event-driven systems, stream processing, or infra platforms
- Prior work on AI/ML platforms, model serving, or intelligent systems
Job Details
- Job Title: Senior Backend Engineer
- Industry: SAAS
- Function – Information Technology
- Experience Required: 5-8 years
- Working Days: 6 days a week (5 days in-office, Saturdays WFH)
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Preferred Skills: AWS, NodeJS, RESTful APIs, NoSQL
Criteria
· Minimum 5+ years in backend engineering with strong system design expertise
· Experience building scalable systems from scratch
· Expert-level proficiency in Node.js
· Deep understanding of distributed systems
· Strong NoSQL design skills
· Hands-on AWS cloud experience
· Proven leadership and mentoring capability
· Preferred: candidates from SaaS/Software/IT Services startups or scale-up companies
Job Description
The Role:
What You’ll Build:
1. System Architecture & Design
● Architect highly scalable backend systems from the ground up
● Define technology choices: frameworks, databases, queues, caching layers
● Evaluate microservices vs monoliths based on product stage
● Design REST, GraphQL, and real-time WebSocket APIs
● Build event-driven systems for asynchronous processing
● Architect multi-tenant systems with strict data isolation
● Maintain architectural documentation and technical specs
2. Core Backend Services
● Build high-performance APIs for 3D content, XR experiences, analytics, and user interactions
● Create 3D asset processing pipelines for uploads, conversions, and optimization
● Develop distributed job workers for CPU/GPU-intensive tasks
● Build authentication/authorization systems (RBAC)
● Implement billing, subscription, and usage metering
● Build secure webhook systems and third-party integration APIs (see the HMAC sketch after this list)
● Create real-time collaboration features via WebSockets/SSE
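As an illustration of the secure-webhook item above, here is a minimal Python sketch of HMAC-SHA256 signature verification, a common pattern for authenticating webhook deliveries. The shared secret and header handling are illustrative.

```python
# Minimal sketch: verify an incoming webhook body against an HMAC-SHA256
# signature header.
import hashlib
import hmac

SECRET = b"example-shared-secret"  # in practice, loaded from a secrets manager

def verify_webhook(body: bytes, signature_header: str) -> bool:
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(expected, signature_header)

# Self-check with a signature computed the same way a sender would.
payload = b'{"event":"order.created"}'
sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
assert verify_webhook(payload, sig)
```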
3. Data Architecture & Databases
● Design scalable schemas for 3D metadata, XR sessions, and analytics
● Model complex product catalogs with variants and hierarchies
● Implement Redis-based caching strategies (see the cache-aside sketch after this list)
● Build search and indexing systems (Elasticsearch/Algolia)
● Architect ETL pipelines and data warehouses
● Implement sharding, partitioning, and replication strategies
● Design backup, restore, and disaster recovery workflows
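To make the caching item above concrete, here is a minimal cache-aside sketch with redis-py: check the cache, fall back to the database, and populate with a TTL. The fetch function and key scheme are hypothetical.

```python
# Minimal sketch of the cache-aside pattern with redis-py.
import json

import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def fetch_product_from_db(product_id: str) -> dict:
    # stand-in for a real database query
    return {"id": product_id, "name": "demo product"}

def get_product(product_id: str) -> dict:
    key = f"product:{product_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)                 # cache hit
    product = fetch_product_from_db(product_id)   # cache miss: go to the DB
    r.setex(key, 300, json.dumps(product))        # populate with a 5-minute TTL
    return product

print(get_product("sku-123"))
```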
4. Scalability & Performance
● Build systems designed for 10x–100x traffic growth
● Implement load balancing, autoscaling, and distributed processing
● Optimize API response times and database performance
● Implement global CDN delivery for heavy 3D assets
● Build rate limiting, throttling, and backpressure mechanisms (see the token-bucket sketch after this list)
● Optimize storage and retrieval of large 3D files
● Profile and improve CPU, memory, and network performance
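For the rate-limiting item above, here is a minimal single-process token-bucket sketch; a production version would typically keep bucket state in a shared store such as Redis so limits hold across instances.

```python
# Minimal sketch of a token-bucket rate limiter (single process only).
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)  # 5 req/s steady, bursts of 10
print([bucket.allow() for _ in range(12)].count(True))  # typically 10 allowed
```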
5. Infrastructure & DevOps
● Architect AWS infrastructure (EC2, S3, Lambda, RDS, ElastiCache)
● Build CI/CD pipelines for automated deployments and rollbacks
● Use IaC tools (Terraform/CloudFormation) for infra provisioning
● Set up monitoring, logging, and alerting systems
● Use Docker + Kubernetes for container orchestration
● Implement security best practices for data, networks, and secrets
● Define disaster recovery and business continuity plans
6. Integration & APIs
● Build integrations with Shopify, WooCommerce, Magento
● Design webhook systems for real-time events
● Build SDKs, client libraries, and developer tools
● Integrate payment gateways (Stripe, Razorpay)
● Implement SSO and OAuth for enterprise customers
● Define API versioning and lifecycle/deprecation strategies
7. Data Processing & Analytics
● Build analytics pipelines for engagement, conversions, and XR performance
● Process high-volume event streams at scale
● Build data warehouses for BI and reporting
● Develop real-time dashboards and insights systems
● Implement analytics export pipelines and platform integrations
● Enable A/B testing and experimentation frameworks
● Build personalization and recommendation systems
Technical Stack:
1. Backend Languages & Frameworks
● Primary: Node.js (Express, NestJS), Python (FastAPI, Django)
● Secondary: Go, Java/Kotlin (Spring)
● APIs: REST, GraphQL, gRPC
2. Databases & Storage
● SQL: PostgreSQL, MySQL
● NoSQL: MongoDB, DynamoDB
● Caching: Redis, Memcached
● Search: Elasticsearch, Algolia
● Storage/CDN: AWS S3, CloudFront
● Queues: Kafka, RabbitMQ, AWS SQS
3. Cloud & Infrastructure:
● Cloud: AWS (primary), GCP/Azure (nice to have)
● Compute: EC2, Lambda, ECS, EKS
● Infrastructure: Terraform, CloudFormation
● CI/CD: GitHub Actions, Jenkins, CircleCI
● Containers: Docker, Kubernetes
4. Monitoring & Operations
● Monitoring: Datadog, New Relic, CloudWatch
● Logging: ELK Stack, CloudWatch Logs
● Error Tracking: Sentry, Rollbar
● APM tools
5. Security & Auth
● Auth: JWT, OAuth 2.0, SAML
● Secrets: AWS Secrets Manager, Vault
● Security: Encryption (at rest/in transit), TLS/SSL, IAM
What We’re Looking For:
1. Must-Haves
● 5+ years in backend engineering with strong system design expertise
● Experience building scalable systems from scratch
● Expert-level proficiency in at least one backend stack (Node, Python, Go, Java)
● Deep understanding of distributed systems and microservices
● Strong SQL/NoSQL design skills with performance optimization
● Hands-on AWS cloud experience
● Ability to write high-quality production code daily
● Experience building and scaling RESTful APIs
● Strong understanding of caching, sharding, horizontal scaling
● Solid security and best-practice implementation experience
● Proven leadership and mentoring capability
2. Highly Desirable
● Experience with large file processing (3D, video, images)
● Background in SaaS, multi-tenancy, or e-commerce
● Experience with real-time systems (WebSockets, streams)
● Knowledge of ML/AI infrastructure
● Experience with HA systems, DR planning
● Familiarity with GraphQL, gRPC, event-driven systems
● DevOps/infrastructure engineering background
● Experience with XR/AR/VR backend systems
● Open-source contributions or technical writing
● Prior senior technical leadership experience
Technical Challenges You’ll Solve:
● Designing large-scale 3D asset processing pipelines
● Serving XR content globally with ultra-low latency
● Scaling from thousands to millions of daily requests
● Efficiently handling CPU/GPU-heavy workloads
● Architecting multi-tenancy with complete data isolation
● Managing billions of analytics events at scale
● Building future-proof APIs with backward compatibility
Why company:
● Architectural Ownership: Build foundational systems from scratch
● Deep Technical Work: Solve distributed systems and scaling challenges
● Hands-On Impact: Design and code mission-critical infrastructure
● Diverse Problems: APIs, infra, data, ML, XR, asset processing
● Massive Scale Opportunity: Build systems for exponential growth
● Modern Stack and best practices
● Product Impact: Your architecture directly powers millions of users
● Leadership Opportunity: Shape engineering culture and direction
● Learning Environment: Stay at the forefront of backend engineering
● Backed by AWS, Microsoft, Google
Location & Work Culture:
● Location: Bengaluru
● Schedule: 6 days a week (5 days in-office, Saturdays WFH)
● Culture: Builder mindset, strong ownership, technical excellence
● Team: Small, highly skilled backend and infra team
● Resources: AWS credits, latest tooling, learning budget
Job Title: DevOps Engineer
Location: Delhi, Arjan Garh
Job Type: Full-Time
IMMEDIATE JOINERS REQUIRED
About Us:
Timble is a forward-thinking organization dedicated to leveraging cutting-edge technology to solve real-world problems. Our mission is to drive innovation and create impactful solutions through artificial intelligence and machine learning.
About the Role
We are looking for a high-ownership Senior DevOps Engineer to architect and maintain the mission-critical infrastructure supporting our global algorithmic trading operations. You will be the bridge between development and live trading, ensuring ultra-low-latency performance and maximum system availability.
Key Responsibilities
- Infrastructure Architecture: Design scalable, fault-tolerant systems for high-frequency trading environments.
- Performance Optimization: Tune Linux servers and Python environments for maximum speed and efficiency.
- Incident Management: Lead real-time response for live trading systems, performing RCA and preventive fixes.
- Automation & CI/CD: Build and enhance robust pipelines using Docker, Jenkins, and Ansible.
- Proactive Monitoring: Implement advanced logging and alerting (Prometheus/Grafana) to ensure high uptime.
- Database Admin: Manage relational databases and write optimized SQL for operational reporting.
- Mentorship: Guide junior DevOps members and maintain rigorous system documentation.
Technical Requirements
- OS/Scripting: Advanced Linux Admin and expert-level Python scripting.
- IaC & Tools: Hands-on experience with Ansible, Terraform, and Docker.
- CI/CD: Proficiency in Jenkins or GitLab CI.
- Data: Strong SQL skills with experience in performance tuning.
- Education: B.Tech/M.Tech in Computer Science or related engineering field.
Job Title : Senior DevOps Engineer (Only Mumbai Candidates)
Experience : 5+ Years
Location : Mumbai (On-site)
Notice Period : Immediate to 15 Days
Interview Process : 1 Internal Round + 1 Client Round
Mandatory Skills :
Multi-Cloud (AWS/GCP/Azure – any two), Kubernetes, Terraform, Helm (writing Helm Charts), CI/CD (GitLab CI/Jenkins/GitHub Actions), GitOps (ArgoCD/FluxCD), Multi-tenant deployments, Stateful microservices on Kubernetes, Enterprise Linux.
Role Overview :
We are looking for a Senior DevOps Engineer to design, build, and manage scalable cloud infrastructure and DevOps pipelines for product-based platforms.
The ideal candidate should have strong experience with Kubernetes, Terraform, Helm Charts, CI/CD, and GitOps practices.
Key Responsibilities :
- Design and manage scalable cloud infrastructure across AWS/GCP/Azure.
- Deploy and manage microservices on Kubernetes clusters.
- Build and maintain Infrastructure as Code using Terraform and Helm (see the Terraform plan sketch at the end of this posting).
- Implement CI/CD pipelines using GitLab CI, Jenkins, or GitHub Actions.
- Implement GitOps workflows using ArgoCD or FluxCD.
- Ensure secure, scalable, and reliable DevOps architecture.
- Implement monitoring and logging using Prometheus, Grafana, or ELK.
Good to Have :
- Packer, OpenShift/Rancher/K3s, On-prem deployments, PaaS experience, scripting (Bash/Python), Terraform modules.
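As a hedged illustration of the Terraform work described above (a sketch, not the team's actual tooling), the following Python snippet wraps the terraform CLI and summarizes planned changes from its JSON plan output. It assumes terraform is on PATH and is run from an initialized module:

import json
import subprocess

# Write a plan file, then render it as JSON (both standard terraform CLI
# features). Assumes `terraform init` has already been run in this directory.
subprocess.run(["terraform", "plan", "-out=tfplan"], check=True)
show = subprocess.run(["terraform", "show", "-json", "tfplan"],
                      check=True, capture_output=True, text=True)
plan = json.loads(show.stdout)

# Count planned create/update/delete actions across all resource changes.
counts = {"create": 0, "update": 0, "delete": 0}
for rc in plan.get("resource_changes", []):
    for action in rc["change"]["actions"]:
        if action in counts:
            counts[action] += 1
print(counts)  # e.g. {'create': 3, 'update': 1, 'delete': 0}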
Key Responsibilities
DevOps Strategy & Leadership
- Define and execute the end-to-end DevOps strategy for high-frequency trading and fintech platforms.
- Lead, mentor, and scale a high-performing DevOps team focused on automation, reliability, and performance.
- Partner closely with engineering and product leaders to ensure infrastructure strategy supports business and technical goals.
CI/CD & Infrastructure Automation
- Architect, implement, and optimize enterprise-grade CI/CD pipelines for ultra-low-latency trading systems.
- Drive Infrastructure as Code (IaC) adoption using Terraform, Helm, Kubernetes, and advanced automation toolsets.
- Establish robust release management, deployment workflows, and versioning best practices for mission‑critical environments.
Cloud & On‑Prem Infrastructure Management
- Design and manage hybrid infrastructures across AWS, GCP, and on-premise data centers ensuring high availability and fault tolerance.
- Implement sophisticated networking strategies for low-latency workloads including routing optimization and performance tuning.
- Lead multi‑cloud scalability, cost optimization, and environment standardization initiatives.
Performance Monitoring & Optimization
- Oversee large-scale monitoring systems using Prometheus, Grafana, ELK, and related observability tools.
- Implement predictive alerting, automated remediation, and system‑wide health checks for zero‑downtime operations.
- Conduct root-cause analyses and performance tuning for systems processing millions of transactions per second.
Security & Compliance
- Champion DevSecOps practices and embed security across the entire development and deployment lifecycle.
- Ensure adherence to financial regulatory standards (SEBI and global frameworks) with strong audit and compliance mechanisms.
- Lead security automation efforts, vulnerability management, and advanced IAM policy implementation.
Required Skills & Qualifications
- 10+ years of DevOps experience, with 5+ years in a leadership capacity.
- Deep hands-on expertise in CI/CD tools such as Jenkins, GitLab CI/CD, and ArgoCD.
- Strong command of AWS, GCP, and hybrid cloud infrastructures.
- Expert-level knowledge of Kubernetes, Docker, and large-scale container orchestration.
- Advanced proficiency in Terraform, Helm, and overall IaC workflows.
- Strong Linux administration, networking fundamentals (TCP/IP, DNS, Firewalls), and system internals.
- Experience with monitoring and observability platforms (Prometheus, Grafana, ELK).
- Excellent scripting skills in Python, Bash, or Go for automation and tooling.
- Deep understanding of security principles, encryption, IAM, and compliance frameworks.
Good to Have
- Experience with ultra-low-latency or high-frequency trading systems.
- Knowledge of FIX protocol, FPGA acceleration, or network‑level optimizations.
- Familiarity with Redis, Nginx, or other high‑throughput systems.
- Exposure to micro‑second‑level performance tuning or network acceleration technologies.
Why Join Us?
- Be part of a team that consistently raises the bar and delivers exceptional engineering outcomes.
- A culture where innovation, ownership, and bold thinking are valued.
- Exceptional growth opportunities—ideal for someone who thrives in fast-paced, high-impact environments.
- Build systems that influence markets and redefine the fintech landscape.
This isn’t just a role—it’s a challenge, a platform, and a proving ground.
Ready to step up? Apply now.
Role Overview:
We are looking for a skilled DevOps Engineer to join our team. You will be responsible for managing and automating the deployment, monitoring, and scaling of our applications, ensuring high availability, security, and performance. The ideal candidate is passionate about automation, CI/CD, and cloud infrastructure.
Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines for development, testing, and production environments.
- Manage cloud infrastructure (AWS, Azure, GCP, or others) and ensure scalability, reliability, and security.
- Automate deployment, configuration management, and infrastructure provisioning using tools like Terraform, Ansible, or Chef.
- Monitor application performance and infrastructure health using tools like Prometheus, Grafana, ELK Stack, or Datadog.
- Collaborate with development and QA teams to streamline workflows and resolve deployment issues.
- Implement security best practices in pipelines, infrastructure, and cloud environments.
- Maintain version control and manage release cycles.
- Troubleshoot and resolve production issues efficiently.
Required Skills & Qualifications:
- Bachelor’s degree in Computer Science, IT, or related field.
- Proven experience in DevOps, system administration, or cloud engineering.
- Strong knowledge of CI/CD tools (Jenkins, GitLab CI/CD, CircleCI, etc.).
- Hands-on experience with containerization (Docker, Kubernetes).
- Experience with cloud platforms (AWS, Azure, or GCP).
- Scripting skills (Python, Bash, or PowerShell).
- Knowledge of infrastructure as code (Terraform, CloudFormation).
- Familiarity with monitoring and logging tools.
- Strong problem-solving, communication, and teamwork skills.
Preferred Qualifications:
- Experience with microservices architecture.
- Knowledge of networking, load balancing, and firewalls.
- Exposure to Agile/Scrum methodologies.
What We Offer:
- Competitive salary
- Flexible working hours and remote options.
- Learning and development opportunities.
- Collaborative and inclusive work environment.
Hiring: Cloud Engineer – MLOps Platform 🚨
📍 Location: Bangalore
🧠 Experience: 5–8 Years
We are looking for an experienced Cloud Engineer to support ML teams and drive end-to-end automation for model deployment across modern cloud platforms.
🔹 Tech Stack:
Azure | Databricks | AKS | ARO | Terraform | MLflow | CI/CD
🔹 Key Responsibilities:
• Build and maintain CI/CD and Continuous Training (CT) pipelines using Azure DevOps, GitHub Actions, or Jenkins.
• Deploy Databricks jobs, MLflow models, and microservices on AKS / ARO environments (see the MLflow sketch after the skills list).
• Automate infrastructure using Terraform and GitOps practices.
• Manage Databricks workspaces, AKS clusters, and networking configurations.
• Implement monitoring, logging, and alerting systems for ML workloads.
• Ensure cloud security, governance, and cost optimization best practices.
🔹 Required Skills:
✔ Strong hands-on experience with Azure, AKS, ARO, and Databricks
✔ Experience with MLflow and Kubernetes-based deployments
✔ Proficiency in Python and Bash / PowerShell scripting
✔ Strong understanding of cloud security, infrastructure automation, and distributed systems
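Purely as a hedged sketch of the MLflow piece of this stack (the tracking URI, experiment name, and model are assumptions, not the employer's actual setup), a minimal tracking example might look like this; it requires the mlflow and scikit-learn packages:

import mlflow
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

mlflow.set_tracking_uri("http://localhost:5000")  # assumed local tracking server
mlflow.set_experiment("ct-pipeline-demo")         # hypothetical experiment name

X, y = make_classification(n_samples=200, random_state=0)  # toy data
with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")  # stored for later serving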
Key Responsibilities
- Design, implement, and maintain highly available infrastructure on AWS.
- Automate infrastructure provisioning using Terraform (Infrastructure as Code).
- Define and monitor SLIs, SLOs, and error budgets to improve service reliability (see the error-budget sketch after this list).
- Build and manage CI/CD pipelines to enable safe and frequent deployments.
- Implement robust monitoring, alerting, and logging solutions.
- Perform incident response, root cause analysis (RCA), and postmortems.
- Improve system resilience through automation and self-healing mechanisms.
- Optimize cloud resource utilization and cost (FinOps awareness).
- Collaborate with development teams to improve application reliability.
- Manage containerized workloads using Docker and Kubernetes (EKS preferred).
- Implement security and compliance best practices across infrastructure.
- Maintain operational runbooks and documentation.
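To make the SLI/SLO/error-budget responsibility concrete, here is a minimal, illustrative Python sketch of the underlying arithmetic; the SLO target and downtime figures are examples only:

def error_budget_minutes(slo_target: float, window_days: int = 30) -> float:
    """Allowed downtime, in minutes, for a given SLO over a rolling window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo_target)

def budget_remaining(slo_target: float, downtime_min: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means breached)."""
    budget = error_budget_minutes(slo_target, window_days)
    return (budget - downtime_min) / budget

# A 99.9% SLO allows ~43.2 minutes of downtime per 30-day window.
print(f"{error_budget_minutes(0.999):.1f} min budget")
print(f"{budget_remaining(0.999, downtime_min=12.0):.0%} remaining")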
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 7–8 years of experience in SRE, DevOps, or Production Engineering.
- Strong hands-on experience with AWS services.
- Proven experience with Terraform for infrastructure automation.
- Experience building CI/CD pipelines (GitHub Actions, Jenkins, or similar).
- Strong scripting skills (Python, Bash, or Shell).
- Experience with Linux system administration.
- Hands-on experience with monitoring and observability tools.
- Good understanding of networking and cloud security fundamentals.
- Experience with Git and branching strategies
Job opportunity for Developer - Python Full Stack with Siemens at Bangalore.
Interview Process:
1st round of interview - F2F (in-Person)-Technical
2nd round of interview – F2F /Virtual Interview - Technical
3rd round of interview – Virtual Interview – Technical + HR
Job Title / Designation: Developer -Python Full Stack
Employment Type: Full Time, Permanent
Location: Bangalore
Experience: 3-5 Years
Job Description: Developer - Python Full Stack
We are looking for a Python full-stack expert with proven 5+ years of experience in developing automation solutions in Linux-based environments. You should be capable of developing Python-based web applications or automation solutions, with excellent knowledge of DB handling and decent knowledge of Kubernetes-based deployment environments.
Required Skills:
- Solid experience in Python back-end technology
- Sound experience in web application development
- Decent knowledge and experience in UI development using JavaScript, React/Angular or related tech stack.
- Strong understanding of software design patterns and testing principles
- Ability to learn and adapt to working with multiple programming languages.
- Experience with Docker, ArgoCD, Kubernetes, and Terraform
- Understanding of ETL processes to extract data from different data sources is a plus.
- Proven experience in Linux development environments using Python.
- Excellent knowledge of interacting with database systems (SQL, NoSQL) and web services (REST)
- Experienced in establishing an optimized CI/CD environment relevant to the project.
- Good knowledge of repository management tools like Git, Bitbucket, etc.
- Excellent debugging skills/strategies.
- Excellent communication skills
- Experienced in working in an Agile environment.
Nice to have
- Good knowledge of the Eclipse IDE; experience developing add-ons/plugins on the Eclipse platform.
- Knowledge of 93K Semiconductor test platforms
- Good know-how of agile management tools like Jira, Azure DevOps.
- Good knowledge of RHEL
- Knowledge of JIRA administration
We are looking for a skilled DevOps Engineer with hands-on experience in cloud platforms, CI/CD pipelines, container orchestration, and infrastructure automation. The ideal candidate is someone who loves solving reliability challenges, automating everything, and ensuring seamless delivery across environments.
Key Responsibilities
- Design, implement, and maintain CI/CD pipelines using GitHub Actions, Jenkins, and GitHub.
- Manage and optimize infrastructure on AWS/GCP, ensuring scalability, security, and high availability.
- Deploy and manage containerized applications using Docker and Kubernetes.
- Build, automate, and manage infrastructure as code using Terraform.
- Configure and manage automation tools and workflows using Ansible.
- Monitor system performance, troubleshoot production issues, and ensure smooth operations.
- Implement best practices for code management, release processes, and DevOps standards.
- Collaborate closely with development teams to improve build pipelines and deployment workflows.
- Write scripts in Python/Bash to automate operational tasks.
Required Skills & Experience
- 2+ years of hands-on experience as a DevOps Engineer or in a similar role.
- Strong expertise in AWS or GCP cloud services.
- Solid understanding of Kubernetes (deployment, scaling, service mesh, packaging).
- Proficiency with Terraform for infrastructure automation.
- Experience with Git, GitHub, and GitHub Actions for source control and CI/CD.
- Good knowledge of Jenkins pipelines and automation.
- Hands-on experience with Ansible for configuration management.
- Strong scripting skills using Python or Bash.
- Understanding of monitoring, logging, and security best practices.
Hiring: AWS DevOps Developer
📍 Location: Bangalore
🧑‍💻 Experience: 4–7 Years
📌 Job Summary
We are looking for a skilled AWS DevOps Developer with strong experience in AWS cloud infrastructure, CI/CD automation, containerization, and Infrastructure as Code. The ideal candidate should have hands-on experience building scalable and secure cloud environments.
🛠 Required Technical Skills
☁️ AWS Services
- Amazon EC2
- Amazon S3
- IAM
- VPC
- Amazon EKS
- RDS
- Route 53
- CloudWatch
- AWS Lambda
🔄 DevOps & CI/CD
- Jenkins (Pipelines, Shared Libraries)
- Git / GitHub
- Maven / Build tools
- CI/CD pipeline design & implementation
🐳 Containers & Orchestration
- Docker
- Kubernetes (EKS preferred)
- Helm
🏗 Infrastructure as Code
- Terraform
- Ansible
📊 Monitoring & Logging
- CloudWatch
- Prometheus
- Grafana
📋 Roles & Responsibilities
- Design and implement scalable AWS infrastructure
- Build and maintain CI/CD pipelines
- Deploy containerized applications using Docker & Kubernetes
- Automate infrastructure provisioning using Terraform
- Implement monitoring and alerting solutions (see the Prometheus query sketch after this list)
- Ensure security, compliance, and cost optimization
- Troubleshoot production issues and improve system reliability
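As a small, hedged example of the monitoring-and-alerting item above (the server URL is hypothetical), this Python snippet queries a Prometheus server's HTTP API for scrape targets reporting down:

import requests

PROM_URL = "http://prometheus.example.internal:9090"  # hypothetical endpoint

# `up == 0` is a standard PromQL expression matching targets that failed
# their last scrape; /api/v1/query is Prometheus's instant-query endpoint.
resp = requests.get(f"{PROM_URL}/api/v1/query",
                    params={"query": "up == 0"}, timeout=10)
resp.raise_for_status()
for result in resp.json()["data"]["result"]:
    labels = result["metric"]
    print(f"DOWN: {labels.get('job')} / {labels.get('instance')}")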
➕ Good to Have
- AWS Certification (Solutions Architect / DevOps Engineer)
- Experience with Microservices architecture
- Knowledge of DevSecOps practices
- Experience in Agile methodology
Description
SRE Engineer
Role Overview
As a Site Reliability Engineer, you will play a critical role in ensuring the availability and performance of our customer-facing platform. You will work closely with DevOps, DBA, and Development teams to provision and maintain infrastructure, deploy and monitor our applications, and automate workflows. Your contributions will have a direct impact on customer satisfaction and overall experience.
Responsibilities and Deliverables
• Manage, monitor, and maintain highly available systems (Windows and Linux)
• Analyze metrics and trends to ensure rapid scalability.
• Address routine service requests while identifying ways to automate and simplify.
• Create infrastructure as code using Terraform, ARM Templates, and CloudFormation.
• Maintain data backups and disaster recovery plans.
• Design and deploy CI/CD pipelines using GitHub Actions, Octopus, Ansible, Jenkins, Azure DevOps.
• Adhere to security best practices through all stages of the software development lifecycle
• Follow and champion ITIL best practices and standards.
• Become a resource for emerging and existing cloud technologies with a focus on AWS.
Organizational Alignment
• Reports to the Senior SRE Manager
• This role involves close collaboration with DevOps, DBA, and security teams.
Technical Proficiencies
• Hands-on experience with AWS is a must-have.
• Proficiency analyzing application, IIS, system, and security logs, as well as CloudTrail events
• Practical experience with CI/CD tools such as GitHub Actions, Jenkins, Octopus
• Experience with observability tools such as New Relic, Application Insights, AppDynamics, or DataDog.
• Experience maintaining and administering Windows, Linux, and Kubernetes.
• Experience in automation using scripting languages such as Bash, PowerShell, or Python.
• Configuration management experience using Ansible, Terraform, Azure Automation Runbooks, or similar.
• Experience with SQL Server database maintenance and administration is preferred.
• Good Understanding of networking (VNET, subnet, private link, VNET peering).
• Familiarity with cloud concepts including certificates, OAuth, Azure AD, ASE, ASP, AKS, Azure Apps, Load Balancers, Application Gateway, Firewall, API Management, SQL Server, and databases on Azure
Experience
• 7+ years of experience in SRE or System Administration role
• Demonstrated ability to build and support high-availability Windows/Linux servers, with emphasis on the WISA stack (Windows/IIS/SQL Server/ASP.NET)
• 3+ years of experience working with cloud technologies including AWS, Azure.
• 1+ years of experience working with container technology including Docker and Kubernetes.
• Comfortable using Scrum, Kanban, or Lean methodologies.
Education
• Bachelor’s Degree or College Diploma in Computer Science, Information Systems, or equivalent experience.
Additional Job Details:
• Working hours: 2:00 PM / 3:00 PM to 11:30 PM IST
• Interview process: 3 technical rounds
• Work model: 3 days per week in office

Consumer Internet, Technology & Travel and Tourism Platform
Job Details
- Job Title: Lead DevOps Engineer
- Industry: Consumer Internet, Technology & Travel and Tourism Platform
- Function - IT
- Experience Required: 7-10 years
- Employment Type: Full Time
- Job Location: Bengaluru
- CTC Range: Best in Industry
Criteria:
- Strong Lead DevOps / Infrastructure Engineer Profiles.
- Must have 7+ years of hands-on experience working as a DevOps / Infrastructure Engineer.
- Candidate’s current title must be Lead DevOps Engineer (or equivalent Lead role) in the current organization
- Must have minimum 2+ years of team management / technical leadership experience, including mentoring engineers, driving infrastructure decisions, or leading DevOps initiatives.
- Must have strong hands-on experience with Kubernetes (container orchestration) including deployment, scaling, and cluster management.
- Must have experience with Infrastructure as Code (IaC) tools such as Terraform, Ansible, Chef, or Puppet.
- Must have strong scripting and automation experience using Python, Go, Bash, or similar scripting languages.
- Must have working experience with distributed databases or data systems such as MongoDB, Redis, Cassandra, Elasticsearch, or Kafka.
- Must have strong hands-on experience in Observability & Monitoring, CI/CD architecture, and Networking concepts in production environments.
- (Company) – Must be from B2C Product Companies only.
- (Education) – B.E/ B.Tech
Preferred
- Experience working in microservices architecture and event-driven systems.
- Exposure to cloud infrastructure, scalability, reliability, and cost optimization practices.
- (Skills) – Understanding of programming languages such as Go, Python, or Java.
- (Environment) – Experience working in high-growth startup or large-scale production environments.
Job Description
As a DevOps Engineer, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.
Job Responsibilities:
- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
- Codify our infrastructure
- Do what it takes to keep the uptime above 99.99%
- Understand the bigger picture and sail through the ambiguities
- Scale technology considering cost and observability and manage end-to-end processes
- Understand DevOps philosophy and evangelize the principles across the organization
- Strong communication and collaboration skills to break down the silos

Consumer Internet, Technology & Travel and Tourism Platform
Job Details
- Job Title: Senior DevOps Engineer
- Industry: Consumer Internet, Technology & Travel and Tourism Platform
- Function - IT
- Experience Required: 4-7 years
- Employment Type: Full Time
- Job Location: Bengaluru
- CTC Range: Best in Industry
Criteria:
- Strong DevOps / Infrastructure Engineer Profiles.
- Must have 4+ years of hands-on experience working as a DevOps Engineer / Infrastructure Engineer / SRE / DevOps Consultant.
- Must have hands-on experience with Kubernetes and Docker, including deployment, scaling, or containerized application management.
- Must have experience with Infrastructure as Code (IaC) or configuration management tools such as Terraform, Ansible, Chef, or Puppet.
- Must have strong automation and scripting experience using Python, Go, Bash, Shell, or similar scripting languages.
- Must have working experience with distributed databases or data systems such as MongoDB, Redis, Cassandra, Elasticsearch, or Kafka.
- Candidate must demonstrate strong expertise in at least one of the following areas - Databases / Distributed Data Systems, Observability & Monitoring, CI/CD Pipelines, Networking Concepts, Kubernetes / Container Platforms
- Candidates must be from B2C Product-based companies only.
- (Education) – BE / B.Tech or equivalent
Preferred
- Experience working with microservices or event-driven architectures.
- Exposure to cloud infrastructure, monitoring, reliability, and scalability practices.
- (Skills) – Understanding of programming languages such as Go, Python, or Java.
- Preferred (Environment) – Experience working in high-scale production or fast-growing product startups.
Job Description
As a DevOps Engineer, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.
Job Responsibilities:
- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
- Codify our infrastructure
- Do what it takes to keep the uptime above 99.99%
- Understand the bigger picture and sail through the ambiguities
- Scale technology considering cost and observability and manage end-to-end processes
- Understand DevOps philosophy and evangelize the principles across the organization
- Strong communication and collaboration skills to break down the silos
About the role:
We are looking for a skilled and driven Security Engineer to join our growing security team. This role requires a hands-on professional who can evaluate and strengthen the security posture of our applications and infrastructure across Web, Android, iOS, APIs, and cloud-native environments.
The ideal candidate will also lead technical triage from our bug bounty program, integrate security into the DevOps lifecycle, and contribute to building a security-first engineering culture.
Required Skills & Experience:
● 3 to 6 years of solid hands-on experience in the VAPT domain
● Solid understanding of Web, Android, and iOS application security
● Experience with DevSecOps tools and integrating security into CI/CD
● Strong knowledge of cloud platforms (AWS/GCP/Azure) and their security models
● Familiarity with bug bounty programs and responsible disclosure practices
● Familiarity with tools like Burp Suite, MobSF, OWASP ZAP, Terraform, Checkov, etc.
● Good knowledge of API security
● Scripting experience (Python, Bash, or similar) for automation tasks
Preferred Qualifications:
● OSCP, CEH, AWS Security Specialty, or similar certifications
● Experience working in a regulated environment (e.g., FinTech, InsurTech)
Responsibilities:
● Perform security reviews, Vulnerability Assessments & Penetration Testing for Web, Android, iOS, and API endpoints
● Perform Threat Modelling, anticipate potential attack vectors, and improve security architecture on complex or cross-functional components
● Identify and remediate OWASP Top 10 and mobile-specific vulnerabilities
● Conduct secure code reviews and red team assessments
● Integrate SAST, DAST, SCA, and secret scanning tools into CI/CD pipelines
● Automate security checks using tools like SonarQube, Snyk, Trivy, etc.
● Maintain and manage vulnerability scanning infrastructure
● Perform security assessments of AWS, Azure, and GCP environments, with an emphasis on container security, particularly for Docker and Kubernetes.
● Implement guardrails for IAM, network segmentation, encryption, and cloud monitoring
● Contribute to infrastructure hardening for containers, Kubernetes, and virtual machines
● Triage bug bounty reports and coordinate remediation with engineering teams
● Act as the primary responder for external security disclosures
● Maintain documentation and metrics related to bug bounty and penetration testing activities
● Collaborate with developers and architects to ensure secure design decisions
● Lead security design reviews for new features and products
● Provide actionable risk assessments and mitigation plans to stakeholders

Global Digital Transformation Solutions Provider
Job Details
- Job Title: Lead I - Data Engineering (Python, AWS Glue, Pyspark, Terraform)
- Industry: Global digital transformation solutions provider
- Domain - Information technology (IT)
- Experience Required: 5-7 years
- Employment Type: Full Time
- Job Location: Hyderabad
- CTC Range: Best in Industry
Job Description
Data Engineer with AWS, Python, Glue, Terraform, Step Functions, and Spark
Skills: Python, AWS Glue, Pyspark, Terraform - All are mandatory
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Hyderabad

Global Digital Transformation Solutions Provider
JOB DETAILS:
* Job Title: Lead I - Azure, Terraform, GitLab CI
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 3-5 years
* Location: Trivandrum/Pune
Job Description
Job Title: DevOps Engineer
Experience: 4–8 Years
Location: Trivandrum & Pune
Job Type: Full-Time
Mandatory skills: Azure, Terraform, GitLab CI, Splunk
Job Description
We are looking for an experienced and driven DevOps Engineer with 4 to 8 years of experience to join our team in Trivandrum or Pune. The ideal candidate will take ownership of automating cloud infrastructure, maintaining CI/CD pipelines, and implementing monitoring solutions to support scalable and reliable software delivery in a cloud-first environment.
Key Responsibilities
- Design, manage, and automate Azure cloud infrastructure using Terraform.
- Develop scalable, reusable, and version-controlled Infrastructure as Code (IaC) modules.
- Implement monitoring and logging solutions using Splunk, Azure Monitor, and Dynatrace.
- Build and maintain secure and efficient CI/CD pipelines using GitLab CI or Harness.
- Collaborate with cross-functional teams to enable smooth deployment workflows and infrastructure updates.
- Analyze system logs and performance metrics to troubleshoot and optimize performance.
- Ensure infrastructure security, compliance, and scalability best practices are followed.
Mandatory Skills
Candidates must have hands-on experience with the following technologies:
- Azure – Cloud infrastructure management and deployment
- Terraform – Infrastructure as Code for scalable provisioning
- GitLab CI – Pipeline development, automation, and integration
- Splunk – Monitoring, logging, and troubleshooting production systems
Preferred Skills
- Experience with Harness (for CI/CD)
- Familiarity with Azure Monitor and Dynatrace
- Scripting proficiency in Python, Bash, or PowerShell
- Understanding of DevOps best practices, containerization, and microservices architecture
- Exposure to Agile and collaborative development environments
Skills Summary
Azure, Terraform, GitLab CI, Splunk (Mandatory) Additional: Harness, Azure Monitor, Dynatrace, Python, Bash, PowerShell
Skills: Azure, Splunk, Terraform, GitLab CI
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Trivandrum/Pune
- 3+ years hands-on Azure cloud & automation experience.
- Experience managing high-availability enterprise systems.
- Microsoft Azure (AKS, VNets, App Gateway, Load Balancers).
- Kubernetes (AKS) & Docker.
- Networking (VPN, DNS, routing, firewalls, NSGs).
- Infra-as-Code (Terraform / Bicep optional).
- Monitoring tools: Azure Monitor, Grafana, Prometheus.
- CI/CD: Azure DevOps, GitLab/Jenkins (added advantage).
- Security: Key Vault, certificates, encryption, RBAC.
- Understanding of PostgreSQL/PostGIS networking.
- Design and manage Azure infrastructure (VMs, VNets, NSGs, Load Balancers, AKS, Storage).
- Deploy and maintain AKS workloads for NiFi, PostGIS, and microservices.
- Architect secure network topology including VNet peering, VPNs, Private Endpoints, DNS & Zero Trust policies.
- Implement monitoring and alerting using Azure Monitor, Log Analytics, Grafana & Prometheus.
- Ensure high uptime, DR planning, backup and failover strategies.
- Automate deployments with Azure DevOps, Helm, ArgoCD & GitOps principles.
- Enforce security, RBAC, compliance, and audit standards across environments.
- Good to have: knowledge/experience in Linux administration (Ubuntu/Debian).
Job Location: Bangalore/Mumbai
Exp: 3-10+ Yrs
Job Title: DevOps Engineer
About TradeLab:
TradeLab is a leading fintech technology provider, delivering cutting-edge solutions to brokers, banks, and fintech platforms. Our portfolio includes high-performance Order & Risk Management Systems (ORMS), seamless MetaTrader integrations, AI-driven customer engagement platforms such as PULSE LLaVA, and compliance-grade risk management solutions.
Key Responsibilities
- DevOps Strategy & Leadership
- Contribute to defining and executing the DevOps strategy for high-frequency trading and fintech platforms. Mentor junior engineers and collaborate with cross-functional teams to foster a culture of automation, scalability, and performance.
- Work closely with engineering and product teams to align infrastructure initiatives with business objectives.
- CI/CD & Infrastructure Automation: Design and optimize CI/CD pipelines for ultra-low-latency trading systems.
- Implement Infrastructure as Code (IaC) practices using Terraform, Helm, Kubernetes, and automation frameworks.
- Establish best practices for release management and deployment in mission-critical environments.
- Cloud & On-Prem Infrastructure Management: Manage hybrid infrastructure across AWS, GCP, and on-prem data centers, ensuring high availability and fault tolerance.
- Implement networking strategies for low-latency trading, including routing and performance tuning.
- Drive cost optimization and scalability initiatives across multi-cloud environments.
- Performance Monitoring & Optimization: Set up and maintain system performance monitoring using Prometheus, Grafana, and the ELK stack. Implement alerting and automated remediation strategies for zero-downtime operations.
- Conduct root-cause analysis and performance tuning for systems handling millions of transactions per second.
- Security & Compliance: Apply DevSecOps principles across all environments.
- Ensure compliance with financial regulations (SEBI and global standards) and maintain audit trails.
- Drive security automation, vulnerability management, and IAM policies.
Required Skills & Qualifications
- 3–8 years of experience in DevOps, with exposure to leadership or team mentoring.
- Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD). Hands-on experience with cloud platforms (AWS, GCP) and hybrid infrastructure.
- Proficiency in Kubernetes, Docker, and container orchestration. Solid experience with Terraform, Helm, and IaC principles.
- Strong Linux administration and networking fundamentals (TCP/IP, DNS, firewalls). Experience with monitoring tools (Prometheus, Grafana, ELK).
- Proficiency in scripting languages (Python, Bash, Go) for automation. Understanding of security best practices, IAM, and compliance frameworks.
Good to Have
- Exposure to ultra-low-latency trading infrastructure or high-frequency trading systems.
- Knowledge of FIX protocol, FPGA acceleration, or network optimization techniques.
- Familiarity with Redis, Nginx, or other real-time data handling technologies.
- Experience in advanced performance tuning for microsecond-level execution.
Why Join Us?
Work with a team that expects and delivers excellence. A culture where innovation and speed are rewarded. Limitless opportunities for growth—if you can handle the pace. Build systems that move markets and redefine fintech.
10–14 years of experience in software engineering, with strong emphasis on backend and data architecture for large-scale systems.
Proven experience designing and deploying distributed, event-driven systems and streaming data pipelines.
Expert proficiency in Go/Python, including experience with microservices, APIs, and concurrency models.
Deep understanding of data flows across multi-sensor and multi-modal sources, including ingestion, transformation, and synchronization.
Experience building real-time APIs for interactive web applications and data-heavy workflows.
Familiarity with frontend ecosystems (React, TypeScript) and rendering frameworks leveraging WebGL/WebGPU.
Hands-on experience with CI/CD, Kubernetes, Docker, and Infrastructure as Code (Terraform, Helm).
JOB DETAILS:
* Job Title: DevOps Engineer (Azure)
* Industry: Technology
* Salary: Best in Industry
* Experience: 2-5 years
* Location: Bengaluru, Koramangala
Review Criteria
- Strong Azure DevOps Engineer Profiles.
- Must have minimum 2+ years of hands-on experience as an Azure DevOps Engineer with strong exposure to Azure DevOps Services (Repos, Pipelines, Boards, Artifacts).
- Must have strong experience in designing and maintaining YAML-based CI/CD pipelines, including end-to-end automation of build, test, and deployment workflows.
- Must have hands-on scripting and automation experience using Bash, Python, and/or PowerShell
- Must have working knowledge of databases such as Microsoft SQL Server, PostgreSQL, or Oracle Database
- Must have experience with monitoring, alerting, and incident management using tools like Grafana, Prometheus, Datadog, or CloudWatch, including troubleshooting and root cause analysis
Preferred
- Knowledge of containerisation and orchestration tools such as Docker and Kubernetes.
- Knowledge of Infrastructure as Code and configuration management tools such as Terraform and Ansible.
- Preferred (Education) – BE/BTech / ME/MTech in Computer Science or related discipline
Role & Responsibilities
- Build and maintain Azure DevOps YAML-based CI/CD pipelines for build, test, and deployments.
- Manage Azure DevOps Repos, Pipelines, Boards, and Artifacts.
- Implement Git branching strategies and automate release workflows.
- Develop scripts using Bash, Python, or PowerShell for DevOps automation.
- Monitor systems using Grafana, Prometheus, Datadog, or CloudWatch and handle incidents.
- Collaborate with dev and QA teams in an Agile/Scrum environment.
- Maintain documentation, runbooks, and participate in root cause analysis.
Ideal Candidate
- 2–5 years of experience as an Azure DevOps Engineer.
- Strong hands-on experience with Azure DevOps CI/CD (YAML) and Git.
- Experience with Microsoft Azure (OCI/AWS exposure is a plus).
- Working knowledge of SQL Server, PostgreSQL, or Oracle.
- Good scripting, troubleshooting, and communication skills.
- Bonus: Docker, Kubernetes, Terraform, Ansible experience.
- Comfortable with WFO (Koramangala, Bangalore).
About Us:
MyOperator is a Business AI Operator and a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform.
Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino’s, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI (see the sketch after this list)
- Design and maintain a scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
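For illustration only, a minimal FastAPI service skeleton of the kind these responsibilities describe might look like the sketch below; the endpoint names and model are invented, not MyOperator's actual API:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="campaign-service")  # hypothetical microservice

class Campaign(BaseModel):
    name: str
    channel: str  # e.g. "whatsapp" or "voice"

_campaigns: list[Campaign] = []  # in-memory store; a real service would use a DB

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.post("/campaigns", status_code=201)
def create_campaign(campaign: Campaign) -> Campaign:
    _campaigns.append(campaign)
    return campaign

# Run locally with: uvicorn main:app --reload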
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT & Gemini)
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one

Global digital transformation solutions provider
JOB DETAILS:
* Job Title: Lead I - Software Engineering-Kotlin, Java, Spring Boot, Aws
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 5 -7 years
* Location: Trivandrum, Thiruvananthapuram
Role Proficiency:
Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions; account for others' developmental activities.
Skill Examples:
- Explain and communicate the design / development to the customer
- Perform and evaluate test results against product specifications
- Break down complex problems into logical components
- Develop user interfaces business software components
- Use data models
- Estimate time and effort required for developing / debugging features / components
- Perform and evaluate test in the customer or target environment
- Make quick decisions on technical/project related challenges
- Manage a Team mentor and handle people related issues in team
- Maintain high motivation levels and positive dynamics in the team.
- Interface with other teams’ designers and other parallel practices
- Set goals for self and team. Provide feedback to team members
- Create and articulate impactful technical presentations
- Follow high level of business etiquette in emails and other business communication
- Drive conference calls with customers addressing customer questions
- Proactively ask for and offer help
- Ability to work under pressure; determine dependencies and risks; facilitate planning; handle multiple tasks.
- Build confidence with customers by meeting the deliverables on time with quality.
- Estimate time, effort, and resources required for developing / debugging features / components
- Make appropriate utilization of software/hardware.
- Strong analytical and problem-solving abilities
Knowledge Examples:
- Appropriate software programs / modules
- Functional and technical designing
- Programming languages – proficient in multiple skill clusters
- DBMS
- Operating Systems and software platforms
- Software Development Life Cycle
- Agile – Scrum or Kanban Methods
- Integrated development environment (IDE)
- Rapid application development (RAD)
- Modelling technology and languages
- Interface definition languages (IDL)
- Knowledge of customer domain and deep understanding of sub domain where problem is solved
Additional Comments:
We are seeking an experienced Senior Backend Engineer with strong expertise in Kotlin and Java to join our dynamic engineering team.
The ideal candidate will have a deep understanding of backend frameworks, cloud technologies, and scalable microservices architectures, with a passion for clean code, resilience, and system observability.
You will play a critical role in designing, developing, and maintaining core backend services that power our high-availability e-commerce and promotion platforms.
Key Responsibilities
Design, develop, and maintain backend services using Kotlin (JVM, Coroutines, Serialization) and Java.
Build robust microservices with Spring Boot and related Spring ecosystem components (Spring Cloud, Spring Security, Spring Kafka, Spring Data).
Implement efficient serialization/deserialization using Jackson and Kotlin Serialization. Develop, maintain, and execute automated tests using JUnit 5, Mockk, and ArchUnit to ensure code quality.
Work with Kafka Streams (Avro), Oracle SQL (JDBC, JPA), DynamoDB, and Redis for data storage and caching needs. Deploy and manage services in an AWS environment leveraging DynamoDB, Lambdas, and IAM.
Implement CI/CD pipelines with GitLab CI to automate build, test, and deployment processes.
Containerize applications using Docker and integrate monitoring using Datadog for tracing, metrics, and dashboards.
Define and maintain infrastructure as code using Terraform for services including GitLab, Datadog, Kafka, and Optimizely.
Develop and maintain RESTful APIs with OpenAPI (Swagger) and JSON API standards.
Apply resilience patterns using Resilience4j to build fault-tolerant systems.
Adhere to architectural and design principles such as Domain-Driven Design (DDD), Object-Oriented Programming (OOP), and Contract Testing (Pact).
Collaborate with cross-functional teams in an Agile Scrum environment to deliver high-quality features.
Utilize feature flagging tools like Optimizely to enable controlled rollouts.
Mandatory Skills & Technologies
Languages: Kotlin (JVM, Coroutines, Serialization), Java
Frameworks: Spring Boot (Spring Cloud, Spring Security, Spring Kafka, Spring Data)
Serialization: Jackson, Kotlin Serialization
Testing: JUnit 5, Mockk, ArchUnit
Data: Kafka Streams (Avro), Oracle SQL (JDBC, JPA), DynamoDB (NoSQL), Redis (Caching)
Cloud: AWS (DynamoDB, Lambda, IAM)
CI/CD: GitLab CI
Containers: Docker
Monitoring & Observability: Datadog (Tracing, Metrics, Dashboards, Monitors)
Infrastructure as Code: Terraform (GitLab, Datadog, Kafka, Optimizely)
API: OpenAPI (Swagger), REST API, JSON API
Resilience: Resilience4j
Architecture & Practices: Domain-Driven Design (DDD), Object-Oriented Programming (OOP), Contract Testing (Pact), Feature Flags (Optimizely)
Platforms: E-Commerce Platform (CommerceTools), Promotion Engine (Talon.One)
Methodologies: Scrum, Agile
Skills: Kotlin, Java, Spring Boot, AWS
Must-Haves
Kotlin (JVM, Coroutines, Serialization), Java, Spring Boot (Spring Cloud, Spring Security, Spring Kafka, Spring Data), AWS (DynamoDB, Lambda, IAM), Microservices Architecture
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Trivandrum
Virtual Weekend Interview on 7th Feb 2026 - Saturday
We are hiring a Senior DevOps Engineer (5–10 years experience) with strong hands-on expertise in AWS, CI/CD, Docker, Kubernetes, and Linux. The role involves designing, automating, and managing scalable cloud infrastructure and deployment pipelines. Experience with Terraform/Ansible, monitoring tools, and security best practices is required. Immediate joiners preferred.
Role: DevOps Engineer
Experience: 7+ Years
Location: Pune / Trivandrum
Work Mode: Hybrid
Key Responsibilities:
- Drive CI/CD pipelines for microservices and cloud architectures
- Design and operate cloud-native platforms (AWS/Azure)
- Manage Kubernetes/OpenShift clusters and containerized applications
- Develop automated pipelines and infrastructure scripts
- Collaborate with cross-functional teams on DevOps best practices
- Mentor development teams on continuous delivery and reliability
- Handle incident management, troubleshooting, and root cause analysis
Mandatory Skills:
- 7+ years in DevOps/SRE roles
- Strong experience with AWS or Azure
- Hands-on with Docker, Kubernetes, and/or OpenShift
- Proficiency in Jenkins, Git, Maven, JIRA
- Strong scripting skills (Shell, Python, Perl, Ruby, JavaScript)
- Solid networking knowledge and troubleshooting skills
- Excellent communication and collaboration abilities
Preferred Skills:
- Experience with Helm, monitoring tools (Splunk, Grafana, New Relic, Datadog)
- Knowledge of Microservices and SOA architectures
- Familiarity with database technologies

US-based large biotech company with worldwide operations.
Senior Cloud Engineer Job Description
Position Title: Senior Cloud Engineer – AWS [LONG-TERM CONTRACT POSITION]
Location: Remote [REQUIRES WORKING IN CST TIME ZONE]
Position Overview
The Senior Cloud Engineer will play a critical role in designing, deploying, and managing scalable, secure, and highly available cloud infrastructure across multiple platforms (AWS, Azure, Google Cloud). This role requires deep technical expertise, leadership in cloud strategy, and hands-on experience with automation, DevOps practices, and cloud-native technologies. The ideal candidate will work collaboratively with cross-functional teams to deliver robust cloud solutions, drive best practices, and support business objectives through innovative cloud engineering.
Key Responsibilities
Design, implement, and maintain cloud infrastructure and services, ensuring high availability, performance, and security across multi-cloud environments (AWS, Azure, GCP)
Develop and manage Infrastructure as Code (IaC) using tools such as Terraform, CloudFormation, and Ansible for automated provisioning and configuration
Lead the adoption and optimization of DevOps methodologies, including CI/CD pipelines, automated testing, and deployment processes
Collaborate with software engineers, architects, and stakeholders to architect cloud-native solutions that meet business and technical requirements
Monitor, troubleshoot, and optimize cloud systems for cost, performance, and reliability, using cloud monitoring and logging tools
Ensure cloud environments adhere to security best practices, compliance standards, and governance policies, including identity and access management, encryption, and vulnerability management
Mentor and guide junior engineers, sharing knowledge and fostering a culture of continuous improvement and innovation
Participate in on-call rotation and provide escalation support for critical cloud infrastructure issues
Document cloud architectures, processes, and procedures to ensure knowledge transfer and operational excellence
Stay current with emerging cloud technologies, trends, and best practices.
Required Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field, or equivalent work experience
- 6–10 years of experience in cloud engineering or related roles, with a proven track record in large-scale cloud environments
- Deep expertise in at least one major cloud platform (AWS, Azure, Google Cloud) and experience in multi-cloud environments
- Strong programming and scripting skills (Python, Bash, PowerShell, etc.) for automation and cloud service integration
- Proficiency with DevOps tools and practices, including CI/CD (Jenkins, GitLab CI), containerization (Docker, Kubernetes), and configuration management (Ansible, Chef)
- Solid understanding of networking concepts (VPC, VPN, DNS, firewalls, load balancers), system administration (Linux/Windows), and cloud storage solutions
- Experience with cloud security, governance, and compliance frameworks
- Excellent analytical, troubleshooting, and root cause analysis skills
- Strong communication and collaboration abilities, with experience working in agile, interdisciplinary teams
- Ability to work independently, manage multiple priorities, and lead complex projects to completion
Preferred Qualifications
- Relevant cloud certifications (e.g., AWS Certified Solutions Architect, AWS DevOps Engineer, Microsoft AZ-300/400/500, Google Professional Cloud Architect)
- Experience with cloud cost optimization and FinOps practices
- Familiarity with monitoring/logging tools (CloudWatch, Kibana, Logstash, Datadog, etc.)
- Exposure to cloud database technologies (SQL, NoSQL, managed database services)
- Knowledge of cloud migration strategies and hybrid cloud architectures
REVIEW CRITERIA:
MANDATORY:
- Strong Hands-On AWS Cloud Engineering / DevOps Profile
- Mandatory (Experience 1): Must have 12+ years of experience in AWS Cloud Engineering / Cloud Operations / Application Support
- Mandatory (Experience 2): Must have strong hands-on experience supporting AWS production environments (EC2, VPC, IAM, S3, ALB, CloudWatch)
- Mandatory (Infrastructure as Code): Must have hands-on Infrastructure as Code experience using Terraform in production environments
- Mandatory (AWS Networking): Strong understanding of AWS networking and connectivity (VPC design, routing, NAT, load balancers, hybrid connectivity basics)
- Mandatory (Cost Optimization): Exposure to cost optimization and usage tracking in AWS environments
- Mandatory (Core Skills): Experience handling monitoring, alerts, incident management, and root cause analysis
- Mandatory (Soft Skills): Strong communication skills and stakeholder coordination skills
ROLE & RESPONSIBILITIES:
We are looking for a hands-on AWS Cloud Engineer to support day-to-day cloud operations, automation, and reliability of AWS environments. This role works closely with the Cloud Operations Lead, DevOps, Security, and Application teams to ensure stable, secure, and cost-effective cloud platforms.
KEY RESPONSIBILITIES:
- Operate and support AWS production environments across multiple accounts
- Manage infrastructure using Terraform and support CI/CD pipelines
- Support Amazon EKS clusters, upgrades, scaling, and troubleshooting
- Build and manage Docker images and push to Amazon ECR
- Monitor systems using CloudWatch and third-party tools; respond to incidents
- Support AWS networking (VPCs, NAT, Transit Gateway, VPN/DX)
- Assist with cost optimization, tagging, and governance standards (see the Cost Explorer sketch after this list)
- Automate operational tasks using Python, Lambda, and Systems Manager
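As a hedged sketch of the cost-tracking responsibility above (the date range is illustrative, and it assumes boto3 credentials with ce:GetCostAndUsage permission), per-service spend can be pulled from Cost Explorer like this:

import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer lives in us-east-1
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-01-01", "End": "2025-02-01"},  # example month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)
for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount > 0:
        print(f"{service}: ${amount:,.2f}")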
IDEAL CANDIDATE:
- Strong hands-on AWS experience (EC2, VPC, IAM, S3, ALB, CloudWatch)
- Experience with Terraform and Git-based workflows
- Hands-on experience with Kubernetes / EKS
- Experience with CI/CD tools (GitHub Actions, Jenkins, etc.)
- Scripting experience in Python or Bash
- Understanding of monitoring, incident management, and cloud security basics
NICE TO HAVE:
- AWS Associate-level certifications
- Experience with Karpenter, Prometheus, New Relic
- Exposure to FinOps and cost optimization practices
Profile: DevOps Lead
Location: Gurugram
Experience: 8+ Years
Notice Period: Immediate to 1 week
Company: Watsoo
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 5+ years of proven hands-on DevOps experience.
- Strong experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
- Expertise in containerization & orchestration (Docker, Kubernetes, Helm).
- Hands-on experience with cloud platforms (AWS, Azure, or GCP).
- Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
- Experience with monitoring and logging solutions (Prometheus, Grafana, ELK, CloudWatch, etc.).
- Proficiency in scripting languages (Python, Bash, or Shell).
- Knowledge of networking, security, and system administration.
- Strong problem-solving skills and ability to work in fast-paced environments.
- Troubleshoot production issues, perform root cause analysis, and implement preventive measures.
- Advocate DevOps best practices, automation, and continuous improvement.
JOB DETAILS:
- Job Title: Senior Devops Engineer 2
- Industry: Ride-hailing
- Experience: 5-7 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidate must be from a product-based or scalable app-based startup, with experience handling large-scale production traffic.
2. Minimum 5 yrs of experience working as a DevOps/Infrastructure Consultant
3. Own end-to-end infrastructure right from non-prod to prod environments, including self-managed DBs
4. Candidate must have experience in database migration from scratch
5. Must have a firm hold on the container orchestration tool Kubernetes
6. Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet
7. Understanding programming languages like GO/Python, and Java
8. Working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.
9. Working experience on Cloud platform - AWS
10. Candidate should have a minimum of 1.5 years' stability per organization, and a clear reason for relocation.
Description
Job Summary:
As a DevOps Engineer at company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.
Job Responsibilities:
● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
● Codify our infrastructure
● Do what it takes to keep the uptime above 99.99%
● Understand the bigger picture and navigate ambiguity
● Scale technology considering cost and observability and manage end-to-end processes
● Understand DevOps philosophy and evangelize the principles across the organization
● Communicate and collaborate effectively to break down silos
Job Requirements:
● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience
● Minimum 5 years of experience working as a DevOps/Infrastructure Consultant
● Must have a firm hold on the container orchestration tool Kubernetes (a short health-check sketch follows this list)
● Must have expertise in configuration management tools like Ansible, Terraform, and Chef/Puppet
● Strong problem-solving skills and the ability to write scripts in any scripting language
● Understanding of programming languages like Go/Python and Java
● Comfortable working with databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka
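As a small illustration of the Kubernetes fluency these requirements describe, the sketch below, which assumes the official kubernetes Python client and a reachable kubeconfig, lists pods that are not in a Running or Succeeded phase:

    # Illustrative sketch: flag pods that are not Running or Succeeded
    # (pip install kubernetes); assumes a kubeconfig is available.
    from kubernetes import client, config

    def unhealthy_pods():
        config.load_kube_config()  # use load_incluster_config() inside a pod
        v1 = client.CoreV1Api()
        for pod in v1.list_pod_for_all_namespaces(watch=False).items:
            phase = pod.status.phase
            if phase not in ("Running", "Succeeded"):
                print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")

    if __name__ == "__main__":
        unhealthy_pods()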
What’s there for you?
The company's team handles everything: infra, tooling, and a fleet of self-managed databases. For example:
● 150+ microservices with event-driven architecture across different tech stacks (Golang/Java/Node)
● More than 100,000 requests per second on our edge gateways
● ~20,000 events per second on self-managed Kafka
● 100s of TB of data on self-managed databases
● 100s of real-time continuous deployments to production
● Self-managed infra supporting 100% OSS
JOB DETAILS:
- Job Title: Lead DevOps Engineer
- Industry: Ride-hailing
- Experience: 6-9 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations; Kubernetes & Container Orchestration; Monitoring, Reliability & Observability; Proficiency with Terraform, Ansible, etc.; Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidate must be from a product-based or scalable app-based startup with experience handling large-scale production traffic.
2. Minimum 6 years of experience working as a DevOps/Infrastructure Consultant.
3. Candidate must have 2 years of experience as a lead (handling a team of at least 3 to 4 members).
4. Own end-to-end infrastructure right from non-prod to prod environments, including self-managed DBs.
5. Candidate must have first-hand experience in database migration from scratch (a toy migration sketch follows this list).
6. Must have a firm hold on the container orchestration tool Kubernetes.
7. Should have expertise in configuration management tools like Ansible, Terraform, and Chef/Puppet.
8. Understanding of programming languages like Go/Python and Java.
9. Working knowledge of databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.
10. Working experience on a cloud platform: AWS.
11. Candidate should have a minimum of 1.5 years' stability per organization and a clear reason for relocation.
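To make the database-migration criterion concrete, here is a deliberately simplified sketch, assuming pymongo and placeholder cluster URIs and names; a real migration would also handle indexes, verification, and cutover:

    # Illustrative sketch: batch-copy one MongoDB collection between clusters
    # (pip install pymongo); URIs, database, and collection are placeholders.
    from pymongo import MongoClient

    def copy_collection(src_uri, dst_uri, db, coll, batch_size=1000):
        src = MongoClient(src_uri)[db][coll]
        dst = MongoClient(dst_uri)[db][coll]
        batch = []
        for doc in src.find({}):
            batch.append(doc)
            if len(batch) >= batch_size:
                dst.insert_many(batch)
                batch.clear()
        if batch:
            dst.insert_many(batch)  # flush the final partial batch

    if __name__ == "__main__":
        copy_collection("mongodb://old-cluster:27017",
                        "mongodb://new-cluster:27017",
                        "appdb", "users")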
Description
Job Summary:
As a DevOps Engineer at the company, you will build and operate infrastructure at scale, design and implement a variety of tools that enable product teams to build and deploy their services independently, improve observability across the board, and design for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is the role for you.
Job Responsibilities:
● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
● Codify our infrastructure
● Do what it takes to keep the uptime above 99.99%
● Understand the bigger picture and navigate ambiguity
● Scale technology considering cost and observability and manage end-to-end processes
● Understand DevOps philosophy and evangelize the principles across the organization
● Communicate and collaborate effectively to break down silos
Job Requirements:
● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience
● Minimum 6 years of experience working as a DevOps/Infrastructure Consultant
● Must have a firm hold on the container orchestration tool Kubernetes
● Must have expertise in configuration management tools like Ansible, Terraform, and Chef/Puppet
● Strong problem-solving skills and the ability to write scripts in any scripting language
● Understanding of programming languages like Go/Python and Java
● Comfortable working with databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka
What’s there for you?
The company's team handles everything: infra, tooling, and a fleet of self-managed databases. For example:
● 150+ microservices with event-driven architecture across different tech stacks (Golang/Java/Node)
● More than 100,000 requests per second on our edge gateways
● ~20,000 events per second on self-managed Kafka
● 100s of TB of data on self-managed databases
● 100s of real-time continuous deployments to production
● Self-managed infra supporting 100% OSS
JOB DETAILS:
- Job Title: Senior DevOps Engineer 1
- Industry: Ride-hailing
- Experience: 4-6 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations; Kubernetes & Container Orchestration; Monitoring, Reliability & Observability; Proficiency with Terraform, Ansible, etc.; Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidate must be from a product-based or scalable app-based startup with experience handling large-scale production traffic.
2. Candidate must have strong Linux expertise with hands-on production troubleshooting and working knowledge of databases and middleware (Mongo, Redis, Cassandra, Elasticsearch, Kafka).
3. Candidate must have solid experience with Kubernetes.
4. Candidate should have strong knowledge of configuration management tools like Ansible, Terraform, and Chef/Puppet; Prometheus, Grafana, and similar tools are an added advantage.
5. Candidate must be an individual contributor with strong ownership.
6. Candidate must have hands-on experience with database migrations and observability tools such as Prometheus and Grafana.
7. Candidate must have working knowledge of Go/Python and Java.
8. Candidate should have working experience on a cloud platform: AWS.
9. Candidate should have a minimum of 1.5 years' stability per organization and a clear reason for relocation.
Description
Job Summary:
As a DevOps Engineer at the company, you will build and operate infrastructure at scale, design and implement a variety of tools that enable product teams to build and deploy their services independently, improve observability across the board, and design for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is the role for you.
Job Responsibilities:
- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs.
- Understanding the needs of stakeholders and conveying them to developers.
- Working on ways to automate and improve development and release processes.
- Identifying technical problems and developing software updates and ‘fixes’.
- Working with software developers to ensure that development follows established processes and works as intended.
- Do what it takes to keep the uptime above 99.99%.
- Understand DevOps philosophy and evangelize the principles across the organization.
- Communicate and collaborate effectively to break down silos.
Job Requirements:
- B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience.
- Minimum 4 years of experience working as a DevOps/Infrastructure Consultant.
- Strong background in operating systems like Linux.
- Understanding of the container orchestration tool Kubernetes.
- Proficient knowledge of configuration management tools like Ansible, Terraform, and Chef/Puppet; Prometheus, Grafana, and similar tools are an added advantage.
- Problem-solving attitude and the ability to write scripts in any scripting language.
- Understanding of programming languages like Go/Python and Java.
- Basic understanding of databases and middleware like Mongo/Redis/Cassandra/Elasticsearch/Kafka.
- Should be able to take ownership of tasks and be responsible.
- Good communication skills.
About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Job Summary
We are looking for a skilled and motivated DevOps Engineer with 3+ years of hands-on experience in AWS cloud infrastructure, CI/CD automation, and Kubernetes-based deployments. The ideal candidate will have strong expertise in Infrastructure as Code, containerization, monitoring, and automation, and will play a key role in ensuring high availability, scalability, and security of production systems.
Key Responsibilities
- Design, deploy, manage, and maintain AWS cloud infrastructure, including EC2, RDS, OpenSearch, VPC, S3, ALB, API Gateway, Lambda, SNS, and SQS.
- Build, manage, and operate Kubernetes (EKS) clusters and containerized workloads.
- Containerize applications using Docker and manage deployments with Helm charts
- Develop and maintain CI/CD pipelines using Jenkins for automated build and deployment processes
- Provision and manage infrastructure using Terraform (Infrastructure as Code); see the CI wrapper sketch after this list
- Implement and manage monitoring, logging, and alerting solutions using Prometheus and Grafana
- Write and maintain Python scripts for automation, monitoring, and operational tasks
- Ensure high availability, scalability, performance, and cost optimization of cloud resources
- Implement and follow security best practices across AWS and Kubernetes environments
- Troubleshoot production issues, perform root cause analysis, and support incident resolution
- Collaborate closely with development and QA teams to streamline deployment and release processes
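As an illustrative way to combine the Terraform and Python-scripting responsibilities above, the sketch below (the ./infra path is a placeholder, and it assumes terraform is on PATH and already initialized) gates a CI step on terraform plan's documented -detailed-exitcode semantics:

    # Illustrative sketch: fail CI when `terraform plan` errors, and surface
    # pending drift; assumes terraform is installed and `terraform init` ran.
    import subprocess
    import sys

    def terraform_plan(workdir):
        # -detailed-exitcode: 0 = no changes, 1 = error, 2 = changes pending
        result = subprocess.run(
            ["terraform", "plan", "-detailed-exitcode", "-input=false"],
            cwd=workdir,
        )
        return result.returncode

    if __name__ == "__main__":
        code = terraform_plan("./infra")  # placeholder path
        if code == 1:
            sys.exit("terraform plan failed")
        print("changes pending" if code == 2 else "infrastructure up to date")

Treating exit code 2 as "drift detected" rather than a failure lets a pipeline distinguish broken configuration from pending changes.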
Required Skills & Qualifications
- 3+ years of hands-on experience as a DevOps Engineer or Cloud Engineer.
- Strong experience with AWS services, including:
- EC2, RDS, OpenSearch, VPC, S3
- Application Load Balancer (ALB), API Gateway, Lambda
- SNS and SQS.
- Hands-on experience with AWS EKS (Kubernetes)
- Strong knowledge of Docker and Helm charts
- Experience with Terraform for infrastructure provisioning and management
- Solid experience building and managing CI/CD pipelines using Jenkins
- Practical experience with Prometheus and Grafana for monitoring and alerting
- Proficiency in Python scripting for automation and operational tasks
- Good understanding of Linux systems, networking concepts, and cloud security
- Strong problem-solving and troubleshooting skills
Good to Have (Preferred Skills)
- Exposure to GitOps practices
- Experience managing multi-environment setups (Dev, QA, UAT, Production)
- Knowledge of cloud cost optimization techniques
- Understanding of Kubernetes security best practices
- Experience with log aggregation tools (e.g., ELK/OpenSearch stack)
Language Preference
- Fluency in English is mandatory.
- Fluency in Hindi is preferred.