50+ Terraform Jobs in India
Apply to 50+ Terraform Jobs on CutShort.io. Find your next job, effortlessly. Browse Terraform Jobs and apply today!
Key Responsibilities
DevOps Strategy & Leadership
- Define and execute the end-to-end DevOps strategy for high-frequency trading and fintech platforms.
- Lead, mentor, and scale a high-performing DevOps team focused on automation, reliability, and performance.
- Partner closely with engineering and product leaders to ensure infrastructure strategy supports business and technical goals.
CI/CD & Infrastructure Automation
- Architect, implement, and optimize enterprise-grade CI/CD pipelines for ultra-low-latency trading systems.
- Drive Infrastructure as Code (IaC) adoption using Terraform, Helm, Kubernetes, and advanced automation toolsets.
- Establish robust release management, deployment workflows, and versioning best practices for mission‑critical environments.
Cloud & On‑Prem Infrastructure Management
- Design and manage hybrid infrastructures across AWS, GCP, and on-premises data centers, ensuring high availability and fault tolerance.
- Implement sophisticated networking strategies for low-latency workloads including routing optimization and performance tuning.
- Lead multi‑cloud scalability, cost optimization, and environment standardization initiatives.
Performance Monitoring & Optimization
- Oversee large-scale monitoring systems using Prometheus, Grafana, ELK, and related observability tools.
- Implement predictive alerting, automated remediation, and system‑wide health checks for zero‑downtime operations.
- Conduct root-cause analyses and performance tuning for systems processing millions of transactions per second.
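Performance tuning at this scale starts from tail latencies rather than averages. A minimal stdlib sketch (the sample data below is purely illustrative) of extracting p50/p99 from a batch of request latencies:

```python
import statistics

def latency_percentiles(samples_ms):
    """Return (p50, p99) from a list of request latencies in milliseconds."""
    # quantiles(n=100) yields the 1st..99th percentile cut points
    cuts = statistics.quantiles(samples_ms, n=100)
    return cuts[49], cuts[98]

# Synthetic sample: mostly fast requests with a long tail.
# The median stays at the typical value; p99 surfaces the tail.
sample = [10] * 990 + [250] * 10
p50, p99 = latency_percentiles(sample)
```

Averaging this sample would hide the 250 ms outliers entirely, which is why SLO-driven tuning tracks percentiles.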
Security & Compliance
- Champion DevSecOps practices and embed security across the entire development and deployment lifecycle.
- Ensure adherence to financial regulatory standards (SEBI and global frameworks) with strong audit and compliance mechanisms.
- Lead security automation efforts, vulnerability management, and advanced IAM policy implementation.
Required Skills & Qualifications
- 10+ years of DevOps experience, with 5+ years in a leadership capacity.
- Deep hands-on expertise in CI/CD tools such as Jenkins, GitLab CI/CD, and ArgoCD.
- Strong command of AWS, GCP, and hybrid cloud infrastructures.
- Expert-level knowledge of Kubernetes, Docker, and large-scale container orchestration.
- Advanced proficiency in Terraform, Helm, and overall IaC workflows.
- Strong Linux administration, networking fundamentals (TCP/IP, DNS, Firewalls), and system internals.
- Experience with monitoring and observability platforms (Prometheus, Grafana, ELK).
- Excellent scripting skills in Python, Bash, or Go for automation and tooling.
- Deep understanding of security principles, encryption, IAM, and compliance frameworks.
Good to Have
- Experience with ultra-low-latency or high-frequency trading systems.
- Knowledge of FIX protocol, FPGA acceleration, or network‑level optimizations.
- Familiarity with Redis, Nginx, or other high‑throughput systems.
- Exposure to micro‑second‑level performance tuning or network acceleration technologies.
Why Join Us?
- Be part of a team that consistently raises the bar and delivers exceptional engineering outcomes.
- A culture where innovation, ownership, and bold thinking are valued.
- Exceptional growth opportunities—ideal for someone who thrives in fast-paced, high-impact environments.
- Build systems that influence markets and redefine the fintech landscape.
This isn’t just a role—it’s a challenge, a platform, and a proving ground.
Ready to step up? Apply now.
Role Overview:
We are looking for a skilled DevOps Engineer to join our team. You will be responsible for managing and automating the deployment, monitoring, and scaling of our applications, ensuring high availability, security, and performance. The ideal candidate is passionate about automation, CI/CD, and cloud infrastructure.
Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines for development, testing, and production environments.
- Manage cloud infrastructure (AWS, Azure, GCP, or others) and ensure scalability, reliability, and security.
- Automate deployment, configuration management, and infrastructure provisioning using tools like Terraform, Ansible, or Chef.
- Monitor application performance and infrastructure health using tools like Prometheus, Grafana, ELK Stack, or Datadog.
- Collaborate with development and QA teams to streamline workflows and resolve deployment issues.
- Implement security best practices in pipelines, infrastructure, and cloud environments.
- Maintain version control and manage release cycles.
- Troubleshoot and resolve production issues efficiently.
Required Skills & Qualifications:
- Bachelor’s degree in Computer Science, IT, or related field.
- Proven experience in DevOps, system administration, or cloud engineering.
- Strong knowledge of CI/CD tools (Jenkins, GitLab CI/CD, CircleCI, etc.).
- Hands-on experience with containerization (Docker, Kubernetes).
- Experience with cloud platforms (AWS, Azure, or GCP).
- Scripting skills (Python, Bash, or PowerShell).
- Knowledge of infrastructure as code (Terraform, CloudFormation).
- Familiarity with monitoring and logging tools.
- Strong problem-solving, communication, and teamwork skills.
Preferred Qualifications:
- Experience with microservices architecture.
- Knowledge of networking, load balancing, and firewalls.
- Exposure to Agile/Scrum methodologies.
What We Offer:
- Competitive salary
- Flexible working hours and remote options.
- Learning and development opportunities.
- Collaborative and inclusive work environment.
Hiring: Cloud Engineer – MLOps Platform 🚨
📍 Location: Bangalore
🧠 Experience: 5–8 Years
We are looking for an experienced Cloud Engineer to support ML teams and drive end-to-end automation for model deployment across modern cloud platforms.
🔹 Tech Stack:
Azure | Databricks | AKS | ARO | Terraform | MLflow | CI/CD
🔹 Key Responsibilities:
• Build and maintain CI/CD and Continuous Training (CT) pipelines using Azure DevOps, GitHub Actions, or Jenkins.
• Deploy Databricks jobs, MLflow models, and microservices on AKS / ARO environments.
• Automate infrastructure using Terraform and GitOps practices.
• Manage Databricks workspaces, AKS clusters, and networking configurations.
• Implement monitoring, logging, and alerting systems for ML workloads.
• Ensure cloud security, governance, and cost optimization best practices.
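Deployment automation of the kind listed above usually wraps flaky network steps (job submission, rollout triggers) in retries with exponential backoff. A hedged stdlib sketch; `deploy_model` in the usage note is a hypothetical callable, not a Databricks or MLflow API:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call fn(); on failure, retry with exponential backoff (1s, 2s, 4s, ...).

    sleep is injectable so tests can record delays instead of waiting.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            sleep(base_delay * (2 ** attempt))
```

Usage would look like `with_retries(lambda: deploy_model("churn-v3"), attempts=5)`, where `deploy_model` is whatever triggers the actual rollout in your environment.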
🔹 Required Skills:
✔ Strong hands-on experience with Azure, AKS, ARO, and Databricks
✔ Experience with MLflow and Kubernetes-based deployments
✔ Proficiency in Python and Bash / PowerShell scripting
✔ Strong understanding of cloud security, infrastructure automation, and distributed systems
Key Responsibilities
- Design, implement, and maintain highly available infrastructure on AWS.
- Automate infrastructure provisioning using Terraform (Infrastructure as Code).
- Define and monitor SLIs, SLOs, and error budgets to improve service reliability.
- Build and manage CI/CD pipelines to enable safe and frequent deployments.
- Implement robust monitoring, alerting, and logging solutions.
- Perform incident response, root cause analysis (RCA), and postmortems.
- Improve system resilience through automation and self-healing mechanisms.
- Optimize cloud resource utilization and cost (FinOps awareness).
- Collaborate with development teams to improve application reliability.
- Manage containerized workloads using Docker and Kubernetes (EKS preferred).
- Implement security and compliance best practices across infrastructure.
- Maintain operational runbooks and documentation.
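The error-budget bullet above reduces to simple arithmetic: an availability SLO over a window implies a fixed allowance of downtime. A quick sketch:

```python
def error_budget_minutes(slo_percent, window_days=30):
    """Allowed downtime in minutes for an availability SLO over a window.

    e.g. a 99.9% SLO over 30 days allows ~43.2 minutes of downtime.
    """
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo_percent / 100)
```

Burning through most of the budget early in the window is the usual signal to freeze risky deploys until reliability recovers.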
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 7–8 years of experience in SRE, DevOps, or Production Engineering.
- Strong hands-on experience with AWS services.
- Proven experience with Terraform for infrastructure automation.
- Experience building CI/CD pipelines (GitHub Actions, Jenkins, or similar).
- Strong scripting skills (Python, Bash, or Shell).
- Experience with Linux system administration.
- Hands-on experience with monitoring and observability tools.
- Good understanding of networking and cloud security fundamentals.
- Experience with Git and branching strategies
Job opportunity for Developer -Python Full Stack with Siemens at Bangalore.
Interview Process:
1st round of interview - F2F (in-Person)-Technical
2nd round of interview – F2F /Virtual Interview - Technical
3rd round of interview – Virtual Interview – Technical + HR
Job Title / Designation: Developer -Python Full Stack
Employment Type: Full Time, Permanent
Location: Bangalore
Experience: 3-5 Years
Job Description: Developer - Python Full Stack
We are looking for a Python full stack expert with proven 5+ years of experience in developing automation solutions in Linux-based environments. You should be capable of developing Python-based web applications or automation solutions, with excellent knowledge of DB handling and decent knowledge of Kubernetes-based deployment environments.
Required Skills:
- Solid experience in Python back-end technology
- Sound experience in web application development
- Decent knowledge and experience in UI development using JavaScript, React/Angular or related tech stack.
- Strong understanding of software design patterns and testing principles
- Ability to learn and adapt to working with multiple programming languages.
- Experience with Docker, ArgoCD, Kubernetes, and Terraform
- Understanding of ETL processes to extract data from different data sources is a plus.
- Proven experience in Linux development environments using Python.
- Excellent knowledge in interacting with database systems (SQL, NoSQL) and webservices (REST)
- Experienced in establishing an optimized CI/CD environment relevant to the project.
- Good knowledge of repository management tools like Git, Bitbucket, etc.
- Excellent debugging skills/strategies.
- Excellent communication skills
- Experienced in working in an Agile environment.
Nice to have
- Good knowledge of the Eclipse IDE; experience developing add-ons/plugins on the Eclipse platform.
- Knowledge of 93K Semiconductor test platforms
- Good know-how of agile management tools like Jira, Azure DevOps.
- Good knowledge of RHEL
- Knowledge of JIRA administration
We are looking for a skilled DevOps Engineer with hands-on experience in cloud platforms, CI/CD pipelines, container orchestration, and infrastructure automation. The ideal candidate is someone who loves solving reliability challenges, automating everything, and ensuring seamless delivery across environments.
Key Responsibilities
- Design, implement, and maintain CI/CD pipelines using GitHub Actions, Jenkins, and GitHub.
- Manage and optimize infrastructure on AWS/GCP, ensuring scalability, security, and high availability.
- Deploy and manage containerized applications using Docker and Kubernetes.
- Build, automate, and manage infrastructure as code using Terraform.
- Configure and manage automation tools and workflows using Ansible.
- Monitor system performance, troubleshoot production issues, and ensure smooth operations.
- Implement best practices for code management, release processes, and DevOps standards.
- Collaborate closely with development teams to improve build pipelines and deployment workflows.
- Write scripts in Python/Bash to automate operational tasks.
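As a flavor of the scripted automation in the last bullet, a small stdlib sketch that prunes stale log files (the directory layout and retention period are illustrative):

```python
import time
from pathlib import Path

def prune_old_files(directory, max_age_days, now=None):
    """Delete files in directory older than max_age_days (non-recursive).

    Returns the sorted names of the files removed; now is injectable
    so tests can pin the clock.
    """
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400
    removed = []
    for path in Path(directory).iterdir():
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path.name)
    return sorted(removed)
```

In practice a task like this runs from cron or a CI schedule, with the retention window driven by the team's log-retention policy.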
Required Skills & Experience
- 2+ years of hands-on experience as a DevOps Engineer or in a similar role.
- Strong expertise in AWS or GCP cloud services.
- Solid understanding of Kubernetes (deployment, scaling, service mesh, packaging).
- Proficiency with Terraform for infrastructure automation.
- Experience with Git, GitHub, and GitHub Actions for source control and CI/CD.
- Good knowledge of Jenkins pipelines and automation.
- Hands-on experience with Ansible for configuration management.
- Strong scripting skills using Python or Bash.
- Understanding of monitoring, logging, and security best practices.
Hiring: AWS DevOps Developer
📍 Location: Bangalore
🧑💻 Experience: 4–7 Years
📌 Job Summary
We are looking for a skilled AWS DevOps Developer with strong experience in AWS cloud infrastructure, CI/CD automation, containerization, and Infrastructure as Code. The ideal candidate should have hands-on experience building scalable and secure cloud environments.
🛠 Required Technical Skills
☁️ AWS Services
- Amazon EC2
- Amazon S3
- IAM
- VPC
- Amazon EKS
- RDS
- Route 53
- CloudWatch
- AWS Lambda
🔄 DevOps & CI/CD
- Jenkins (Pipelines, Shared Libraries)
- Git / GitHub
- Maven / Build tools
- CI/CD pipeline design & implementation
🐳 Containers & Orchestration
- Docker
- Kubernetes (EKS preferred)
- Helm
🏗 Infrastructure as Code
- Terraform
- Ansible
📊 Monitoring & Logging
- CloudWatch
- Prometheus
- Grafana
📋 Roles & Responsibilities
- Design and implement scalable AWS infrastructure
- Build and maintain CI/CD pipelines
- Deploy containerized applications using Docker & Kubernetes
- Automate infrastructure provisioning using Terraform
- Implement monitoring and alerting solutions
- Ensure security, compliance, and cost optimization
- Troubleshoot production issues and improve system reliability
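Alerting along the lines of the responsibilities above typically follows CloudWatch's model: an alarm fires when at least M of the last N datapoints breach a threshold. A stdlib sketch of that evaluation logic (not the AWS API itself):

```python
def alarm_state(datapoints, threshold, evaluation_periods,
                datapoints_to_alarm=None):
    """Evaluate a CloudWatch-style threshold alarm.

    Returns 'ALARM' when at least datapoints_to_alarm of the last
    evaluation_periods datapoints exceed the threshold, else 'OK'.
    """
    if datapoints_to_alarm is None:
        datapoints_to_alarm = evaluation_periods  # default: all must breach
    window = datapoints[-evaluation_periods:]
    breaches = sum(1 for v in window if v > threshold)
    return "ALARM" if breaches >= datapoints_to_alarm else "OK"
```

Requiring M-of-N rather than a single breach is what keeps one noisy datapoint from paging anyone.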
➕ Good to Have
- AWS Certification (Solutions Architect / DevOps Engineer)
- Experience with Microservices architecture
- Knowledge of DevSecOps practices
- Experience in Agile methodology
A DevOps Engineer plays a crucial role in enhancing and streamlining IT operations, particularly in the context of cloud computing and agile software development.
Responsible for managing cloud infrastructure and CI/CD pipelines.
Works as an integral member of the software delivery team, delivering world-class, scalable, and robust solutions.
Major Functions/Responsibilities:
- Design, Implement, and manage CI/CD pipelines on Microsoft Azure using Azure DevOps Services
- Automate deployment processes and infrastructure management using tools like Terraform, ARM templates or Azure CLI
- Create, configure, and execute on-going or newly proposed processes for multiple projects.
- Manage version control systems and ensure proper branching, merging and release workflows.
- Collaborate with development teams to design and implement cloud-native applications and scalable solutions.
- Manage Azure services like Azure Functions, App services, Azure Kubernetes Service (AKS) and Azure Container Registry (ACR)
- Implement monitoring and logging strategies using tools like Azure Monitor, Azure Log analytics and Application Insights.
- Identify areas for improvement within processes and practices.
- Optimize Azure resources to minimize costs while ensuring performance and scalability
- Strong problem-solving skills and the ability to troubleshoot complex issues.
- Create system documentation for training and reference.
- Implement security best practices including RBAC, Key Vault, Network Security, identity & access management.
Recommended Education/Experience/ Skills:
- 6+ years of DevOps experience in improving efficiency and achieving Continuous Integration, Continuous Testing and Continuous Deployment.
- 2+ years of experience with infrastructure automation tools (such as Terraform, Ansible, Chef).
- Knowledge of AI in the context of NLP or other specialized AI fields and experience in AI based applications
- Experience with Docker, Azure Kubernetes Service for container management.
- Experience with Windows and Linux system administration with knowledge of installations, performance tuning, security, and shell scripting.
- Proven experience in building DevOps infrastructure and creating multiple environments.
- Experience with or strong understanding of modern service-oriented architecture.
- Experience with scripting languages such as PowerShell, Python or Bash
- Knowledge of network load balancers.
- Understanding of cloud security principles.
- Experience with SDLC Management software and solutions and knowledge of Agile and Scrum methodologies
- Experience in integration of automated testing and deployment for cloud-based applications with Continuous Integration tools.
- Experience in any modern language (C#, HTML, CSS, Java, etc).
- Collaboration and communication skills to work with cross-functional teams.
- Expertise in version control systems like Azure DevOps, Git managing repositories.
- Relevant DevOps-related certifications, such as Azure DevOps Expert, Azure AI Services, etc.
Description
SRE Engineer
Role Overview
As a Site Reliability Engineer, you will play a critical role in ensuring the availability and performance of our customer-facing platform. You will work closely with DevOps, DBA, and Development teams to provision and maintain infrastructure, deploy and monitor our applications, and automate workflows. Your contributions will have a direct impact on customer satisfaction and overall experience.
Responsibilities and Deliverables
• Manage, monitor, and maintain highly available systems (Windows and Linux)
• Analyze metrics and trends to ensure rapid scalability.
• Address routine service requests while identifying ways to automate and simplify.
• Create infrastructure as code using Terraform, ARM Templates, CloudFormation.
• Maintain data backups and disaster recovery plans.
• Design and deploy CI/CD pipelines using GitHub Actions, Octopus, Ansible, Jenkins, Azure DevOps.
• Adhere to security best practices through all stages of the software development lifecycle
• Follow and champion ITIL best practices and standards.
• Become a resource for emerging and existing cloud technologies with a focus on AWS.
Organizational Alignment
• Reports to the Senior SRE Manager
• This role involves close collaboration with DevOps, DBA, and security teams.
Technical Proficiencies
• Hands-on experience with AWS is a must-have.
• Proficiency in analyzing application, IIS, system, and security logs, as well as CloudTrail events
• Practical experience with CI/CD tools such as GitHub Actions, Jenkins, Octopus
• Experience with observability tools such as New Relic, Application Insights, AppDynamics, or DataDog.
• Experience maintaining and administering Windows, Linux, and Kubernetes.
• Experience in automation using scripting languages such as Bash, PowerShell, or Python.
• Configuration management experience using Ansible, Terraform, Azure Automation Runbooks, or similar.
• Experience with SQL Server database maintenance and administration is preferred.
• Good Understanding of networking (VNET, subnet, private link, VNET peering).
• Familiarity with cloud concepts including certificates, OAuth, Azure AD, ASE, ASP, AKS, Azure Apps, Load Balancers, Application Gateway, Firewall, API Management, SQL Server, and databases on Azure
Experience
• 7+ years of experience in SRE or System Administration role
• Demonstrated ability to build and support high-availability Windows/Linux servers, with emphasis on the WISA stack (Windows/IIS/SQL Server/ASP.NET)
• 3+ years of experience working with cloud technologies including AWS, Azure.
• 1+ years of experience working with container technology including Docker and Kubernetes.
• Comfortable using Scrum, Kanban, or Lean methodologies.
Education
• Bachelor’s Degree or College Diploma in Computer Science, Information Systems, or equivalent experience.
Additional Job Details:
• Working hours: 2:00 PM / 3:00 PM to 11:30 PM IST
• Interview process: 3 technical rounds
• Work model: 3 days per week work from office
Role Summary
We are looking for a skilled DevSecOps Engineer to design, implement, and secure scalable CI/CD pipelines and cloud infrastructure on Google Cloud Platform. The role focuses on secure application delivery using Cloud Run, GKE, Terraform, and integrated DevSecOps practices to ensure compliance, reliability, and performance.
Key Responsibilities
- Design and manage secure CI/CD pipelines using Cloud Build, Jenkins, or Tekton
- Provision and manage GCP infrastructure using Terraform (IaC)
- Deploy and manage containerized applications on Cloud Run and GKE
- Implement container security, vulnerability scanning, SAST/DAST, and dependency scanning
- Enforce IAM, VPC, and cloud security best practices
- Monitor, log, and troubleshoot environments for performance and reliability
- Enable development teams with DevSecOps frameworks and governance standards
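A common way to wire the scanning responsibilities above into CI/CD is a gate step that fails the build on findings at or above a severity floor. A hedged sketch; the report format here is a made-up minimal JSON, not the actual output of Checkmarx or any specific scanner:

```python
import json

SEVERITY_RANK = {"LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}

def gate_findings(report_json, fail_at="HIGH"):
    """Return IDs of findings at or above the fail_at severity.

    An empty list means the pipeline may proceed. Expects a JSON array
    of objects with "id" and "severity" keys (illustrative schema).
    """
    findings = json.loads(report_json)
    floor = SEVERITY_RANK[fail_at]
    return [f["id"] for f in findings
            if SEVERITY_RANK.get(f["severity"].upper(), 0) >= floor]
```

The CI job would exit non-zero whenever the returned list is non-empty, blocking the deploy until the findings are triaged.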
Relevant Skills
- Cloud: Google Cloud Platform (GKE, Cloud Run, IAM, VPC, Cloud Build, Artifact Registry)
- CI/CD Tools: Jenkins, Tekton, Cloud Build
- Infrastructure as Code: Terraform
- Containers & Orchestration: Docker, Kubernetes (GKE)
- Security Tools: Checkmarx (SAST/DAST), FOSSA, container vulnerability scanning tools
- Monitoring & Observability: GCP Operations Suite (Cloud Monitoring & Logging)
- Version Control: Git, branch and release management strategies
- Other: DevSecOps practices, compliance automation, release orchestration
As a Senior Platform Engineer at Brevo, you are a systems architect who treats infrastructure as software. Your goal is to build a resilient, self-service data platform that allows our engineering teams to ship faster without worrying about the underlying storage complexities. You will bridge the gap between deep Linux internals and high-level automation.
Your Impact at Brevo:
- Own the delivery and lifecycle management of data-storage platforms, ensuring they meet product requirements for performance, reliability, security, and scale.
- Bring a product mindset by balancing user needs, operational excellence, and long-term sustainability.
- Architect, implement, and automate data-storage infrastructure (e.g., SQL, NoSQL, object, and file storage) using infrastructure-as-code and configuration-management tools such as Terraform and Ansible.
- Drive platform reliability and resilience, including installation, configuration, parameter tuning, observability, and continuous performance optimisation for all supported data engines.
- Develop, evolve, and enforce robust backup, disaster recovery, and business-continuity strategies, ensuring data durability and compliance with internal and external standards.
- Collaborate cross-functionally with product, engineering, and security teams to understand workload characteristics, define SLOs/SLAs, and deliver platform capabilities that accelerate product development.
- Lead automation initiatives across storage-related CI/CD and operational workflows, reducing manual overhead, improving consistency, and increasing delivery velocity.
- Work on production issues, perform deep root-cause analysis, and drive long-term fixes and systemic improvements across the platform.
- Produce and maintain high-quality documentation of architectures, operational procedures, standards, and best practices to ensure platform transparency and reproducibility.
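The backup and disaster-recovery responsibilities above are easiest to keep honest with an automated freshness check against the RPO. A stdlib sketch (the engine names and timestamps in the usage are illustrative):

```python
import time

def stale_backups(last_backup_times, rpo_hours, now=None):
    """Return names whose most recent backup is older than the RPO window.

    last_backup_times maps name -> unix timestamp of the latest backup;
    now is injectable so tests can pin the clock.
    """
    now = time.time() if now is None else now
    cutoff = now - rpo_hours * 3600
    return sorted(name for name, ts in last_backup_times.items() if ts < cutoff)
```

Run on a schedule and wired to alerting, a check like this catches silently broken backup jobs before a restore is ever needed.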
Who you are:
- You have 6-9 years of relevant experience.
- Strong expertise in Linux systems engineering and operating mission-critical services at scale.
- Hands-on experience managing and tuning multiple classes of data engines, such as PostgreSQL, MongoDB, ClickHouse, ScyllaDB, MinIO, or equivalent SQL/NoSQL/object-storage technologies.
- Deep understanding of backup, restore, replication, and disaster-recovery patterns, along with experience implementing them in distributed environments.
- Skilled in building end-to-end automation using scripting languages and IaC tools (Ansible, Bash, and Terraform), with demonstrated expertise in architecting reusable, scalable, and maintainable automation frameworks for complex environments.
- Excellent diagnostic and problem-solving skills, capable of navigating complex storage performance issues, distributed-system failures, and production incidents.
- Experience with key SRE principles: Monitoring & Observability, SLO/SLA/SLI, error budgets, incident response, and postmortems.
- A platform and product mindset - thinking in terms of self-service, reliability, APIs, user experience, and long-term platform evolution.
Why people love working at Brevo:
- A place to grow, together: Join a diverse, international team in a modern office buzzing with energy and ideas.
- Practical perks for everyday balance: Cab Facility and per-day meal vouchers; Employee-friendly salary structure; 1.4x pay on holidays/weekends for critical work; excellent referral program with high-value gift options (e.g., bike, flight tickets)
- Learning, every step of the way: Over 1,55,000 courses on Udemy, along with facilitator-led sessions, and personalised training programs tailored to individual and team needs
- Flexible for life: A hybrid setup (2 days WFH), budget for your home workspace, and a generous leave policy to help you balance life and work effortlessly
- Wellbeing that works: INR 10 Lakh medical insurance, maternity support, childcare facilities, and wellness bonuses to keep you and your family covered
- A culture that cares: Annual company off-sites, inter-office trips, team outings, active social, green, and LGBTQIA+ communities, along with festive celebrations - all making Brevo a vibrant and inclusive workplace.

Consumer Internet, Technology & Travel and Tourism Platform
Job Details
- Job Title: Lead DevOps Engineer
- Industry: Consumer Internet, Technology & Travel and Tourism Platform
- Function - IT
- Experience Required: 7-10 years
- Employment Type: Full Time
- Job Location: Bengaluru
- CTC Range: Best in Industry
Criteria:
- Strong Lead DevOps / Infrastructure Engineer Profiles.
- Must have 7+ years of hands-on experience working as a DevOps / Infrastructure Engineer.
- Candidate’s current title must be Lead DevOps Engineer (or an equivalent Lead role) in their current organization.
- Must have minimum 2+ years of team management / technical leadership experience, including mentoring engineers, driving infrastructure decisions, or leading DevOps initiatives.
- Must have strong hands-on experience with Kubernetes (container orchestration) including deployment, scaling, and cluster management.
- Must have experience with Infrastructure as Code (IaC) tools such as Terraform, Ansible, Chef, or Puppet.
- Must have strong scripting and automation experience using Python, Go, Bash, or similar scripting languages.
- Must have working experience with distributed databases or data systems such as MongoDB, Redis, Cassandra, Elasticsearch, or Kafka.
- Must have strong hands-on experience in Observability & Monitoring, CI/CD architecture, and Networking concepts in production environments.
- (Company) – Must be from B2C Product Companies only.
- (Education) – B.E/ B.Tech
Preferred
- Experience working in microservices architecture and event-driven systems.
- Exposure to cloud infrastructure, scalability, reliability, and cost optimization practices.
- (Skills) – Understanding of programming languages such as Go, Python, or Java.
- (Environment) – Experience working in high-growth startup or large-scale production environments.
Job Description
As a DevOps Engineer, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.
Job Responsibilities:
- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
- Codify our infrastructure
- Do what it takes to keep the uptime above 99.99%
- Understand the bigger picture and sail through the ambiguities
- Scale technology considering cost and observability and manage end-to-end processes
- Understand DevOps philosophy and evangelize the principles across the organization
- Strong communication and collaboration skills to break down the silos

Consumer Internet, Technology & Travel and Tourism Platform
Job Details
- Job Title: Senior DevOps Engineer
- Industry: Consumer Internet, Technology & Travel and Tourism Platform
- Function - IT
- Experience Required: 4-7 years
- Employment Type: Full Time
- Job Location: Bengaluru
- CTC Range: Best in Industry
Criteria:
- Strong DevOps / Infrastructure Engineer Profiles.
- Must have 4+ years of hands-on experience working as a DevOps Engineer / Infrastructure Engineer / SRE / DevOps Consultant.
- Must have hands-on experience with Kubernetes and Docker, including deployment, scaling, or containerized application management.
- Must have experience with Infrastructure as Code (IaC) or configuration management tools such as Terraform, Ansible, Chef, or Puppet.
- Must have strong automation and scripting experience using Python, Go, Bash, Shell, or similar scripting languages.
- Must have working experience with distributed databases or data systems such as MongoDB, Redis, Cassandra, Elasticsearch, or Kafka.
- Candidate must demonstrate strong expertise in at least one of the following areas: Databases / Distributed Data Systems, Observability & Monitoring, CI/CD Pipelines, Networking Concepts, Kubernetes / Container Platforms
- Candidates must be from B2C Product-based companies only.
- (Education) – BE / B.Tech or equivalent
Preferred
- Experience working with microservices or event-driven architectures.
- Exposure to cloud infrastructure, monitoring, reliability, and scalability practices.
- (Skills) – Understanding of programming languages such as Go, Python, or Java.
- Preferred (Environment) – Experience working in high-scale production or fast-growing product startups.
Job Description
As a DevOps Engineer, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.
Job Responsibilities:
- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
- Codify our infrastructure
- Do what it takes to keep the uptime above 99.99%
- Understand the bigger picture and sail through the ambiguities
- Scale technology considering cost and observability and manage end-to-end processes
- Understand DevOps philosophy and evangelize the principles across the organization
- Apply strong communication and collaboration skills to break down silos
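The 99.99% uptime goal above implies a concrete downtime budget. A minimal back-of-envelope sketch in Python (the function name and the 365-day window are illustrative, not from the posting):

```python
# Downtime budget implied by an availability target.
# 99.99% ("four nines") is the target named in the responsibilities above.
def allowed_downtime_minutes(availability: float, days: int = 365) -> float:
    """Return the downtime budget in minutes over `days` days."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability)

# Four nines leaves roughly 52.6 minutes of downtime per year.
print(round(allowed_downtime_minutes(0.9999), 1))  # 52.6
```

This is why "keep the uptime above 99.99%" is less a monitoring problem than an automation one: an on-call human cannot spend the whole yearly budget on a single slow manual recovery.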
About the role:
We are looking for a skilled and driven Security Engineer to join our growing security team. This role requires a hands-on professional who can evaluate and strengthen the security posture of our
applications and infrastructure across Web, Android, iOS, APIs, and cloud-native environments.
The ideal candidate will also lead technical triage from our bug bounty program, integrate security into the DevOps lifecycle, and contribute to building a security-first engineering culture.
Required Skills & Experience:
● 3 to 6 years of solid hands-on experience in the VAPT domain
● Solid understanding of Web, Android, and iOS application security
● Experience with DevSecOps tools and integrating security into CI/CD
● Strong knowledge of cloud platforms (AWS/GCP/Azure) and their security models
● Familiarity with bug bounty programs and responsible disclosure practices
● Familiarity with tools like Burp Suite, MobSF, OWASP ZAP, Terraform, Checkov, etc.
● Good knowledge of API security
● Scripting experience (Python, Bash, or similar) for automation tasks
Preferred Qualifications:
● OSCP, CEH, AWS Security Specialty, or similar certifications
● Experience working in a regulated environment (e.g., FinTech, InsurTech)
Responsibilities:
● Perform security reviews, vulnerability assessments & penetration testing for Web, Android, iOS, and API endpoints
● Perform threat modelling, anticipate potential attack vectors, and improve security architecture on complex or cross-functional components
● Identify and remediate OWASP Top 10 and mobile-specific vulnerabilities
● Conduct secure code reviews and red team assessments
● Integrate SAST, DAST, SCA, and secret scanning tools into CI/CD pipelines
● Automate security checks using tools like SonarQube, Snyk, Trivy, etc.
● Maintain and manage vulnerability scanning infrastructure
● Perform security assessments of AWS, Azure, and GCP environments, with an emphasis on container security, particularly for Docker and Kubernetes
● Implement guardrails for IAM, network segmentation, encryption, and cloud monitoring
● Contribute to infrastructure hardening for containers, Kubernetes, and virtual machines
● Triage bug bounty reports and coordinate remediation with engineering teams
● Act as the primary responder for external security disclosures
● Maintain documentation and metrics related to bug bounty and penetration testing activities
● Collaborate with developers and architects to ensure secure design decisions
● Lead security design reviews for new features and products
● Provide actionable risk assessments and mitigation plans to stakeholders
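The scripting requirement above ("Python, Bash, or similar for automation tasks") often looks like small policy checks in practice. A hedged sketch that flags missing HTTP security headers; the header list and function name are chosen for illustration, not taken from the posting:

```python
# Sketch of a security-automation check: given response headers,
# report which commonly required security headers are missing.
# REQUIRED_HEADERS is an illustrative policy, not a complete standard.
REQUIRED_HEADERS = {
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
}

def missing_security_headers(headers: dict) -> set:
    """Return required security headers absent from `headers`."""
    return REQUIRED_HEADERS - set(headers)

print(sorted(missing_security_headers({"X-Content-Type-Options": "nosniff"})))
# ['Content-Security-Policy', 'Strict-Transport-Security']
```

In a CI/CD context, a check like this would run against staging responses and fail the pipeline, which is the "integrate security into the DevOps lifecycle" idea in miniature.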
Job Details
- Job Title: Lead I - Data Engineering (Python, AWS Glue, PySpark, Terraform)
- Industry: Global digital transformation solutions provider
- Domain - Information technology (IT)
- Experience Required: 5-7 years
- Employment Type: Full Time
- Job Location: Hyderabad
- CTC Range: Best in Industry
Job Description
Data Engineer with AWS, Python, Glue, Terraform, Step Functions, and Spark
Skills: Python, AWS Glue, PySpark, Terraform - all are mandatory
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Hyderabad
About the role:
We are looking for a Staff Site Reliability Engineer who can operate at a staff level across multiple teams and clients. If you care about designing reliable platforms, influencing system architecture, and raising reliability standards across teams, you’ll enjoy working at One2N.
At One2N, you will work with our startups and enterprise clients, solving One-to-N scale problems where the proof of concept is already established and the focus is on scalability, maintainability, and long-term reliability. In this role, you will drive reliability, observability, and infrastructure architecture across systems, influencing design decisions, defining best practices, and guiding teams to build resilient, production-grade systems.
Key responsibilities:
- Own and drive reliability and infrastructure strategy across multiple products or client engagements
- Design and evolve platform engineering and self-serve infrastructure patterns used by product engineering teams
- Lead architecture discussions around observability, scalability, availability, and cost efficiency.
- Define and standardize monitoring, alerting, SLOs/SLIs, and incident management practices.
- Build and review production-grade CI/CD and IaC systems used across teams
- Act as an escalation point for complex production issues and incident retrospectives.
- Partner closely with engineering leads, product teams, and clients to influence system design decisions early.
- Mentor junior engineers through design reviews, technical guidance, and best practices.
- Improve Developer Experience (DX) by reducing cognitive load, toil, and operational friction.
- Help teams mature their on-call processes, reliability culture, and operational ownership.
- Stay ahead of trends in cloud-native infrastructure, observability, and platform engineering, and bring relevant ideas into practice
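The SLO/SLI and error-budget practices mentioned above can be sketched in a few lines of Python (a simplified request-based model; function and parameter names are illustrative):

```python
# Error-budget bookkeeping for a request-based SLO: how much of the
# allowed failure budget remains for a window of traffic.
def error_budget_remaining(slo: float, good: int, total: int) -> float:
    """Fraction of the error budget left (1.0 = untouched, <= 0 = exhausted)."""
    budget = (1 - slo) * total   # allowed bad requests in this window
    bad = total - good           # observed bad requests
    return 1 - bad / budget

# A 99.9% SLO over 1M requests allows 1,000 failures; 500 observed
# failures leave about half the budget.
print(error_budget_remaining(0.999, good=999_500, total=1_000_000))
```

Standardizing this arithmetic across teams is what makes alerting and incident prioritization comparable between products, which is the point of defining shared SLOs/SLIs in the first place.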
About you:
- 9+ years of experience in SRE, DevOps, or software engineering roles
- Strong experience designing and operating Kubernetes-based systems on AWS at scale
- Deep hands-on expertise in observability and telemetry, including tools like OpenTelemetry, Datadog, Grafana, Prometheus, ELK, Honeycomb, or similar.
- Proven experience with infrastructure as code (Terraform, Pulumi) and cloud architecture design.
- Strong understanding of distributed systems, microservices, and containerized workloads.
- Ability to write and review production-quality code (Golang, Python, Java, or similar)
- Solid Linux fundamentals and experience debugging complex system-level issues
- Experience driving cross-team technical initiatives.
- Excellent analytical and problem-solving skills, keen attention to detail, and a passion for continuous improvement.
- Strong written, communication, and collaboration skills, with the ability to work effectively in a fast-paced, agile environment.
Nice to have:
- Experience working in consulting or multi-client environments.
- Exposure to cost optimization, or large-scale AWS account management
- Experience building internal platforms or shared infrastructure used by multiple teams.
- Prior experience influencing or defining engineering standards across organizations.
JOB DETAILS:
* Job Title: Lead I - Azure, Terraform, GitLab CI
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 3-5 years
* Location: Trivandrum/Pune
Job Description
Job Title: DevOps Engineer
Experience: 4–8 Years
Location: Trivandrum & Pune
Job Type: Full-Time
Mandatory skills: Azure, Terraform, GitLab CI, Splunk
Job Description
We are looking for an experienced and driven DevOps Engineer with 4 to 8 years of experience to join our team in Trivandrum or Pune. The ideal candidate will take ownership of automating cloud infrastructure, maintaining CI/CD pipelines, and implementing monitoring solutions to support scalable and reliable software delivery in a cloud-first environment.
Key Responsibilities
- Design, manage, and automate Azure cloud infrastructure using Terraform.
- Develop scalable, reusable, and version-controlled Infrastructure as Code (IaC) modules.
- Implement monitoring and logging solutions using Splunk, Azure Monitor, and Dynatrace.
- Build and maintain secure and efficient CI/CD pipelines using GitLab CI or Harness.
- Collaborate with cross-functional teams to enable smooth deployment workflows and infrastructure updates.
- Analyze system logs, performance metrics, and alerts to troubleshoot and optimize performance.
- Ensure infrastructure security, compliance, and scalability best practices are followed.
Mandatory Skills
Candidates must have hands-on experience with the following technologies:
- Azure – Cloud infrastructure management and deployment
- Terraform – Infrastructure as Code for scalable provisioning
- GitLab CI – Pipeline development, automation, and integration
- Splunk – Monitoring, logging, and troubleshooting production systems
Preferred Skills
- Experience with Harness (for CI/CD)
- Familiarity with Azure Monitor and Dynatrace
- Scripting proficiency in Python, Bash, or PowerShell
- Understanding of DevOps best practices, containerization, and microservices architecture
- Exposure to Agile and collaborative development environments
Skills Summary
Mandatory: Azure, Terraform, GitLab CI, Splunk. Additional: Harness, Azure Monitor, Dynatrace, Python, Bash, PowerShell
Skills: Azure, Splunk, Terraform, GitLab CI
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Trivandrum/Pune
- 3+ years hands-on Azure cloud & automation experience.
- Experience managing high-availability enterprise systems.
- Microsoft Azure (AKS, VNets, App Gateway, Load Balancers).
- Kubernetes (AKS) & Docker.
- Networking (VPN, DNS, routing, firewalls, NSGs).
- Infra-as-Code (Terraform / Bicep optional).
- Monitoring tools: Azure Monitor, Grafana, Prometheus.
- CI/CD: Azure DevOps, GitLab/Jenkins (added advantage).
- Security: Key Vault, certificates, encryption, RBAC.
- Understanding of PostgreSQL/PostGIS networking.
- Design and manage Azure infrastructure (VMs, VNets, NSGs, Load Balancers, AKS, Storage).
- Deploy and maintain AKS workloads for NiFi, PostGIS, and microservices.
- Architect secure network topology including VNet peering, VPNs, Private Endpoints, DNS & Zero Trust policies.
- Implement monitoring and alerting using Azure Monitor, Log Analytics, Grafana & Prometheus.
- Ensure high uptime, DR planning, backup and failover strategies.
- Automate deployments with Azure DevOps, Helm, ArgoCD & GitOps principles.
- Enforce security, RBAC, compliance, and audit standards across environments.
- Good to have knowledge/experience in Linux administration (Ubuntu/Debian).
Job Location: Bangalore/Mumbai
Exp: 3-10+ Yrs
Job Title: DevOps Engineer
About TradeLab:
TradeLab is a leading fintech technology provider, delivering cutting-edge solutions to brokers, banks, and fintech platforms. Our portfolio includes high-performance Order & Risk Management Systems (ORMS), seamless MetaTrader integrations, AI-driven customer engagement platforms such as PULSE LLaVA, and compliance-grade risk management solutions.
Key Responsibilities
- DevOps Strategy & Leadership
- Contribute to defining and executing the DevOps strategy for high-frequency trading and fintech platforms. Mentor junior engineers and collaborate with cross-functional teams to foster a culture of automation, scalability, and performance.
- Work closely with engineering and product teams to align infrastructure initiatives with business objectives.
- CI/CD and Infrastructure Automation Design and optimize CI/CD pipelines for ultra-low-latency trading systems.
- Implement Infrastructure as Code (IaC) practices using Terraform, Helm, Kubernetes, and automation frameworks.
- Establish best practices for release management and deployment in mission-critical environments.
- Cloud & On-Prem Infrastructure Management Manage hybrid infrastructure across AWS, GCP, and on-prem data centers ensuring high availability and fault tolerance.
- Implement networking strategies for low-latency trading, including routing and performance tuning.
- Drive cost optimization and scalability initiatives across multi-cloud environments.
- Performance Monitoring & Optimization Set up and maintain system performance monitoring using Prometheus, Grafana, and ELK stack. Implement alerting and automated remediation strategies for zero-downtime operations.
- Conduct root-cause analysis and performance tuning for systems handling millions of transactions per second.
- Security & Compliance Apply DevSecOps principles across all environments.
- Ensure compliance with financial regulations (SEBI and global standards) and maintain audit trails.
- Drive security automation, vulnerability management, and IAM policies.
Required Skills & Qualifications
- 3–8 years of experience in DevOps, with exposure to leadership or team mentoring.
- Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD). Hands-on experience with cloud platforms (AWS, GCP) and hybrid infrastructure.
- Proficiency in Kubernetes, Docker, and container orchestration. Solid experience with Terraform, Helm, and IaC principles.
- Strong Linux administration and networking fundamentals (TCP/IP, DNS, firewalls). Experience with monitoring tools (Prometheus, Grafana, ELK).
- Proficiency in scripting languages (Python, Bash, Go) for automation. Understanding of security best practices, IAM, and compliance frameworks.
Good to Have
- Exposure to ultra-low-latency trading infrastructure or high-frequency trading systems.
- Knowledge of FIX protocol, FPGA acceleration, or network optimization techniques.
- Familiarity with Redis, Nginx, or other real-time data handling technologies.
- Experience in advanced performance tuning for microsecond-level execution.
Why Join Us?
Work with a team that expects and delivers excellence. A culture where innovation and speed are rewarded. Limitless opportunities for growth, if you can handle the pace. Build systems that move markets and redefine fintech.
10–14 years of experience in software engineering, with strong emphasis on backend and data architecture for large-scale systems.
Proven experience designing and deploying distributed, event-driven systems and streaming data pipelines.
Expert proficiency in Go/Python, including experience with microservices, APIs, and concurrency models.
Deep understanding of data flows across multi-sensor and multi-modal sources, including ingestion, transformation, and synchronization.
Experience building real-time APIs for interactive web applications and data-heavy workflows.
Familiarity with frontend ecosystems (React, TypeScript) and rendering frameworks leveraging WebGL/WebGPU.
Hands-on experience with CI/CD, Kubernetes, Docker, and Infrastructure as Code (Terraform, Helm).
JOB DETAILS:
* Job Title: DevOps Engineer (Azure)
* Industry: Technology
* Salary: Best in Industry
* Experience: 2-5 years
* Location: Bengaluru, Koramangala
Review Criteria
- Strong Azure DevOps Engineer Profiles.
- Must have minimum 2+ years of hands-on experience as an Azure DevOps Engineer with strong exposure to Azure DevOps Services (Repos, Pipelines, Boards, Artifacts).
- Must have strong experience in designing and maintaining YAML-based CI/CD pipelines, including end-to-end automation of build, test, and deployment workflows.
- Must have hands-on scripting and automation experience using Bash, Python, and/or PowerShell
- Must have working knowledge of databases such as Microsoft SQL Server, PostgreSQL, or Oracle Database
- Must have experience with monitoring, alerting, and incident management using tools like Grafana, Prometheus, Datadog, or CloudWatch, including troubleshooting and root cause analysis
Preferred
- Knowledge of containerisation and orchestration tools such as Docker and Kubernetes.
- Knowledge of Infrastructure as Code and configuration management tools such as Terraform and Ansible.
- Preferred (Education) – BE/BTech / ME/MTech in Computer Science or related discipline
Role & Responsibilities
- Build and maintain Azure DevOps YAML-based CI/CD pipelines for build, test, and deployments.
- Manage Azure DevOps Repos, Pipelines, Boards, and Artifacts.
- Implement Git branching strategies and automate release workflows.
- Develop scripts using Bash, Python, or PowerShell for DevOps automation.
- Monitor systems using Grafana, Prometheus, Datadog, or CloudWatch and handle incidents.
- Collaborate with dev and QA teams in an Agile/Scrum environment.
- Maintain documentation, runbooks, and participate in root cause analysis.
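As one concrete example of the scripting and branching-strategy work above, a small Python check of branch names against a convention, of the kind that might gate a pipeline (the naming pattern itself is an assumed policy, not from the posting):

```python
import re

# Enforce an assumed Git branch naming convention: a type prefix,
# a slash, then a lowercase kebab-case/dotted description.
BRANCH_PATTERN = re.compile(r"^(feature|bugfix|hotfix|release)/[a-z0-9._-]+$")

def branch_ok(name: str) -> bool:
    """Return True if `name` matches the assumed naming convention."""
    return bool(BRANCH_PATTERN.match(name))

print(branch_ok("feature/login-page"))  # True
print(branch_ok("My-Random-Branch"))    # False
```

Wired into a pipeline's first stage, a check like this turns a written branching strategy into something automatically enforced rather than merely documented.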
Ideal Candidate
- 2–5 years of experience as an Azure DevOps Engineer.
- Strong hands-on experience with Azure DevOps CI/CD (YAML) and Git.
- Experience with Microsoft Azure (OCI/AWS exposure is a plus).
- Working knowledge of SQL Server, PostgreSQL, or Oracle.
- Good scripting, troubleshooting, and communication skills.
- Bonus: Docker, Kubernetes, Terraform, Ansible experience.
- Comfortable with WFO (Koramangala, Bangalore).
About Us:
MyOperator is a Business AI Operator and a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform.
Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino’s, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI
- Design and maintain a scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT & Gemini)
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one
JOB DETAILS:
* Job Title: Lead I - Software Engineering - Kotlin, Java, Spring Boot, AWS
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 5 -7 years
* Location: Trivandrum, Thiruvananthapuram
Role Proficiency:
Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions; account for others' developmental activities.
Skill Examples:
- Explain and communicate the design / development to the customer
- Perform and evaluate test results against product specifications
- Break down complex problems into logical components
- Develop user interfaces business software components
- Use data models
- Estimate time and effort required for developing / debugging features / components
- Perform and evaluate test in the customer or target environment
- Make quick decisions on technical/project related challenges
- Manage a team, mentor, and handle people-related issues in the team
- Maintain high motivation levels and positive dynamics in the team.
- Interface with other teams’ designers and other parallel practices
- Set goals for self and team. Provide feedback to team members
- Create and articulate impactful technical presentations
- Follow high level of business etiquette in emails and other business communication
- Drive conference calls with customers addressing customer questions
- Proactively ask for and offer help
- Ability to work under pressure, determine dependencies and risks, facilitate planning, and handle multiple tasks.
- Build confidence with customers by meeting the deliverables on time with quality.
- Estimate time, effort, and resources required for developing / debugging features / components
- Make appropriate utilization of software / hardware.
- Strong analytical and problem-solving abilities
Knowledge Examples:
- Appropriate software programs / modules
- Functional and technical designing
- Programming languages – proficient in multiple skill clusters
- DBMS
- Operating Systems and software platforms
- Software Development Life Cycle
- Agile – Scrum or Kanban Methods
- Integrated development environment (IDE)
- Rapid application development (RAD)
- Modelling technology and languages
- Interface definition languages (IDL)
- Knowledge of customer domain and deep understanding of sub domain where problem is solved
Additional Comments:
We are seeking an experienced Senior Backend Engineer with strong expertise in Kotlin and Java to join our dynamic engineering team.
The ideal candidate will have a deep understanding of backend frameworks, cloud technologies, and scalable microservices architectures, with a passion for clean code, resilience, and system observability.
You will play a critical role in designing, developing, and maintaining core backend services that power our high-availability e-commerce and promotion platforms.
Key Responsibilities
Design, develop, and maintain backend services using Kotlin (JVM, Coroutines, Serialization) and Java.
Build robust microservices with Spring Boot and related Spring ecosystem components (Spring Cloud, Spring Security, Spring Kafka, Spring Data).
Implement efficient serialization/deserialization using Jackson and Kotlin Serialization. Develop, maintain, and execute automated tests using JUnit 5, Mockk, and ArchUnit to ensure code quality.
Work with Kafka Streams (Avro), Oracle SQL (JDBC, JPA), DynamoDB, and Redis for data storage and caching needs. Deploy and manage services in AWS environment leveraging DynamoDB, Lambdas, and IAM.
Implement CI/CD pipelines with GitLab CI to automate build, test, and deployment processes.
Containerize applications using Docker and integrate monitoring using Datadog for tracing, metrics, and dashboards.
Define and maintain infrastructure as code using Terraform for services including GitLab, Datadog, Kafka, and Optimizely.
Develop and maintain RESTful APIs with OpenAPI (Swagger) and JSON API standards.
Apply resilience patterns using Resilience4j to build fault-tolerant systems.
Adhere to architectural and design principles such as Domain-Driven Design (DDD), Object-Oriented Programming (OOP), and Contract Testing (Pact).
Collaborate with cross-functional teams in an Agile Scrum environment to deliver high-quality features.
Utilize feature flagging tools like Optimizely to enable controlled rollouts.
Mandatory Skills & Technologies
Languages: Kotlin (JVM, Coroutines, Serialization), Java
Frameworks: Spring Boot (Spring Cloud, Spring Security, Spring Kafka, Spring Data)
Serialization: Jackson, Kotlin Serialization
Testing: JUnit 5, Mockk, ArchUnit
Data: Kafka Streams (Avro), Oracle SQL (JDBC, JPA), DynamoDB (NoSQL), Redis (caching)
Cloud: AWS (DynamoDB, Lambda, IAM)
CI/CD: GitLab CI
Containers: Docker
Monitoring & Observability: Datadog (Tracing, Metrics, Dashboards, Monitors)
Infrastructure as Code: Terraform (GitLab, Datadog, Kafka, Optimizely)
API: OpenAPI (Swagger), REST API, JSON API
Resilience: Resilience4j
Architecture & Practices: Domain-Driven Design (DDD), Object-Oriented Programming (OOP), Contract Testing (Pact), Feature Flags (Optimizely)
Platforms: E-Commerce Platform (CommerceTools), Promotion Engine (Talon.One)
Methodologies: Scrum, Agile
Skills: Kotlin, Java, Spring Boot, AWS
Must-Haves
Kotlin (JVM, Coroutines, Serialization), Java, Spring Boot (Spring Cloud, Spring Security, Spring Kafka, Spring Data), AWS (DynamoDB, Lambda, IAM), Microservices Architecture
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Trivandrum
Virtual Weekend Interview on 7th Feb 2026 - Saturday

🚀 Hiring: Associate Tech Architect / Senior Tech Specialist
🌍 Remote | Contract Opportunity
We’re looking for a seasoned tech professional who can lead the design and implementation of cloud-native data and platform solutions. This is a remote, contract-based role for someone with strong ownership and architecture experience.
🔴 Mandatory & Most Important Skill Set
Hands-on expertise in the following technologies is essential:
✅ AWS – Cloud architecture & services
✅ Python – Backend & data engineering
✅ Terraform – Infrastructure as Code
✅ Airflow – Workflow orchestration
✅ SQL – Data processing & querying
✅ DBT – Data transformation & modeling
💼 Key Responsibilities
- Architect and build scalable AWS-based data platforms
- Design and manage ETL/ELT pipelines
- Orchestrate workflows using Airflow
- Implement cloud infrastructure using Terraform
- Lead best practices in data architecture, performance, and scalability
- Collaborate with engineering teams and provide technical leadership
🎯 Ideal Profile
✔ Strong experience in cloud and data platform architecture
✔ Ability to take end-to-end technical ownership
✔ Comfortable working in a remote, distributed team environment
📄 Role Type: Contract
🌍 Work Mode: 100% Remote
If you have deep expertise in these core technologies and are ready to take on a high-impact architecture role, we’d love to hear from you.
We are hiring a Senior DevOps Engineer (5–10 years experience) with strong hands-on expertise in AWS, CI/CD, Docker, Kubernetes, and Linux. The role involves designing, automating, and managing scalable cloud infrastructure and deployment pipelines. Experience with Terraform/Ansible, monitoring tools, and security best practices is required. Immediate joiners preferred.
Role: DevOps Engineer
Experience: 7+ Years
Location: Pune / Trivandrum
Work Mode: Hybrid
Key Responsibilities:
- Drive CI/CD pipelines for microservices and cloud architectures
- Design and operate cloud-native platforms (AWS/Azure)
- Manage Kubernetes/OpenShift clusters and containerized applications
- Develop automated pipelines and infrastructure scripts
- Collaborate with cross-functional teams on DevOps best practices
- Mentor development teams on continuous delivery and reliability
- Handle incident management, troubleshooting, and root cause analysis
Mandatory Skills:
- 7+ years in DevOps/SRE roles
- Strong experience with AWS or Azure
- Hands-on with Docker, Kubernetes, and/or OpenShift
- Proficiency in Jenkins, Git, Maven, JIRA
- Strong scripting skills (Shell, Python, Perl, Ruby, JavaScript)
- Solid networking knowledge and troubleshooting skills
- Excellent communication and collaboration abilities
Preferred Skills:
- Experience with Helm, monitoring tools (Splunk, Grafana, New Relic, Datadog)
- Knowledge of Microservices and SOA architectures
- Familiarity with database technologies

US based large Biotech company with WW operations.
Senior Cloud Engineer Job Description
Position Title: Senior Cloud Engineer – AWS [LONG-TERM CONTRACT POSITION]
Location: Remote [REQUIRES WORKING IN CST TIME ZONE]
Position Overview
The Senior Cloud Engineer will play a critical role in designing, deploying, and managing scalable, secure, and highly available cloud infrastructure across multiple platforms (AWS, Azure, Google Cloud). This role requires deep technical expertise, leadership in cloud
strategy, and hands-on experience with automation, DevOps practices, and cloud-native technologies. The ideal candidate will work collaboratively with cross-functional teams to deliver robust cloud solutions, drive best practices, and support business objectives
through innovative cloud engineering.
Key Responsibilities
Design, implement, and maintain cloud infrastructure and services, ensuring high availability, performance, and security across multi-cloud environments (AWS, Azure, GCP)
Develop and manage Infrastructure as Code (IaC) using tools such as Terraform, CloudFormation, and Ansible for automated provisioning and configuration
Lead the adoption and optimization of DevOps methodologies, including CI/CD pipelines, automated testing, and deployment processes
Collaborate with software engineers, architects, and stakeholders to architect cloud-native solutions that meet business and technical requirements
Monitor, troubleshoot, and optimize cloud systems for cost, performance, and reliability, using cloud monitoring and logging tools
Ensure cloud environments adhere to security best practices, compliance standards, and governance policies, including identity and access management, encryption, and vulnerability management
Mentor and guide junior engineers, sharing knowledge and fostering a culture of continuous improvement and innovation
Participate in on-call rotation and provide escalation support for critical cloud infrastructure issues
Document cloud architectures, processes, and procedures to ensure knowledge transfer and operational excellence
Stay current with emerging cloud technologies, trends, and best practices.
Required Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field, or equivalent work experience
- 6–10 years of experience in cloud engineering or related roles, with a proven track record in large-scale cloud environments
- Deep expertise in at least one major cloud platform (AWS, Azure, Google Cloud) and experience in multi-cloud environments
- Strong programming and scripting skills (Python, Bash, PowerShell, etc.) for automation and cloud service integration
- Proficiency with DevOps tools and practices, including CI/CD (Jenkins, GitLab CI), containerization (Docker, Kubernetes), and configuration management (Ansible, Chef)
- Solid understanding of networking concepts (VPC, VPN, DNS, firewalls, load balancers), system administration (Linux/Windows), and cloud storage solutions
- Experience with cloud security, governance, and compliance frameworks
- Excellent analytical, troubleshooting, and root cause analysis skills
- Strong communication and collaboration abilities, with experience working in agile, interdisciplinary teams
- Ability to work independently, manage multiple priorities, and lead complex projects to completion
Preferred Qualifications
- Relevant cloud certifications (e.g., AWS Certified Solutions Architect, AWS DevOps Engineer, Microsoft AZ-300/400/500, Google Professional Cloud Architect)
- Experience with cloud cost optimization and FinOps practices
- Familiarity with monitoring/logging tools (CloudWatch, Kibana, Logstash, Datadog, etc.)
- Exposure to cloud database technologies (SQL, NoSQL, managed database services)
- Knowledge of cloud migration strategies and hybrid cloud architectures
REVIEW CRITERIA:
MANDATORY:
- Strong Hands-On AWS Cloud Engineering / DevOps Profile
- Mandatory (Experience 1): Must have 12+ years of experience in AWS Cloud Engineering / Cloud Operations / Application Support
- Mandatory (Experience 2): Must have strong hands-on experience supporting AWS production environments (EC2, VPC, IAM, S3, ALB, CloudWatch)
- Mandatory (Infrastructure as Code): Must have hands-on Infrastructure as Code experience using Terraform in production environments
- Mandatory (AWS Networking): Strong understanding of AWS networking and connectivity (VPC design, routing, NAT, load balancers, hybrid connectivity basics)
- Mandatory (Cost Optimization): Exposure to cost optimization and usage tracking in AWS environments
- Mandatory (Core Skills): Experience handling monitoring, alerts, incident management, and root cause analysis
- Mandatory (Soft Skills): Strong communication skills and stakeholder coordination skills
ROLE & RESPONSIBILITIES:
We are looking for a hands-on AWS Cloud Engineer to support day-to-day cloud operations, automation, and reliability of AWS environments. This role works closely with the Cloud Operations Lead, DevOps, Security, and Application teams to ensure stable, secure, and cost-effective cloud platforms.
KEY RESPONSIBILITIES:
- Operate and support AWS production environments across multiple accounts
- Manage infrastructure using Terraform and support CI/CD pipelines
- Support Amazon EKS clusters, upgrades, scaling, and troubleshooting
- Build and manage Docker images and push to Amazon ECR
- Monitor systems using CloudWatch and third-party tools; respond to incidents
- Support AWS networking (VPCs, NAT, Transit Gateway, VPN/DX)
- Assist with cost optimization, tagging, and governance standards
- Automate operational tasks using Python, Lambda, and Systems Manager
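The last responsibility above (automating operational tasks with Python, Lambda, and Systems Manager) can be illustrated with a small sketch. Everything here is a hypothetical example, not taken from the job post: the required tag keys are an assumed tagging policy, and the handler simply reports EC2 instances missing those tags.

```python
# Hypothetical sketch of a Lambda-style operational automation task:
# flag EC2 instances missing mandatory cost-allocation tags.
# REQUIRED_TAGS is an assumed policy, not from the job post.

REQUIRED_TAGS = {"Owner", "CostCenter", "Environment"}

def missing_tags(instance: dict) -> set:
    """Return the required tag keys absent from an instance.

    `instance` follows the shape of boto3's describe_instances output:
    {"InstanceId": "...", "Tags": [{"Key": "...", "Value": "..."}]}.
    """
    present = {t["Key"] for t in instance.get("Tags", [])}
    return REQUIRED_TAGS - present

def handler(event, context):
    """Lambda entry point (needs boto3 and EC2 read permissions to run)."""
    import boto3  # available by default in the AWS Lambda runtime
    ec2 = boto3.client("ec2")
    report = {}
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                gaps = missing_tags(inst)
                if gaps:
                    report[inst["InstanceId"]] = sorted(gaps)
    return report  # could be published to SNS or written to S3
```

The tag-checking logic is kept as a pure function so it can be unit-tested without AWS credentials; only `handler` touches boto3.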
IDEAL CANDIDATE:
- Strong hands-on AWS experience (EC2, VPC, IAM, S3, ALB, CloudWatch)
- Experience with Terraform and Git-based workflows
- Hands-on experience with Kubernetes / EKS
- Experience with CI/CD tools (GitHub Actions, Jenkins, etc.)
- Scripting experience in Python or Bash
- Understanding of monitoring, incident management, and cloud security basics
NICE TO HAVE:
- AWS Associate-level certifications
- Experience with Karpenter, Prometheus, New Relic
- Exposure to FinOps and cost optimization practices
Profile: DevOps Lead
Location: Gurugram
Experience: 8+ Years
Notice Period: Immediate to 1 week
Company: Watsoo
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 5+ years of proven hands-on DevOps experience.
- Strong experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
- Expertise in containerization & orchestration (Docker, Kubernetes, Helm).
- Hands-on experience with cloud platforms (AWS, Azure, or GCP).
- Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
- Experience with monitoring and logging solutions (Prometheus, Grafana, ELK, CloudWatch, etc.).
- Proficiency in scripting languages (Python, Bash, or Shell).
- Knowledge of networking, security, and system administration.
- Strong problem-solving skills and ability to work in fast-paced environments.
- Troubleshoot production issues, perform root cause analysis, and implement preventive measures.
- Advocate DevOps best practices, automation, and continuous improvement.
Drive the design, automation, and reliability of Albert Invent’s core platform to support scalable, high-performance AI applications.
You will partner closely with Product Engineering and SRE teams to ensure security, resiliency, and developer productivity while owning end-to-end service operability.
Key Responsibilities
- Own the design, reliability, and operability of Albert’s mission-critical platform.
- Work closely with Product Engineering and SRE to build scalable, secure, and high-performance services.
- Plan and deliver core platform capabilities that improve developer velocity, system resilience, and scalability.
- Maintain a deep understanding of microservices topology, dependencies, and behavior.
- Act as the technical authority for performance, reliability, and availability across services.
- Drive automation and orchestration across infrastructure and operations.
- Serve as the final escalation point for complex or undocumented production issues.
- Lead root-cause analysis, mitigation strategies, and long-term system improvements.
- Mentor engineers in building robust, automated, and production-grade systems.
- Champion best practices in SRE, reliability, and platform engineering.
Must-Have Requirements
- Bachelor’s degree in Computer Science, Engineering, or equivalent practical experience.
- 4+ years of strong backend coding in Python or Node.js.
- 4+ years of overall software engineering experience, including 2+ years in an SRE / automation-focused role.
- Strong hands-on experience with Infrastructure as Code (Terraform preferred).
- Deep experience with AWS cloud infrastructure and distributed systems (microservices, APIs, service-to-service communication).
- Experience with observability systems – logs, metrics, and tracing.
- Experience using CI/CD pipelines (e.g., CircleCI).
- Performance testing experience using K6 or similar tools.
- Strong focus on automation, standards, and operational excellence.
- Experience building low-latency APIs (< 200ms response time).
- Ability to work in fast-paced, high-ownership environments.
- Proven ability to lead technically, mentor engineers, and influence engineering quality.
Good-to-Have Skills
- Kubernetes and container orchestration experience.
- Observability tools such as Prometheus, Grafana, OpenTelemetry, Datadog.
- Experience building Internal Developer Platforms (IDPs) or reusable engineering frameworks.
- Exposure to ML infrastructure or data engineering pipelines.
- Experience working in compliance-driven environments (SOC2, HIPAA, etc.).
Hey there budding tech wizard! Are you ready to take on a new challenge?
As a Senior Software Developer 1 (Full Stack) at Techlyticaly, you'll be responsible for solving problems and flexing your tech muscles to build amazing stuff, mentoring and guiding others. You'll work under the guidance of mentors and be responsible for developing high-quality, maintainable code modules that are extensible and meet the technical guidelines provided.
Responsibilities
We want you to show off your technical skills, but we also want you to be creative and think outside the box. Here are some of the ways you'll be flexing your tech muscles:
- Use your superpowers to solve complex technical problems, combining your excellent abstract reasoning ability with problem-solving skills.
- Be proficient in at least one product or technology of strategic importance to the organisation, and become a true tech ninja.
- Stay up-to-date with emerging trends in the field, so that you can keep bringing fresh ideas to the table.
- Implement robust and extensible code modules as per guidelines. We love all code that's functional (Don’t we?)
- Develop good quality, maintainable code modules without any defects, exhibiting attention to detail. Nothing should look sus!
- Manage assigned tasks well and schedule them appropriately for self and team, while providing visibility to the mentor and understanding the mentor's expectations of work. But don't be afraid to add your own twist to the work you're doing.
- Consistently apply and improve team software development processes such as estimations, tracking, testing, code and design reviews, etc., but do it with a funky twist that reflects your personality.
- Clarify requirements and provide end-to-end estimates. We all love it when requirements are clear (Don’t we?)
- Participate in release planning and design complex modules & features.
- Work with product and business teams directly for critical issue ownership. Isn’t it better when one of us understands what they say?
- Feel empowered by managing deployments and assisting in infra management.
- Act as a role model for the team and guide them to brilliance. We all feel secure when we have someone to look up to.
Qualifications
We want to make sure you're a funky, tech-loving person with a passion for learning and growing. Here are some of the things we're looking for:
- You have a Bachelor's or Master’s degree in Computer Science or a related field, but you also have a creative side that you're not afraid to show.
- You have excellent abstract reasoning ability and a strong understanding of core computer science fundamentals.
- You're proficient with web programming languages such as HTML, CSS, and JavaScript, with at least 5 years of experience, but you're also open to learning new languages and technologies that might not be as mainstream.
- You have 5+ years of experience with the backend web frameworks Django and DRF.
- You have 5+ years of experience with the frontend web framework React.
- Your knowledge of cloud service providers like AWS, GCP, Azure, etc. will be an added bonus.
- You have experience with testing, code, and design reviews.
- You have strong written and verbal communication skills, but you're also not afraid to show your personality and let your funky side shine through.
- You can work independently and in a team environment, but you're also excited to collaborate with others and share your ideas.
- You've demonstrated your ability to lead a small team of developers.
- And most importantly, you're excited to learn about new things and try out new ideas.
Compensation:
We know you're passionate and talented, and we want to reward you for that. That's why we're offering a compensation package of 15 - 17 LPA!
This is a mid-level position: you'll get to flex your coding muscles, work on exciting projects, and grow your skills in a fast-paced, dynamic environment. So, if you're passionate about all things tech and ready to take your skills to the next level, we want YOU to apply! Let's make some magic happen together!
We are located in Delhi. This post may require relocation.
JOB DETAILS:
- Job Title: Senior DevOps Engineer 2
- Industry: Ride-hailing
- Experience: 5-7 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations; Kubernetes & Container Orchestration; Monitoring, Reliability & Observability; Proficiency with Terraform, Ansible, etc.; Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidate must be from a product-based or scalable app-based startup, with experience handling large-scale production traffic.
2. Minimum 5 yrs of experience working as a DevOps/Infrastructure Consultant.
3. Own end-to-end infrastructure right from non-prod to prod environments, including self-managed DBs.
4. Candidate must have experience in database migration from scratch.
5. Must have a firm hold on the container orchestration tool Kubernetes.
6. Must have expertise in configuration management tools like Ansible, Terraform, and Chef/Puppet.
7. Understanding of programming languages like Go, Python, and Java.
8. Working experience with databases like Mongo, Redis, Cassandra, Elasticsearch, and Kafka.
9. Working experience on a cloud platform - AWS.
10. Candidate should have a minimum of 1.5 years' tenure per organization and a clear reason for relocation.
Description
Job Summary:
As a DevOps Engineer at the company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.
Job Responsibilities:
● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
● Codify our infrastructure
● Do what it takes to keep the uptime above 99.99%
● Understand the bigger picture and sail through the ambiguities
● Scale technology considering cost and observability and manage end-to-end processes
● Understand DevOps philosophy and evangelize the principles across the organization
● Strong communication and collaboration skills to break down the silos
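The "keep the uptime above 99.99%" responsibility above translates directly into an error budget, which a quick back-of-the-envelope helper makes concrete (this is an illustrative sketch, not part of the listing):

```python
# How much downtime does a given SLA actually allow over a period?

def allowed_downtime_minutes(sla_percent: float, period_days: float) -> float:
    """Downtime budget (in minutes) implied by an SLA over a period."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

# 99.99% over a 365-day year allows roughly 52.6 minutes of downtime;
# 99.9% over a 30-day month allows about 43.2 minutes.
```

Framing availability targets this way is standard SRE practice: the budget bounds how much risk (deploys, migrations, experiments) a team can take on before freezing changes.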
Job Requirements:
● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience
● Minimum 5 yrs of experience working as a DevOps/Infrastructure Consultant
● Must have a firm hold on the container orchestration tool Kubernetes
● Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet
● Strong problem-solving skills and the ability to write scripts in any scripting language
● Understanding of programming languages like Go, Python, and Java
● Comfortable working with databases like Mongo, Redis, Cassandra, Elasticsearch, and Kafka
What’s there for you?
Company’s team handles everything – infra, tooling, and a bunch of self-managed databases – at a scale of:
● 150+ microservices with event-driven architecture across different tech stacks (Golang/Java/Node)
● More than 100,000 requests per second on our edge gateways
● ~20,000 events per second on self-managed Kafka
● 100s of TB of data on self-managed databases
● 100s of real-time continuous deployments to production
● Self-managed supporting infrastructure
● 100% OSS
JOB DETAILS:
- Job Title: Lead DevOps Engineer
- Industry: Ride-hailing
- Experience: 6-9 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations; Kubernetes & Container Orchestration; Monitoring, Reliability & Observability; Proficiency with Terraform, Ansible, etc.; Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidate must be from a product-based or scalable app-based startup, with experience handling large-scale production traffic.
2. Minimum 6 yrs of experience working as a DevOps/Infrastructure Consultant.
3. Candidate must have 2 years of experience as a lead (managing a team of at least 3 to 4 members).
4. Own end-to-end infrastructure right from non-prod to prod environments, including self-managed DBs.
5. Candidate must have first-hand experience in database migration from scratch.
6. Must have a firm hold on the container orchestration tool Kubernetes.
7. Should have expertise in configuration management tools like Ansible, Terraform, and Chef/Puppet.
8. Understanding of programming languages like Go, Python, and Java.
9. Working experience with databases like Mongo, Redis, Cassandra, Elasticsearch, and Kafka.
10. Working experience on a cloud platform - AWS.
11. Candidate should have a minimum of 1.5 years' tenure per organization and a clear reason for relocation.
Description
Job Summary:
As a DevOps Engineer at the company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.
Job Responsibilities:
● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
● Codify our infrastructure
● Do what it takes to keep the uptime above 99.99%
● Understand the bigger picture and sail through the ambiguities
● Scale technology considering cost and observability and manage end-to-end processes
● Understand DevOps philosophy and evangelize the principles across the organization
● Strong communication and collaboration skills to break down the silos
Job Requirements:
● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience
● Minimum 6 yrs of experience working as a DevOps/Infrastructure Consultant
● Must have a firm hold on the container orchestration tool Kubernetes
● Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet
● Strong problem-solving skills and the ability to write scripts in any scripting language
● Understanding of programming languages like Go, Python, and Java
● Comfortable working with databases like Mongo, Redis, Cassandra, Elasticsearch, and Kafka
What’s there for you?
Company’s team handles everything – infra, tooling, and a bunch of self-managed databases – at a scale of:
● 150+ microservices with event-driven architecture across different tech stacks (Golang/Java/Node)
● More than 100,000 requests per second on our edge gateways
● ~20,000 events per second on self-managed Kafka
● 100s of TB of data on self-managed databases
● 100s of real-time continuous deployments to production
● Self-managed supporting infrastructure
● 100% OSS
JOB DETAILS:
- Job Title: Senior DevOps Engineer 1
- Industry: Ride-hailing
- Experience: 4-6 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations; Kubernetes & Container Orchestration; Monitoring, Reliability & Observability; Proficiency with Terraform, Ansible, etc.; Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidate must be from a product-based or scalable app-based startup, with experience handling large-scale production traffic.
2. Candidate must have strong Linux expertise with hands-on production troubleshooting and working knowledge of databases and middleware (Mongo, Redis, Cassandra, Elasticsearch, Kafka).
3. Candidate must have solid experience with Kubernetes.
4. Candidate should have strong knowledge of configuration management tools like Ansible, Terraform, and Chef/Puppet; Prometheus and Grafana are an added advantage.
5. Candidate must be an individual contributor with strong ownership.
6. Candidate must have hands-on experience with database migrations and observability tools such as Prometheus and Grafana.
7. Candidate must have working knowledge of Go, Python, and Java.
8. Candidate should have working experience on a cloud platform - AWS.
9. Candidate should have a minimum of 1.5 years' tenure per organization and a clear reason for relocation.
Description
Job Summary:
As a DevOps Engineer at the company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.
Job Responsibilities:
- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs.
- Understanding the needs of stakeholders and conveying this to developers.
- Working on ways to automate and improve development and release processes.
- Identifying technical problems and developing software updates and ‘fixes’.
- Working with software developers to ensure that development follows established processes and works as intended.
- Do what it takes to keep the uptime above 99.99%.
- Understand DevOps philosophy and evangelize the principles across the organization.
- Strong communication and collaboration skills to break down the silos
Job Requirements:
- B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience.
- Minimum 4 yrs of experience working as a DevOps/Infrastructure Consultant.
- Strong background in operating systems like Linux.
- Understands the container orchestration tool Kubernetes.
- Proficient knowledge of configuration management tools like Ansible, Terraform, and Chef/Puppet; Prometheus and Grafana are an added advantage.
- Problem-solving attitude and the ability to write scripts in any scripting language.
- Understanding of programming languages like Go, Python, and Java.
- Basic understanding of databases and middleware like Mongo, Redis, Cassandra, Elasticsearch, and Kafka.
- Should be able to take ownership of tasks and be responsible.
- Good communication skills.
About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Job Summary
We are looking for a skilled and motivated DevOps Engineer with 3+ years of hands-on experience in AWS cloud infrastructure, CI/CD automation, and Kubernetes-based deployments. The ideal candidate will have strong expertise in Infrastructure as Code, containerization, monitoring, and automation, and will play a key role in ensuring high availability, scalability, and security of production systems.
Key Responsibilities
- Design, deploy, manage, and maintain AWS cloud infrastructure, including EC2, RDS, OpenSearch, VPC, S3, ALB, API Gateway, Lambda, SNS, and SQS.
- Build, manage, and operate Kubernetes (EKS) clusters and containerized workloads.
- Containerize applications using Docker and manage deployments with Helm charts
- Develop and maintain CI/CD pipelines using Jenkins for automated build and deployment processes
- Provision and manage infrastructure using Terraform (Infrastructure as Code)
- Implement and manage monitoring, logging, and alerting solutions using Prometheus and Grafana
- Write and maintain Python scripts for automation, monitoring, and operational tasks
- Ensure high availability, scalability, performance, and cost optimization of cloud resources
- Implement and follow security best practices across AWS and Kubernetes environments
- Troubleshoot production issues, perform root cause analysis, and support incident resolution
- Collaborate closely with development and QA teams to streamline deployment and release processes
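The responsibilities above combine Prometheus/Grafana monitoring with Python scripts for automation and operational tasks. As a purely illustrative sketch (the query, labels, and threshold are assumptions, not from the listing), a small helper can evaluate a Prometheus instant-query response and report which series breach a threshold:

```python
# Hypothetical monitoring glue: given the JSON body of a Prometheus
# /api/v1/query response, return (labels, value) pairs whose sample
# value exceeds a threshold. Fetching the response (e.g. with
# urllib/requests against Prometheus) is left out so the logic stays
# testable offline.

def breaching_series(prom_result: dict, threshold: float) -> list:
    """Filter a Prometheus instant-query result by threshold."""
    out = []
    for sample in prom_result.get("data", {}).get("result", []):
        _, value = sample["value"]  # value is ["<unix ts>", "<string>"]
        if float(value) > threshold:
            out.append((sample["metric"], float(value)))
    return out
```

In practice most alerting would live in Prometheus alerting rules and Alertmanager; a script like this is the kind of thing used for one-off operational checks or reporting.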
Required Skills & Qualifications
- 3+ years of hands-on experience as a DevOps Engineer or Cloud Engineer.
- Strong experience with AWS services, including:
- EC2, RDS, OpenSearch, VPC, S3
- Application Load Balancer (ALB), API Gateway, Lambda
- SNS and SQS.
- Hands-on experience with AWS EKS (Kubernetes)
- Strong knowledge of Docker and Helm charts
- Experience with Terraform for infrastructure provisioning and management
- Solid experience building and managing CI/CD pipelines using Jenkins
- Practical experience with Prometheus and Grafana for monitoring and alerting
- Proficiency in Python scripting for automation and operational tasks
- Good understanding of Linux systems, networking concepts, and cloud security
- Strong problem-solving and troubleshooting skills
Good to Have (Preferred Skills)
- Exposure to GitOps practices
- Experience managing multi-environment setups (Dev, QA, UAT, Production)
- Knowledge of cloud cost optimization techniques
- Understanding of Kubernetes security best practices
- Experience with log aggregation tools (e.g., ELK/OpenSearch stack)
Language Preference
- Fluency in English is mandatory.
- Fluency in Hindi is preferred.
Role: Senior Platform Engineer (GCP Cloud)
Experience Level: 3 to 6 Years
Work location: Mumbai
Mode: Hybrid
Role & Responsibilities:
- Build automation software for cloud platforms and applications
- Drive Infrastructure as Code (IaC) adoption
- Design self-service, self-healing monitoring and alerting tools
- Automate CI/CD pipelines (Git, Jenkins, SonarQube, Docker)
- Build Kubernetes container platforms
- Introduce new cloud technologies for business innovation
Requirements:
- Hands-on experience with GCP Cloud
- Knowledge of cloud services (compute, storage, network, messaging)
- IaC tools experience (Terraform/CloudFormation)
- SQL & NoSQL databases (Postgres, Cassandra)
- Automation tools (Puppet/Chef/Ansible)
- Strong Linux administration skills
- Programming: Bash/Python/Java/Scala
- CI/CD pipeline expertise (Jenkins, Git, Maven)
- Multi-region deployment experience
- Agile/Scrum/DevOps methodology
We’re looking for an experienced Site Reliability Engineer to fill the mission-critical role of ensuring that our complex, web-scale systems are healthy, monitored, automated, and designed to scale. You will use your background as an operations generalist to work closely with our development teams from the early stages of design all the way through identifying and resolving production issues. The ideal candidate will be passionate about an operations role that involves deep knowledge of both the application and the product, and will also believe that automation is a key component to operating large-scale systems.
6-Month Accomplishments
- Familiarize yourself with the Poshmark tech stack and functional requirements.
- Get comfortable with the automation tools/frameworks used within the CloudOps organization and the deployment processes associated with them.
- Gain in-depth knowledge of the relevant product functionality and the infrastructure required for it.
- Start contributing by working on small- to medium-scale projects.
- Understand and follow the on-call rotation as a secondary to get familiar with the on-call process.
12+ Month Accomplishments
- Execute projects related to comms functionality independently, with little guidance from the lead.
- Create meaningful alerts and dashboards for the various sub-systems involved in the targeted infrastructure.
- Identify gaps in infrastructure and suggest improvements or work on them.
- Get involved in on-call rotation.
Responsibilities
- Serve as a primary point responsible for the overall health, performance, and capacity of one or more of our Internet-facing services.
- Gain deep knowledge of our complex applications.
- Assist in the roll-out and deployment of new product features and installations to facilitate our rapid iteration and constant growth.
- Develop tools to improve our ability to rapidly deploy and effectively monitor custom applications in a large-scale UNIX environment.
- Work closely with development teams to ensure that platforms are designed with "operability" in mind.
- Function well in a fast-paced, rapidly-changing environment.
- Participate in a 24x7 on-call rotation.
Desired Skills
- 5+ years of experience in Systems Engineering/Site Reliability Operations role is required, ideally in a startup or fast-growing company.
- 5+ years in a UNIX-based large-scale web operations role.
- 5+ years of experience in doing 24/7 support for large scale production environments.
- Battle-proven, real-life experience in running a large-scale production operation.
- Experience working on cloud-based infrastructure, e.g., AWS, GCP, Azure.
- Hands-on experience with continuous integration tools such as Jenkins, configuration management with Ansible, and systems monitoring and alerting with tools such as Nagios, New Relic, and Graphite.
- Scripting/coding experience.
- Ability to use a wide variety of open source technologies and tools.
Technologies we use:
- Ruby, JavaScript, Node.js, Tomcat, Nginx, HAProxy
- MongoDB, RabbitMQ, Redis, ElasticSearch.
- Amazon Web Services (EC2, RDS, CloudFront, S3, etc.)
- Terraform, Packer, Jenkins, Datadog, Kubernetes, Docker, Ansible and other DevOps tools.
About Poshmark
Poshmark is a leading fashion resale marketplace powered by a vibrant, highly engaged community of buyers and sellers and real-time social experiences. Designed to make online selling fun, more social and easier than ever, Poshmark empowers its sellers to turn their closet into a thriving business and share their style with the world. Since its founding in 2011, Poshmark has grown its community to over 130 million users and generated over $10 billion in GMV, helping sellers realize billions in earnings, delighting buyers with deals and one-of-a-kind items, and building a more sustainable future for fashion. For more information, please visit www.poshmark.com, and for company news, visit newsroom.poshmark.com.
We’re looking for an experienced Site Reliability Engineer to fill the mission-critical role of ensuring that our complex, web-scale systems are healthy, monitored, automated, and designed to scale. You will use your background as an operations generalist to work closely with our development teams from the early stages of design all the way through identifying and resolving production issues. The ideal candidate will be passionate about an operations role that involves deep knowledge of both the application and the product, and will also believe that automation is a key component to operating large-scale systems.
6-Month Accomplishments
- Familiarize yourself with the Poshmark tech stack and functional requirements.
- Get comfortable with the automation tools/frameworks used within the CloudOps organization and the deployment processes associated with them.
- Gain in-depth knowledge of the relevant product functionality and the infrastructure required for it.
- Start contributing by working on small- to medium-scale projects.
- Understand and follow the on-call rotation as a secondary to get familiar with the on-call process.
12+ Month Accomplishments
- Execute projects related to comms functionality independently, with little guidance from the lead.
- Create meaningful alerts and dashboards for the various sub-systems involved in the targeted infrastructure.
- Identify gaps in infrastructure and suggest improvements or work on them.
- Get involved in on-call rotation.
Responsibilities
- Serve as a primary point responsible for the overall health, performance, and capacity of one or more of our Internet-facing services.
- Gain deep knowledge of our complex applications.
- Assist in the roll-out and deployment of new product features and installations to facilitate our rapid iteration and constant growth.
- Develop tools to improve our ability to rapidly deploy and effectively monitor custom applications in a large-scale UNIX environment.
- Work closely with development teams to ensure that platforms are designed with "operability" in mind.
- Function well in a fast-paced, rapidly-changing environment.
- Participate in a 24x7 on-call rotation
Desired Skills
- 4+ years of experience in Systems Engineering/Site Reliability Operations role is required, ideally in a startup or fast-growing company.
- 4+ years in a UNIX-based large-scale web operations role.
- 4+ years of experience in doing 24/7 support for large scale production environments.
- Battle-proven, real-life experience in running a large-scale production operation.
- Experience working on cloud-based infrastructure, e.g., AWS, GCP, Azure.
- Hands-on experience with continuous integration tools such as Jenkins, configuration management with Ansible, and systems monitoring and alerting with tools such as Nagios, New Relic, and Graphite.
- Scripting/coding experience.
- Ability to use a wide variety of open source technologies and tools.
Technologies we use:
- Ruby, JavaScript, Node.js, Tomcat, Nginx, HAProxy
- MongoDB, RabbitMQ, Redis, ElasticSearch.
- Amazon Web Services (EC2, RDS, CloudFront, S3, etc.)
- Terraform, Packer, Jenkins, Datadog, Kubernetes, Docker, Ansible and other DevOps tools.
SENIOR INFORMATION SECURITY ENGINEER (DEVSECOPS)
Key Skills: Software Development Life Cycle (SDLC), CI/CD
About Company: Consumer Internet / E-Commerce
Company Size: Mid-Sized
Experience Required: 6 - 10 years
Working Days: 5 days/week
Office Location: Bengaluru [Karnataka]
Review Criteria:
Mandatory:
- Strong DevSecOps profile
- Must have 5+ years of hands-on experience in Information Security, with a primary focus on cloud security across AWS, Azure, and GCP environments.
- Must have strong practical experience working with Cloud Security Posture Management (CSPM) tools such as Prisma Cloud, Wiz, or Orca along with SIEM / IDS / IPS platforms
- Must have proven experience in securing Kubernetes and containerized environments, including image security, runtime protection, RBAC, and network policies.
- Must have hands-on experience integrating security within CI/CD pipelines using tools such as Snyk, GitHub Advanced Security, or equivalent security scanning solutions.
- Must have a solid understanding of core security domains, including network security, encryption, identity and access management, key management, and security governance, along with cloud-native security services like GuardDuty, Azure Security Center, etc.
- Must have practical experience with Application Security Testing tools including SAST, DAST, and SCA in real production environments
- Must have hands-on experience with security monitoring, incident response, alert investigation, root-cause analysis (RCA), and managing VAPT / penetration testing activities
- Must have experience securing infrastructure-as-code and cloud deployments using Terraform, CloudFormation, ARM, Docker, and Kubernetes
- Must be from a B2B SaaS product company
- Must have working knowledge of globally recognized security frameworks and standards such as ISO 27001, NIST, and CIS with exposure to SOC2, GDPR, or HIPAA compliance environments
Preferred:
- Experience with DevSecOps automation, security-as-code, and policy-as-code implementations
- Exposure to threat intelligence platforms, cloud security monitoring, and proactive threat detection methodologies, including EDR / DLP or vulnerability management tools
- Demonstrates a strong ownership mindset, proactive security-first thinking, and the ability to communicate risks in clear business language
Roles & Responsibilities:
We are looking for a Senior Information Security Engineer who can help protect our cloud infrastructure, applications, and data while enabling teams to move fast and build securely.
This role sits deep within our engineering ecosystem. You’ll embed security into how we design, build, deploy, and operate systems—working closely with Cloud, Platform, and Application Engineering teams. You’ll balance proactive security design with hands-on incident response, and help shape a strong, security-first culture across the organization.
If you enjoy solving real-world security problems, working close to systems and code, and influencing how teams build securely at scale, this role is for you.
What You’ll Do-
Cloud & Infrastructure Security:
- Design, implement, and operate cloud-native security controls across AWS, Azure, GCP, and Oracle.
- Strengthen IAM, network security, and cloud posture using services like GuardDuty, Azure Security Center and others.
- Partner with platform teams to secure VPCs, security groups, and cloud access patterns.
Application & DevSecOps Security:
- Embed security into the SDLC through threat modeling, secure code reviews, and security-by-design practices.
- Integrate SAST, DAST, and SCA tools into CI/CD pipelines.
- Secure infrastructure-as-code and containerized workloads using Terraform, CloudFormation, ARM, Docker, and Kubernetes.
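Pipeline-embedded scanning of the kind described above can be pictured as a pre-merge gate. Below is a minimal, hypothetical sketch in Python; real pipelines rely on dedicated scanners such as Snyk or GitHub Advanced Security, and the two regex rules and file walk here are illustrative only:

```python
import re
from pathlib import Path

# Illustrative patterns only -- real secret scanners ship far larger rule sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(name: str, text: str) -> list[tuple[str, str]]:
    """Return (rule, file) findings for one file's contents."""
    return [(rule, name) for rule, pat in SECRET_PATTERNS.items() if pat.search(text)]

def scan_tree(root: Path) -> list[tuple[str, str]]:
    """Walk a source tree and collect findings; a CI job would exit non-zero on any."""
    findings = []
    for path in root.rglob("*"):
        if path.is_file():
            try:
                findings += scan_text(str(path), path.read_text(errors="ignore"))
            except OSError:
                continue  # unreadable file: skip rather than fail the build
    return findings
```

Wired into a pipeline stage, a non-empty findings list would block the merge, which is the same shape commercial secrets-scanning integrations take.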
Security Monitoring & Incident Response:
- Monitor security alerts and investigate potential threats across cloud and application layers.
- Lead or support incident response efforts, root-cause analysis, and corrective actions.
- Plan and execute VAPT and penetration testing engagements (internal and external), track remediation, and validate fixes.
- Conduct red teaming activities and tabletop exercises to test detection, response readiness, and cross-team coordination.
- Continuously improve detection, response, and testing maturity.
Security Tools & Platforms:
- Manage and optimize security tooling including firewalls, SIEM, EDR, DLP, IDS/IPS, CSPM, and vulnerability management platforms.
- Ensure tools are well-integrated, actionable, and aligned with operational needs.
Compliance, Governance & Awareness:
- Support compliance with industry standards and frameworks such as SOC2, HIPAA, ISO 27001, NIST, CIS, and GDPR.
- Promote secure engineering practices through training, documentation, and ongoing awareness programs.
- Act as a trusted security advisor to engineering and product teams.
Continuous Improvement:
- Stay ahead of emerging threats, cloud vulnerabilities, and evolving security best practices.
- Continuously raise the bar on the company's security posture through automation and process improvement.
Endpoint Security (Secondary Scope):
- Provide guidance on endpoint security tooling such as SentinelOne and Microsoft Defender when required.
Ideal Candidate:
- Strong hands-on experience in cloud security across AWS and Azure.
- Practical exposure to CSPM tools (e.g., Prisma Cloud, Wiz, Orca) and SIEM / IDS / IPS platforms.
- Experience securing containerized and Kubernetes-based environments.
- Familiarity with CI/CD security integrations (e.g., Snyk, GitHub Advanced Security, or similar).
- Solid understanding of network security, encryption, identity, and access management.
- Experience with application security testing tools (SAST, DAST, SCA).
- Working knowledge of security frameworks and standards such as ISO 27001, NIST, and CIS.
- Strong analytical, troubleshooting, and problem-solving skills.
Nice to Have:
- Experience with DevSecOps automation and security-as-code practices.
- Exposure to threat intelligence and cloud security monitoring solutions.
- Familiarity with incident response frameworks and forensic analysis.
- Security certifications such as CISSP, CISM, CCSP, or CompTIA Security+.
Perks, Benefits and Work Culture:
A wholesome opportunity in a fast-paced environment where you can juggle multiple concepts while maintaining quality, share your ideas, and learn a great deal on the job. Work with a team of highly talented young professionals and enjoy the comprehensive benefits the company offers.
Procedure is hiring for Drover.
This is not a DevOps/SRE/cloud-migration role — this is a hands-on backend engineering and architecture role where you build the platform powering our hardware at scale.
About Drover
Ranching is getting harder. Increased labor costs and a volatile climate are placing mounting pressure on ranchers to provide for a growing population. Drover is empowering ranchers to efficiently and sustainably feed the world by making it cheaper and easier to manage livestock, unlock productivity gains, and reduce carbon footprint with rotational grazing. This is not only a $46B opportunity; you'll also be working on a climate solution with the potential for real, meaningful impact.
We use patent-pending low-voltage electrical muscle stimulation (EMS) to steer and contain cows, replacing the need for physical fences or electric shock. We are building something that has never been done before, and we have hundreds of ranches on our waitlist.
Drover is founded by Callum Taylor (ex-Harvard), who comes from 5 generations of ranching, and Samuel Aubin, both of whom grew up in Australian ranching towns and have an intricate understanding of the problem space. We are well-funded and supported by Workshop Ventures, a VC firm with experience in building unicorn IoT companies.
We're looking to assemble a team of exceptional talent with a high eagerness to dive headfirst into understanding the challenges and opportunities within ranching.
About The Role
As our founding cloud engineer, you will be responsible for building and scaling the infrastructure that powers our IoT platform, connecting thousands of devices across ranches nationwide.
Because we are an early-stage startup, you will have high levels of ownership in what you build. You will play a pivotal part in architecting our cloud infrastructure, building robust APIs, and ensuring our systems can scale reliably. We are looking for someone who is excited about solving complex technical challenges at the intersection of IoT, agriculture, and cloud computing.
What You'll Do
- Develop Drover's IoT cloud architecture from the ground up (it's a greenfield project)
- Design and implement services to support wearable devices, mobile app, and backend API
- Implement data processing and storage pipelines
- Create and maintain Infrastructure-as-Code
- Support the engineering team across all aspects of early-stage development; after all, this is a startup
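The event-driven, serverless style this role centers on can be sketched as a pure handler function for one device message. Everything here (the event shape, field names like `battery_pct`, and the threshold) is hypothetical, written loosely in the shape of an AWS Lambda handler:

```python
import json

# Hypothetical threshold; a real deployment would read this from configuration.
LOW_BATTERY_PCT = 20

def handle_device_event(event: dict) -> dict:
    """Process one wearable telemetry message and decide on follow-up actions.

    Shaped like a Lambda handler body: parse, validate, route. Keeping it a
    pure function makes it trivial to unit test without any cloud dependency.
    """
    payload = json.loads(event["body"]) if isinstance(event.get("body"), str) else event
    device_id = payload["device_id"]
    battery = payload.get("battery_pct")

    actions = []
    if battery is not None and battery < LOW_BATTERY_PCT:
        actions.append("notify_low_battery")
    if payload.get("gps_fix") is False:
        actions.append("flag_no_gps")

    return {"device_id": device_id, "actions": actions}
```

In a real deployment the returned actions would fan out to downstream queues or notification services rather than being returned directly.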
Requirements
- 5+ years of experience developing cloud architecture on AWS
- In-depth understanding of various AWS services, especially those related to IoT
- Expertise in cloud-hosted, event-driven, serverless architectures
- Expertise in programming languages suitable for AWS microservices (e.g., TypeScript, Python)
- Experience with networking and socket programming
- Experience with Kubernetes or similar orchestration platforms
- Experience with Infrastructure-as-Code tools (e.g., Terraform, AWS CDK)
- Familiarity with relational databases (PostgreSQL)
- Familiarity with Continuous Integration and Continuous Deployment (CI/CD)
Nice To Have
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, Electrical Engineering, or a related field
ROLES AND RESPONSIBILITIES:
You will be responsible for architecting, implementing, and optimizing Dremio-based data Lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
IDEAL CANDIDATE:
- Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
PREFERRED:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
- Exposure to Snowflake, Databricks, or BigQuery environments.
- Experience in high-tech, manufacturing, or enterprise data modernization programs.
Role: Full-Time, Long-Term
Required: Docker, GCP, CI/CD
Preferred: Experience with ML pipelines
OVERVIEW
We are seeking a DevOps engineer to join as a core member of our technical team. This is a long-term position for someone who wants to own infrastructure and deployment for a production machine learning system. You will ensure our prediction pipeline runs reliably, deploys smoothly, and scales as needed.
The ideal candidate thinks about failure modes obsessively, automates everything possible, and builds systems that run without constant attention.
CORE TECHNICAL REQUIREMENTS
Docker (Required): Deep experience with containerization. Efficient Dockerfiles, layer caching, multi-stage builds, debugging container issues. Experience with Docker Compose for local development.
Google Cloud Platform (Required): Strong GCP experience: Cloud Run for serverless containers, Compute Engine for VMs, Artifact Registry for images, Cloud Storage, IAM. You can navigate the console but prefer scripting everything.
CI/CD (Required): Build and maintain deployment pipelines. GitHub Actions required. You automate testing, building, pushing, and deploying. You understand the difference between continuous integration and continuous deployment.
Linux Administration (Required): Comfortable on the command line. SSH in, diagnose problems, manage services, read logs, fix things. Bash scripting is second nature.
PostgreSQL (Required): Database administration basics—backups, monitoring, connection management, basic performance tuning. Not a DBA, but comfortable keeping a production database healthy.
Infrastructure as Code (Preferred): Terraform, Pulumi, or similar. Infrastructure should be versioned, reviewed, and reproducible—not clicked together in a console.
WHAT YOU WILL OWN
Deployment Pipeline: Maintaining and improving deployment scripts and CI/CD workflows. Code moves from commit to production reliably with appropriate testing gates.
Cloud Run Services: Managing deployments for model fitting, data cleansing, and signal discovery services. Monitor health, optimize cold starts, handle scaling.
VM Infrastructure: PostgreSQL and Streamlit on GCP VMs. Instance management, updates, backups, security.
Container Registry: Managing images in GitHub Container Registry and Google Artifact Registry. Cleanup policies, versioning, access control.
Monitoring and Alerting: Building observability. Logging, metrics, health checks, alerting. Know when things break before users tell us.
Environment Management: Configuration across local and production. Secrets management. Environment parity where it matters.
WHAT SUCCESS LOOKS LIKE
Deployments are boring—no drama, no surprises. Systems recover automatically from transient failures. Engineers deploy with confidence. Infrastructure changes are versioned and reproducible. Costs are reasonable and resources scale appropriately.
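"Recover automatically from transient failures" usually starts with retries and exponential backoff around flaky operations. A minimal sketch, with simplifications: no jitter, no delay cap, and an illustrative set of retriable exception types:

```python
import time

def retry(fn, attempts=4, base_delay=0.5, retriable=(ConnectionError, TimeoutError)):
    """Call fn, retrying on transient errors with exponential backoff.

    Delays grow as base_delay * 2**n; production code would add jitter
    and an upper bound, and log each failed attempt.
    """
    for n in range(attempts):
        try:
            return fn()
        except retriable:
            if n == attempts - 1:
                raise  # out of attempts: surface the real error
            time.sleep(base_delay * (2 ** n))
```

Wrapping a flaky health probe or deploy step in `retry()` is what keeps transient network blips from becoming pages.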
ENGINEERING STANDARDS
Automation First: If you do something twice, automate it. Manual processes are bugs waiting to happen.
Documentation: Runbooks, architecture diagrams, deployment guides. The next person can understand and operate the system.
Security Mindset: Secrets never in code. Least-privilege access. You think about attack surfaces.
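"Secrets never in code" in practice means configuration is injected at runtime and the process fails fast when it is missing. A minimal sketch; the variable name `DB_PASSWORD` is just an example, and real deployments would typically source values from a secrets manager rather than plain environment variables:

```python
import os

def require_env(name: str) -> str:
    """Fetch a required secret from the environment; fail loudly if absent.

    Failing at startup beats a half-configured service discovering the
    gap mid-request.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value
```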
Reliability Focus: Design for failure. Backups are tested. Recovery procedures exist and work.
CURRENT ENVIRONMENT
GCP (Cloud Run, Compute Engine, Artifact Registry, Cloud Storage), Docker, Docker Compose, GitHub Actions, PostgreSQL 16, Bash deployment scripts with Python wrapper.
WHAT WE ARE LOOKING FOR
Ownership Mentality: You see a problem, you fix it. You do not wait for assignment.
Calm Under Pressure: When production breaks, you diagnose methodically.
Communication: You explain infrastructure decisions to non-infrastructure people. You document what you build.
Long-Term Thinking: You build systems maintained for years, not quick fixes creating tech debt.
EDUCATION
University degree in Computer Science, Engineering, or related field preferred. Equivalent demonstrated expertise also considered.
TO APPLY
Include: (1) CV/resume, (2) Brief description of infrastructure you built or maintained, (3) Links to relevant work if available, (4) Availability and timezone.
Experience: 7–10 years
CTC: up to 35 LPA
Skills:
- 6–10 years DevOps / SRE / Cloud Infrastructure experience
- Expert-level Kubernetes (networking, security, scaling, controllers)
- Terraform Infrastructure-as-Code mastery
- Hands-on Kafka production experience
- AWS cloud architecture and networking expertise
- Strong scripting in Python, Go, or Bash
- GitOps and CI/CD tooling experience
Key Responsibilities:
- Design highly available, secure cloud infrastructure supporting distributed microservices at scale
- Lead multi-cluster Kubernetes strategy optimized for GPU and multi-tenant workloads
- Implement Infrastructure-as-Code using Terraform across full infrastructure lifecycle
- Optimize Kafka-based data pipelines for throughput, fault tolerance, and low latency
- Deliver zero-downtime CI/CD pipelines using GitOps-driven deployment models
- Establish SRE practices with SLOs, p95 and p99 monitoring, and FinOps discipline
- Ensure production-ready disaster recovery and business continuity testing
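The p95/p99 monitoring mentioned above boils down to percentile math over latency samples. A nearest-rank sketch; the sample source and any alerting thresholds are left out as deployment-specific:

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: the smallest value with at least
    a fraction p of samples at or below it."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = math.ceil(p * len(ordered))  # 1-based rank
    return ordered[max(rank, 1) - 1]

def slo_report(latencies_ms: list[float]) -> dict:
    """Summarize p95/p99 the way an SLO dashboard panel would."""
    return {"p95": percentile(latencies_ms, 0.95), "p99": percentile(latencies_ms, 0.99)}
```

An SLO then becomes an assertion over this report, e.g. "p99 under 250 ms over a rolling window", evaluated continuously by the monitoring stack.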
If interested, kindly share your updated resume at 82008 31681.
Seeking a Senior Staff Cloud Engineer to lead the design, development, and optimization of scalable cloud architectures, drive automation across the platform, and collaborate with cross-functional stakeholders to deliver secure, high-performance cloud solutions aligned with business goals.
Responsibilities:
- Cloud Architecture & Strategy
- Define and evolve the company’s cloud architecture, with AWS as the primary platform.
- Design secure, scalable, and resilient cloud-native and event-driven architectures to support product growth and enterprise demands.
- Create and scale up our platform for integrations with our enterprise customers (webhooks, data pipelines, connectors, batch ingestions, etc.)
- Partner with engineering and product to convert custom solutions into productised capabilities.
- Security & Compliance Enablement
- Act as a foundational partner in building out the company’s security and compliance functions.
- Help define cloud security architecture, policies, and controls to meet enterprise and customer requirements.
- Guide compliance teams on technical approaches to SOC2, ISO 27001, GDPR, and GxP standards.
- Mentor engineers and security specialists on embedding secure-by-design and compliance-first practices.
- Customer & Solutions Enablement
- Work with Solutions Engineering and customers to design and validate complex deployments.
- Contribute to processes that productise custom implementations into scalable platform features.
- Leadership & Influence
- Serve as a technical thought leader across cloud, data, and security domains.
- Collaborate with cross-functional leadership (Product, Platform, TPM, Security) to align technical strategy with business goals.
- Act as an advisor to security and compliance teams during their growth, helping establish scalable practices and frameworks.
- Represent the company in customer and partner discussions as a trusted cloud and security subject matter expert.
- Data Platforms & Governance
- Provide guidance to the data engineering team on database architecture, storage design, and integration patterns.
- Advise on selection and optimisation of a wide variety of databases (relational, NoSQL, time-series, graph, analytical).
- Collaborate on data governance frameworks covering lifecycle management, retention, classification, and access controls.
- Partner with data and compliance teams to ensure regulatory alignment and strong data security practices.
- Developer Experience & DevOps
- Build and maintain tools, automation, and CI/CD pipelines that accelerate developer velocity.
- Promote best practices for infrastructure as code, containerisation, observability, and cost optimisation.
- Embed security, compliance, and reliability standards into the development lifecycle.
Requirements:
- 12+ years of experience in cloud engineering or architecture roles.
- Deep expertise in AWS and strong understanding of modern distributed application design (microservices, containers, event-driven architectures).
- Hands-on experience with a wide range of databases (SQL, NoSQL, analytical, and specialized systems).
- Strong foundation in data management and governance, including lifecycle and compliance.
- Experience supporting or helping build security and compliance functions within a SaaS or enterprise environment.
- Expertise with IaC (Terraform, CDK, CloudFormation) and CI/CD pipelines.
- Strong foundation in networking, security, observability, and performance engineering.
- Excellent communication and influencing skills, with the ability to partner across technical and business functions.
Good to Have:
- Exposure to Azure, GCP, or other cloud environments.
- Experience working in SaaS/PaaS at enterprise scale.
- Background in product engineering, with experience shaping technical direction in collaboration with product teams.
- Knowledge of regulatory and compliance standards (SOC2, ISO 27001, GDPR, and GxP).
About Albert Invent
Albert Invent is a cutting-edge AI-driven software company headquartered in Oakland, California, on a mission to empower scientists and innovators in chemistry and materials science to invent the future faster. Every day, scientists in 30+ countries use Albert to accelerate R&D with AI trained like a chemist, bringing better products to market, faster.
Why Join Albert Invent
- Joining Albert Invent means becoming part of a mission-driven, fast-growing global team at the intersection of AI, data, and advanced materials science.
- You will collaborate with world-class scientists and technologists to redefine how new materials are discovered, developed, and brought to market.
- The culture is built on curiosity, collaboration, and ownership, with a strong focus on learning and impact.
- You will enjoy the opportunity to work on cutting-edge AI tools that accelerate real-world R&D and solve global challenges from sustainability to advanced manufacturing, all while growing your career in a high-energy environment.

Global digital transformation solutions provider.
Job Description
We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.
Key Responsibilities:
- Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
- Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
- Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
- Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
- Ensure compliance with security best practices and organizational policies across GCP environments.
- Document processes, configurations, and architectural decisions to maintain operational transparency.
- Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.
Mandatory Skills:
- Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
- Jenkins – Expertise in Declarative Pipeline creation and optimization.
- CI/CD – Strong understanding of automated build, test, and deployment workflows.
- Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
- Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks.
Preferred Skills:
- Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
- Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
- Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
- GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.
Skills
GCP, Jenkins, CI/CD, AWS
******
Notice period: 0 to 15 days only
Location – Pune, Trivandrum, Kochi, Chennai
Required Skills: Advanced AWS Infrastructure Expertise, CI/CD Pipeline Automation, Monitoring, Observability & Incident Management, Security, Networking & Risk Management, Infrastructure as Code & Scripting
Criteria:
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies (B2C scale preferred)
- Strong hands-on AWS expertise across core and advanced services (EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, VPC, IAM, ELB/ALB, Route53)
- Proven experience designing high-availability, fault-tolerant cloud architectures for large-scale traffic
- Strong experience building & maintaining CI/CD pipelines (Jenkins mandatory; GitHub Actions/GitLab CI a plus)
- Prior experience running production-grade microservices deployments and automated rollout strategies (Blue/Green, Canary)
- Hands-on experience with monitoring & observability tools (Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.)
- Solid hands-on experience with MongoDB in production, including performance tuning, indexing & replication
- Strong scripting skills (Bash, Shell, Python) for automation
- Hands-on experience with IaC (Terraform, CloudFormation, or Ansible)
- Deep understanding of networking fundamentals (VPC, subnets, routing, NAT, security groups)
- Strong experience in incident management, root cause analysis & production firefighting
Description
Role Overview
Company is seeking an experienced Senior DevOps Engineer to design, build, and optimize cloud infrastructure on AWS, automate CI/CD pipelines, implement monitoring and security frameworks, and proactively identify scalability challenges. This role requires someone who has hands-on experience running infrastructure at B2C product scale, ideally in media/OTT or high-traffic applications.
Key Responsibilities
1. Cloud Infrastructure — AWS (Primary Focus)
- Architect, deploy, and manage scalable infrastructure using AWS services such as EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, ELB/ALB, VPC, IAM, Route53, etc.
- Optimize cloud cost, resource utilization, and performance across environments.
- Design high-availability, fault-tolerant systems for streaming workloads.
2. CI/CD Automation
- Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
- Automate deployments for microservices, mobile apps, and backend APIs.
- Implement blue/green and canary deployments for seamless production rollouts.
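Canary rollouts like the one listed above route a small, sticky fraction of traffic to the new version. A hashing sketch; in practice the load balancer or service mesh does this, and the 5% default and user-id keying here are illustrative:

```python
import hashlib

def canary_bucket(user_id: str, canary_pct: int = 5) -> str:
    """Deterministically assign a user to 'canary' or 'stable'.

    Hashing keeps the assignment sticky across requests, so one user
    never flaps between versions mid-session.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # uniform-ish bucket in [0, 100)
    return "canary" if bucket < canary_pct else "stable"
```

Blue/green is the degenerate case: `canary_pct` jumps from 0 to 100 in one step once the new environment passes its checks.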
3. Observability & Monitoring
- Implement logging, metrics, and alerting using tools like Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.
- Perform proactive performance analysis to minimize downtime and bottlenecks.
- Set up dashboards for real-time visibility into system health and user traffic spikes.
4. Security, Compliance & Risk Highlighting
- Conduct frequent risk assessments and identify vulnerabilities in:
  - Cloud architecture
  - Access policies (IAM)
  - Secrets & key management
  - Data flows & network exposure
- Implement security best practices including VPC isolation, WAF rules, firewall policies, and SSL/TLS management.
5. Scalability & Reliability Engineering
- Analyze traffic patterns for OTT-specific load variations (weekends, new releases, peak hours).
- Identify scalability gaps and propose solutions across:
  - Microservices
  - Caching layers
  - CDN distribution (CloudFront)
  - Database workloads
- Perform capacity planning and load testing to ensure readiness for 10x traffic growth.
6. Database & Storage Support
- Administer and optimize MongoDB for high-read/low-latency use cases.
- Design backup, recovery, and data replication strategies.
- Work closely with backend teams to tune query performance and indexing.
7. Automation & Infrastructure as Code
- Implement IaC using Terraform, CloudFormation, or Ansible.
- Automate repetitive infrastructure tasks to ensure consistency across environments.
Required Skills & Experience
Technical Must-Haves
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies.
- Strong hands-on experience with AWS (core and advanced services).
- Expertise in Jenkins CI/CD pipelines.
- Solid background working with MongoDB in production environments.
- Good understanding of networking: VPCs, subnets, security groups, NAT, routing.
- Strong scripting experience (Bash, Python, Shell).
- Experience handling risk identification, root cause analysis, and incident management.
Nice to Have
- Experience with OTT, video streaming, media, or any content-heavy product environments.
- Familiarity with containers (Docker), orchestration (Kubernetes/EKS), and service mesh.
- Understanding of CDN, caching, and streaming pipelines.
Personality & Mindset
- Strong sense of ownership and urgency—DevOps is mission critical at OTT scale.
- Proactive problem solver with ability to think about long-term scalability.
- Comfortable working with cross-functional engineering teams.
Why join the company?
- Build and operate infrastructure powering millions of monthly users.
- Opportunity to shape DevOps culture and cloud architecture from the ground up.
- High-impact role in a fast-scaling Indian OTT product.
ROLE & RESPONSIBILITIES:
We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.
KEY RESPONSIBILITIES:
1. Cloud Security (AWS)-
- Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
- Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
- Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
- Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
- Ensure encryption of data at rest/in transit across all cloud services.
2. DevOps Security (IaC, CI/CD, Kubernetes, Linux)-
Infrastructure as Code & Automation Security:
- Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
- Enforce misconfiguration scanning and automated remediation.
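Policy-as-code checks of the kind Checkov or tfsec perform can be pictured as assertions over a parsed Terraform plan. A toy sketch over a plan-shaped dict; the resource layout loosely mirrors `terraform show -json` output, and real policy engines such as OPA evaluate many rules against the full resource graph:

```python
def find_public_buckets(plan: dict) -> list[str]:
    """Flag S3 buckets whose ACL is public in a parsed Terraform plan.

    A CI gate would fail the pipeline when this returns a non-empty
    list, before the misconfiguration ever reaches the cloud.
    """
    violations = []
    for res in plan.get("planned_values", {}).get("root_module", {}).get("resources", []):
        if res.get("type") == "aws_s3_bucket":
            acl = res.get("values", {}).get("acl", "private")
            if acl in ("public-read", "public-read-write"):
                violations.append(res.get("address", "<unknown>"))
    return violations
```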
CI/CD Security:
- Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
- Implement secure build, artifact signing, and deployment workflows.
Containers & Kubernetes:
- Harden Docker images, private registries, runtime policies.
- Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
- Apply CIS Benchmarks for Kubernetes and Linux.
Monitoring & Reliability:
- Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
- Ensure audit logging across cloud/platform layers.
3. MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-
Pipeline & Workflow Security:
- Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
- Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.
ML Platform Security:
- Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
- Control model access, artifact protection, model registry security, and ML metadata integrity.
Data Security:
- Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
- Enforce data versioning security, lineage tracking, PII protection, and access governance.
ML Observability:
- Implement drift detection (data drift/model drift), feature monitoring, audit logging.
- Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
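Data-drift detection as listed above often starts with comparing a feature's live distribution to its training baseline. A crude mean-shift sketch; the z-threshold of 3 is an arbitrary illustrative choice, and production systems typically use tests like KS or PSI across many features, though the alerting shape is the same:

```python
import statistics

def mean_drifted(baseline: list[float], live: list[float], z_threshold: float = 3.0) -> bool:
    """Flag drift when the live mean sits far outside the baseline distribution.

    A z-score on means: cheap, interpretable, and easy to wire into the
    same Grafana/Prometheus alerting path as infrastructure metrics.
    """
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    if sigma == 0:
        return statistics.mean(live) != mu  # degenerate baseline: any shift counts
    z = abs(statistics.mean(live) - mu) / sigma
    return z > z_threshold
```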
4. Network & Endpoint Security-
- Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
- Conduct vulnerability assessments, penetration test coordination, and network segmentation.
- Secure remote workforce connectivity and internal office networks.
5. Threat Detection, Incident Response & Compliance-
- Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
- Build security alerts, automated threat detection, and incident workflows.
- Lead incident containment, forensics, RCA, and remediation.
- Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
- Maintain security policies, procedures, RRPs (Runbooks), and audits.
IDEAL CANDIDATE:
- 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
- Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
- Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
- Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
- Strong Linux security (CIS hardening, auditing, intrusion detection).
- Proficiency in Python, Bash, and automation/scripting.
- Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
- Understanding of microservices, API security, serverless security.
- Strong understanding of vulnerability management, penetration testing practices, and remediation plans.
EDUCATION:
- Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
- Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.
PERKS, BENEFITS AND WORK CULTURE:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
Required Skills: TypeScript, MVC, Cloud experience (Azure, AWS, etc.), MongoDB, Express.js, Nest.js
Criteria:
Candidates from growing startups or product-based companies only
1. 4–8 years’ experience in backend engineering
2. Minimum 2+ years hands-on experience with:
- TypeScript
- Express.js / Nest.js
3. Strong experience with MongoDB (or MySQL / PostgreSQL / DynamoDB)
4. Strong understanding of system design & scalable architecture
5. Hands-on experience in:
- Event-driven architecture / Domain-driven design
- MVC / Microservices
6. Strong in automated testing (especially integration tests)
7. Experience with CI/CD pipelines (GitHub Actions or similar)
8. Experience managing production systems
9. Solid understanding of performance, reliability, observability
10. Cloud experience (AWS preferred; GCP/Azure acceptable)
11. Strong coding standards — Clean Code, code reviews, refactoring
Description
About the opportunity
We are looking for an exceptional Senior Software Engineer to join our Backend team. This is a unique opportunity to join a fast-growing company where you will get to solve real customer and business problems, shape the future of a product built for Bharat and build the engineering culture of the team. You will have immense responsibility and autonomy to push the boundaries of engineering to deliver scalable and resilient systems.
As a Senior Software Engineer, you will be responsible for shipping innovative features at breakneck speed, designing the architecture, mentoring other engineers on the team and pushing for a high bar of engineering standards like code quality, automated testing, performance, CI/CD, etc. If you are someone who loves solving problems for customers, technology, the craft of software engineering, and the thrill of building startups, we would like to talk to you.
What you will be doing
- Build and ship features in our Node.js codebase (now migrating to TypeScript) that directly impact user experience and help move the top and bottom line of the business.
- Collaborate closely with our product, design and data teams to build innovative features and deliver a world-class product to our customers. At company, product managers don’t “tell” engineers what to build; we all collaborate on how to solve a problem for our customers and the business, and engineering plays a big part in it.
- Design scalable platforms that empower our product and marketing teams to rapidly experiment.
- Own the quality of our products by writing automated tests, reviewing code, making systems observable and resilient to failures.
- Drive code quality and pay down architectural debt by continuous analysis of our codebases and systems, and continuous refactoring.
- Architect our systems for faster iterations, releasability, scalability and high availability using practices like Domain Driven Design, Event Driven Architecture, Cloud Native Architecture and Observability.
- Set the engineering culture with the rest of the team by defining how we should work as a team, set standards for quality, and improve the speed of engineering execution.
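As a concrete illustration of the event-driven architecture mentioned above, here is the publish/subscribe core in miniature (shown in Python for brevity; the event names and handlers are illustrative, and a production system would sit on a durable broker such as Kafka or SQS rather than an in-process bus):

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    name: str
    payload: dict

class EventBus:
    """Minimal in-process publish/subscribe bus: producers emit events
    without knowing which handlers (if any) will react to them."""

    def __init__(self):
        self._handlers: dict[str, list[Callable[[Event], None]]] = defaultdict(list)

    def subscribe(self, name: str, handler: Callable[[Event], None]) -> None:
        self._handlers[name].append(handler)

    def publish(self, event: Event) -> None:
        # Fan the event out to every subscriber registered for its name.
        for handler in self._handlers[event.name]:
            handler(event)

if __name__ == "__main__":
    bus = EventBus()
    audit_log = []
    bus.subscribe("order.placed", lambda e: audit_log.append(e.payload["id"]))
    bus.subscribe("order.placed", lambda e: print("notify:", e.payload["id"]))
    bus.publish(Event("order.placed", {"id": "ord-42"}))
```

The design payoff is decoupling: new behavior (auditing, notifications) is added by subscribing, without changing the code that publishes the event.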
The role could be ideal for you if you:
- Have 4–8 years of backend engineering experience, with at least 2 years of production experience in TypeScript, Express.js (or another popular framework like Nest.js), and MongoDB (or another popular database like MySQL, PostgreSQL, DynamoDB, etc.).
- Are well versed in one or more architectures and design patterns such as MVC, Domain Driven Design, CQRS, Event Driven Architecture, Cloud Native Architecture, etc.
- Are experienced in writing automated tests (especially integration tests) and in Continuous Integration. At company, engineers own quality, so writing automated tests is crucial to the role.
- Have experience managing production infrastructure on public cloud providers (AWS, GCP, Azure, etc.). Bonus: experience with Kubernetes.
- Have experience with observability techniques like code instrumentation for metrics, tracing, and logging.
- Care deeply about code quality, code reviews, software architecture (think Object-Oriented Programming, Clean Code, etc.), scalability, and reliability. Bonus: hands-on experience with these in past roles.
- Understand the importance of shipping fast in a startup environment and constantly look for ingenious ways to do so.
- Collaborate well with everyone on the team. We communicate a lot and don’t hesitate to get quick feedback from other team members sooner rather than later.
- Can take ownership of goals and deliver them with high accountability.
- Don’t hesitate to try out new technologies. At company, nobody is limited to a role: every engineer on our team is an expert in at least one technology but often ventures into adjacent ones like React.js, Flutter, Data Platforms, AWS, and Kubernetes. If this doesn’t excite you, you will not enjoy working at company. Bonus: experience with adjacent technologies such as AWS (or any public cloud provider), GitHub Actions (or CircleCI), Kubernetes, and Infrastructure as Code (Terraform, Pulumi, etc.).
Strong DevSecOps / Cloud Security profile
Mandatory (Experience 1) – Must have 8+ years total experience in DevSecOps / Cloud Security / Platform Security roles securing AWS workloads and CI/CD systems.
Mandatory (Experience 2) – Must have strong hands-on experience securing AWS services, including but not limited to KMS, WAF, Shield, CloudTrail, AWS Config, Security Hub, Inspector, Macie, and IAM governance.
Mandatory (Experience 3) – Must have hands-on expertise in identity & access security, including RBAC, IRSA, PSP/PSS, SCPs, and IAM least-privilege enforcement.
Mandatory (Experience 4) – Must have hands-on experience with security automation using Terraform and Ansible for configuration hardening and compliance.
Mandatory (Experience 5) – Must have strong container & Kubernetes security experience, including Docker image scanning, EKS runtime controls, network policies, and registry security.
Mandatory (Experience 6) – Must have strong CI/CD pipeline security expertise, including SAST, DAST, SCA, Jenkins security, artifact integrity, secrets protection, and automated remediation.
Mandatory (Experience 7) – Must have experience securing data & ML platforms, including databases, data centers/on-prem environments, MWAA/Airflow, and sensitive ETL/ML workflows.
Mandatory (Company) – Product companies preferred; exception for service-company candidates with strong MLOps + AWS depth.