
50+ Terraform Jobs in India

Apply to 50+ Terraform Jobs on CutShort.io. Find your next job, effortlessly. Browse Terraform Jobs and apply today!

VegaStack
Posted by Careers VegaStack
Bengaluru (Bangalore)
2 - 3 yrs
₹5L - ₹8L / yr
Git, GitHub, GitLab, Docker, Jenkins, +3 more

Who We Are

We're a DevOps and Automation company based in Bengaluru, India. We have delivered over 170 automation projects for 65+ global clients, including Fortune 500 enterprises that trust us with mission-critical infrastructure and operations. We're bootstrapped, profitable, and scaling quickly by consistently solving complex engineering challenges with precision and reliability.

What We Value

  • Ownership: As part of our team, you're responsible for strategy and outcomes, not just completing assigned tasks.
  • High Velocity: We move fast and iterate faster to amplify our impact, without letting speed compromise quality.

Who We Seek

We are hiring DevOps Engineers (6 months - 1 year experience) to join our DevOps team. You will work on infrastructure automation, CI/CD pipelines, cloud deployments, container orchestration, and system reliability.

This role is ideal for someone who wants to work with modern DevOps tooling and contribute to high-impact engineering decisions.

🌏 Job Location: Bengaluru (Work From Office)

What You Will Be Doing

CI/CD Pipeline Management

  • Design, implement, and maintain efficient CI/CD pipelines using Jenkins, GitLab CI, Azure DevOps, or similar tools.
  • Automate build, test, and deployment processes to increase delivery speed and reliability.

Infrastructure as Code (IaC)

  • Provision and manage infrastructure on AWS, Azure, or GCP using Terraform, CloudFormation, or Ansible.
  • Maintain scalable, secure, and cost-optimized environments.

Containerization & Orchestration

  • Build and manage Docker-based environments.
  • Deploy and scale workloads using Kubernetes.

Monitoring & Alerting

  • Implement monitoring, logging, and alerting systems using Prometheus, Grafana, ELK Stack, Datadog, or similar.
  • Develop dashboards and alerts to detect issues proactively.

System Reliability & Performance

  • Implement systems for high availability, disaster recovery, and fault tolerance.
  • Troubleshoot and optimize infrastructure performance.

Scripting & Automation

  • Write automation scripts in Python, Bash, or Shell to streamline operations (a short Python sketch follows this list).
  • Automate repetitive workflows to reduce manual intervention.
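
A minimal, purely illustrative Python sketch of this kind of operational automation: a disk-usage check that exits non-zero when a filesystem crosses a threshold. The paths and threshold below are hypothetical placeholders.

```python
#!/usr/bin/env python3
"""Illustrative ops-automation sketch: warn when a filesystem crosses a usage threshold."""
import shutil
import sys

# Hypothetical values; a real script would read these from configuration.
PATHS = ["/", "/var", "/tmp"]
THRESHOLD_PERCENT = 80

def usage_ok(path: str, threshold: int) -> bool:
    """Return True if usage of `path` is below `threshold` percent."""
    usage = shutil.disk_usage(path)
    percent_used = usage.used / usage.total * 100
    print(f"{path}: {percent_used:.1f}% used")
    return percent_used < threshold

if __name__ == "__main__":
    results = [usage_ok(p, THRESHOLD_PERCENT) for p in PATHS]
    sys.exit(0 if all(results) else 1)  # non-zero exit lets cron/CI raise an alert
```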

Collaboration & Best Practices

  • Work closely with Development, QA, and Security teams to embed DevOps best practices into the SDLC.
  • Follow security standards for deployments and infrastructure.
  • Work efficiently with Unix/Linux systems and understand core networking concepts (DNS, DHCP, NAT, VPN, TCP/IP).

What We’re Looking For

  • Strong understanding of Linux distributions (Ubuntu, CentOS, RHEL) and Windows environments.
  • Proficiency with Git and experience using GitHub, GitLab, or Bitbucket.
  • Ability to write automation scripts using Bash/Shell or Python.
  • Basic knowledge of relational databases like MySQL or PostgreSQL.
  • Familiarity with web servers such as NGINX or Apache2.
  • Experience working with AWS, Azure, GCP, or DigitalOcean.
  • Foundational understanding of Ansible for configuration management.
  • Basic knowledge of Terraform or CloudFormation for IaC.
  • Hands-on experience with Jenkins or GitLab CI/CD pipelines.
  • Strong knowledge of Docker for containerization.
  • Basic exposure to Kubernetes for orchestration.
  • Familiarity with at least one programming language (Java, Node.js, or Python).

Benefits

🤝 Work directly with founders and engineering leaders.

💪 Drive projects that create real business impact, not busywork.

💡 Gain practical, industry-relevant skills you won’t learn in college.

🚀 Accelerate your growth by working on meaningful engineering challenges.

📈 Learn continuously with mentorship and structured development opportunities.

🤗 Be part of a collaborative, high-energy workplace that values innovation.

Read more
VegaStack
Posted by Careers VegaStack
Bengaluru (Bangalore)
1 - 2 yrs
₹4L - ₹5L / yr
Git, GitHub, GitLab, Docker, Jenkins, +3 more

Who We Are

We're a DevOps and Automation company based in Bengaluru, India. We have delivered over 170 automation projects for 65+ global clients, including Fortune 500 enterprises that trust us with mission-critical infrastructure and operations. We're bootstrapped, profitable, and scaling quickly by consistently solving complex engineering challenges with precision and reliability.

What We Value

  • Ownership: As part of our team, you're responsible for strategy and outcomes, not just completing assigned tasks.
  • High Velocity: We move fast and iterate faster to amplify our impact, without letting speed compromise quality.

Who We Seek

We are hiring DevOps Engineers (6 months - 1 year experience) to join our DevOps team. You will work on infrastructure automation, CI/CD pipelines, cloud deployments, container orchestration, and system reliability.

This role is ideal for someone who wants to work with modern DevOps tooling and contribute to high-impact engineering decisions.

🌏 Job Location: Bengaluru (Work From Office)

What You Will Be Doing

CI/CD Pipeline Management

  • Design, implement, and maintain efficient CI/CD pipelines using Jenkins, GitLab CI, Azure DevOps, or similar tools.
  • Automate build, test, and deployment processes to increase delivery speed and reliability.

Infrastructure as Code (IaC)

  • Provision and manage infrastructure on AWS, Azure, or GCP using Terraform, CloudFormation, or Ansible.
  • Maintain scalable, secure, and cost-optimized environments.

Containerization & Orchestration

  • Build and manage Docker-based environments.
  • Deploy and scale workloads using Kubernetes.

Monitoring & Alerting

  • Implement monitoring, logging, and alerting systems using Prometheus, Grafana, ELK Stack, Datadog, or similar.
  • Develop dashboards and alerts to detect issues proactively.

System Reliability & Performance

  • Implement systems for high availability, disaster recovery, and fault tolerance.
  • Troubleshoot and optimize infrastructure performance.

Scripting & Automation

  • Write automation scripts in Python, Bash, or Shell to streamline operations.
  • Automate repetitive workflows to reduce manual intervention.

Collaboration & Best Practices

  • Work closely with Development, QA, and Security teams to embed DevOps best practices into the SDLC.
  • Follow security standards for deployments and infrastructure.
  • Work efficiently with Unix/Linux systems and understand core networking concepts (DNS, DHCP, NAT, VPN, TCP/IP).

What We’re Looking For

  • Strong understanding of Linux distributions (Ubuntu, CentOS, RHEL) and Windows environments.
  • Proficiency with Git and experience using GitHub, GitLab, or Bitbucket.
  • Ability to write automation scripts using Bash/Shell or Python.
  • Basic knowledge of relational databases like MySQL or PostgreSQL.
  • Familiarity with web servers such as NGINX or Apache2.
  • Experience working with AWS, Azure, GCP, or DigitalOcean.
  • Foundational understanding of Ansible for configuration management.
  • Basic knowledge of Terraform or CloudFormation for IaC.
  • Hands-on experience with Jenkins or GitLab CI/CD pipelines.
  • Strong knowledge of Docker for containerization.
  • Basic exposure to Kubernetes for orchestration.
  • Familiarity with at least one programming language (Java, Node.js, or Python).

Benefits

🤝 Work directly with founders and engineering leaders.

💪 Drive projects that create real business impact, not busywork.

💡 Gain practical, industry-relevant skills you won’t learn in college.

🚀 Accelerate your growth by working on meaningful engineering challenges.

📈 Learn continuously with mentorship and structured development opportunities.

🤗 Be part of a collaborative, high-energy workplace that values innovation.

Read more
VegaStack
Posted by Careers VegaStack
Bengaluru (Bangalore)
0 - 1 yrs
₹10 - ₹15 / mo
Git, GitLab, Ansible, Docker, Terraform, +2 more

Who We Are

We're a DevOps and Automation company based in Bengaluru, India. We have successfully delivered over 170 automation projects for 65+ global businesses, including Fortune 500 companies that trust us with their mission-critical infrastructure and operations. We're bootstrapped, profitable, and scaling quickly by consistently solving high-impact engineering problems.

What We Value

Ownership: You take accountability for outcomes, not just tasks.

High Velocity: We iterate fast, learn constantly, and deliver with precision.

Who We Seek

We are looking for a DevOps Intern to join our DevOps team and gain hands-on experience working with real infrastructure, automation pipelines, and deployment environments. You will support CI/CD processes, cloud environments, monitoring, and system reliability while learning industry-standard tools and practices.

We’re seeking someone who is technically curious, eager to learn, and driven to build reliable systems in a fast-paced engineering environment.

🌏 Job Location: Bengaluru (Work From Office)

What You Will Be Doing

  • Assist in deploying product updates, monitoring system performance, and identifying production issues.
  • Contribute to building and improving CI/CD pipelines for automated deployments.
  • Support the provisioning, configuration, and maintenance of cloud infrastructure.
  • Work with tools like Docker, Jenkins, Git, and monitoring systems to streamline workflows.
  • Help automate recurring operational processes using scripting and DevOps tools (a short Python sketch follows this list).
  • Participate in backend integrations aligned with product or customer requirements.
  • Collaborate with developers, QA, and operations to improve reliability and scalability.
  • Gain exposure to containerization, infrastructure-as-code, and cloud platforms.
  • Document processes, configurations, and system behaviours to support team efficiency.
  • Learn and apply DevOps best practices in real-world environments.
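
For illustration, a minimal Python sketch of a recurring operational task an intern might automate: deleting log files older than a retention window. The directory and retention period are placeholders.

```python
"""Illustrative sketch of a recurring operational task: deleting log files older than N days."""
from pathlib import Path
import time

LOG_DIR = Path("/var/log/myapp")   # hypothetical directory
RETENTION_DAYS = 14

def purge_old_logs(directory: Path, retention_days: int) -> int:
    """Remove *.log files whose last modification is older than the retention window."""
    if not directory.is_dir():
        return 0
    cutoff = time.time() - retention_days * 86400
    removed = 0
    for log_file in directory.glob("*.log"):
        if log_file.stat().st_mtime < cutoff:
            log_file.unlink()
            removed += 1
    return removed

if __name__ == "__main__":
    print(f"removed {purge_old_logs(LOG_DIR, RETENTION_DAYS)} old log files")
```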

What We’re Looking For

  • Hands-on experience or coursework with Docker, Linux, or cloud fundamentals.
  • Familiarity with Jenkins, Git, or basic CI/CD concepts.
  • Basic understanding of AWS, Azure, or Google Cloud environments.
  • Exposure to configuration management tools like Ansible, Puppet, or similar.
  • Interest in Kubernetes, Terraform, or infrastructure-as-code practices.
  • Ability to write or modify simple shell or Python scripts.
  • Strong analytical and troubleshooting mindset.
  • Good communication skills with the ability to articulate technical concepts clearly.
  • Eagerness to learn, take initiative, and adapt in a fast-moving engineering environment.
  • Attention to detail and a commitment to accuracy and reliability.

Benefits

🤝 Work directly with founders and senior engineers.

💪 Contribute to live projects that impact real customers and systems.

💡 Learn tools and practices that engineering programs rarely teach.

🚀 Accelerate your growth through real-world problem solving.

📈 Build a strong DevOps foundation with continuous learning opportunities.

🤗 Thrive in a collaborative environment that encourages experimentation and growth.

Read more
Poshmark


3 candid answers
1 recruiter
Posted by Eman Khan
Chennai
8 - 15 yrs
₹20L - ₹40L / yr
Kubernetes, Amazon Web Services (AWS), Terraform, Reliability engineering, DevOps

We’re looking for an experienced Site Reliability Engineer to fill the mission-critical role of ensuring that our complex, web-scale systems are healthy, monitored, automated, and designed to scale. You will use your background as an operations generalist to work closely with our development teams from the early stages of design all the way through identifying and resolving production issues. The ideal candidate will be passionate about an operations role that involves deep knowledge of both the application and the product, and will also believe that automation is a key component to operating large-scale systems.


6-Month Accomplishments

  • Familiarize yourself with the Poshmark tech stack and functional requirements.
  • Get comfortable with the automation tools/frameworks used within the CloudOps organization and the deployment processes associated with them.
  • Gain in-depth knowledge of the relevant product functionality and the infrastructure required for it.
  • Start contributing by working on small to medium-scale projects.
  • Understand and follow the on-call rotation as a secondary to get familiar with the on-call process.


12+ Month Accomplishments

  • Execute projects related to comms functionality independently, with little guidance from the lead.
  • Create meaningful alerts and dashboards for the various sub-systems involved in the targeted infrastructure.
  • Identify gaps in the infrastructure and suggest improvements or work on them.
  • Join the on-call rotation.


Responsibilities

  • Serve as a primary point of responsibility for the overall health, performance, and capacity of one or more of our Internet-facing services.
  • Gain deep knowledge of our complex applications.
  • Assist in the roll-out and deployment of new product features and installations to facilitate our rapid iteration and constant growth.
  • Develop tools to improve our ability to rapidly deploy and effectively monitor custom applications in a large-scale UNIX environment.
  • Work closely with development teams to ensure that platforms are designed with "operability" in mind.
  • Function well in a fast-paced, rapidly-changing environment.
  • Participate in a 24x7 on-call rotation.


Desired Skills

  • 5+ years of experience in a Systems Engineering/Site Reliability Operations role is required, ideally at a startup or fast-growing company.
  • 5+ years in a UNIX-based, large-scale web operations role.
  • 5+ years of experience providing 24/7 support for large-scale production environments.
  • Battle-proven, real-life experience running a large-scale production operation.
  • Experience working on cloud-based infrastructure, e.g., AWS, GCP, or Azure.
  • Hands-on experience with continuous integration tools such as Jenkins, configuration management with Ansible, and systems monitoring and alerting with tools such as Nagios, New Relic, and Graphite.
  • Experience scripting/coding (a short Graphite example follows this list).
  • Ability to use a wide variety of open source technologies and tools.
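
As a purely illustrative sketch of the scripting side of this role, here is one way a data point could be pushed to Graphite's plaintext listener from Python. The host, port, and metric path are placeholders, not details from the posting.

```python
"""Illustrative sketch: push one data point to Graphite's plaintext listener."""
import socket
import time

GRAPHITE_HOST = "graphite.internal"  # hypothetical host
GRAPHITE_PORT = 2003                 # Graphite's default plaintext port

def send_metric(path: str, value: float) -> None:
    """Send a single `metric value timestamp` line over TCP."""
    line = f"{path} {value} {int(time.time())}\n"
    with socket.create_connection((GRAPHITE_HOST, GRAPHITE_PORT), timeout=5) as sock:
        sock.sendall(line.encode("ascii"))

if __name__ == "__main__":
    send_metric("ops.example.queue_depth", 42)
```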


Technologies we use:

  • Ruby, JavaScript, Node.js, Tomcat, Nginx, HAProxy
  • MongoDB, RabbitMQ, Redis, Elasticsearch
  • Amazon Web Services (EC2, RDS, CloudFront, S3, etc.)
  • Terraform, Packer, Jenkins, Datadog, Kubernetes, Docker, Ansible and other DevOps tools.
Read more
Albert Invent


4 candid answers
3 recruiters
Posted by Nikita Sinha
Hyderabad
2 - 4 yrs
Up to ₹16L / yr (varies)
Automation, Terraform, Python, Node.js, Amazon Web Services (AWS)

The Software Engineer – SRE will be responsible for building and maintaining highly reliable, scalable, and secure infrastructure that powers the Albert platform. This role focuses on automation, observability, and operational excellence to ensure seamless deployment, performance, and reliability of core platform services.


Key Responsibilities

  • Act as a passionate representative of the Albert product and brand.
  • Collaborate with Product Engineering and other stakeholders to plan and deliver core platform capabilities that enable scalability, reliability, and developer productivity.
  • Work with the Site Reliability Engineering (SRE) team on shared full-stack ownership of a collection of services and/or technology areas.
  • Understand the end-to-end configuration, technical dependencies, and overall behavioral characteristics of all microservices.
  • Design and deliver the mission-critical stack, focusing on security, resiliency, scale, and performance.
  • Take ownership of end-to-end performance and operability.
  • Apply strong knowledge of automation and orchestration principles.
  • Serve as the ultimate escalation point for complex or critical issues not yet documented as Standard Operating Procedures (SOPs).
  • Troubleshoot and define mitigations using a deep understanding of service topology and dependencies.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
  • 2+ years of software engineering experience, with at least 1 year in an SRE role focused on automation.
  • Strong experience in Infrastructure as Code (IaC), preferably using Terraform.
  • Proficiency in Python or Node.js, with experience designing RESTful APIs and working in microservices architecture.
  • Solid expertise in AWS cloud infrastructure and platform technologies including APIs, distributed systems, and microservices.
  • Hands-on experience with observability stacks, including centralized log management, metrics, and tracing.
  • Familiarity with CI/CD tools (e.g., CircleCI) and performance testing tools like K6.
  • Passion for bringing automation and standardization to engineering operations.
  • Ability to build high-performance APIs with low latency (<200ms).
  • Ability to work in a fast-paced environment, learning from peers and leaders.
  • Demonstrated ability to mentor other engineers and contribute to team growth, including participation in recruiting activities.

Good to Have

  • Experience with Kubernetes and container orchestration.
  • Familiarity with observability tools such as Prometheus, Grafana, OpenTelemetry, or Datadog.
  • Experience building Internal Developer Platforms (IDPs) or reusable frameworks for engineering teams.
  • Exposure to ML infrastructure or data engineering workflows.
  • Experience working in compliance-heavy environments (e.g., SOC2, HIPAA).


Read more
Poshmark


3 candid answers
1 recruiter
Posted by Eman Khan
Chennai
4 - 8 yrs
₹15L - ₹30L / yr
Kubernetes, Amazon Web Services (AWS), Terraform, Reliability engineering, DevOps

About Poshmark

Poshmark is a leading fashion resale marketplace powered by a vibrant, highly engaged community of buyers and sellers and real-time social experiences. Designed to make online selling fun, more social and easier than ever, Poshmark empowers its sellers to turn their closet into a thriving business and share their style with the world. Since its founding in 2011, Poshmark has grown its community to over 130 million users and generated over $10 billion in GMV, helping sellers realize billions in earnings, delighting buyers with deals and one-of-a-kind items, and building a more sustainable future for fashion. For more information, please visit www.poshmark.com, and for company news, visit newsroom.poshmark.com.


We’re looking for an experienced Site Reliability Engineer to fill the mission-critical role of ensuring that our complex, web-scale systems are healthy, monitored, automated, and designed to scale. You will use your background as an operations generalist to work closely with our development teams from the early stages of design all the way through identifying and resolving production issues. The ideal candidate will be passionate about an operations role that involves deep knowledge of both the application and the product, and will also believe that automation is a key component to operating large-scale systems.


6-Month Accomplishments

  • Familiarize yourself with the Poshmark tech stack and functional requirements.
  • Get comfortable with the automation tools/frameworks used within the CloudOps organization and the deployment processes associated with them.
  • Gain in-depth knowledge of the relevant product functionality and the infrastructure required for it.
  • Start contributing by working on small to medium-scale projects.
  • Understand and follow the on-call rotation as a secondary to get familiar with the on-call process.


12+ Month Accomplishments

  • Execute projects related to comms functionality independently, with little guidance from the lead.
  • Create meaningful alerts and dashboards for the various sub-systems involved in the targeted infrastructure.
  • Identify gaps in the infrastructure and suggest improvements or work on them.
  • Join the on-call rotation.


Responsibilities

  • Serve as a primary point of responsibility for the overall health, performance, and capacity of one or more of our Internet-facing services.
  • Gain deep knowledge of our complex applications.
  • Assist in the roll-out and deployment of new product features and installations to facilitate our rapid iteration and constant growth.
  • Develop tools to improve our ability to rapidly deploy and effectively monitor custom applications in a large-scale UNIX environment.
  • Work closely with development teams to ensure that platforms are designed with "operability" in mind.
  • Function well in a fast-paced, rapidly-changing environment.
  • Participate in a 24x7 on-call rotation


Desired Skills

  • 4+ years of experience in a Systems Engineering/Site Reliability Operations role is required, ideally at a startup or fast-growing company.
  • 4+ years in a UNIX-based, large-scale web operations role.
  • 4+ years of experience providing 24/7 support for large-scale production environments.
  • Battle-proven, real-life experience running a large-scale production operation.
  • Experience working on cloud-based infrastructure, e.g., AWS, GCP, or Azure.
  • Hands-on experience with continuous integration tools such as Jenkins, configuration management with Ansible, and systems monitoring and alerting with tools such as Nagios, New Relic, and Graphite.
  • Experience scripting/coding.
  • Ability to use a wide variety of open source technologies and tools.


Technologies we use:

  • Ruby, JavaScript, Node.js, Tomcat, Nginx, HAProxy
  • MongoDB, RabbitMQ, Redis, Elasticsearch
  • Amazon Web Services (EC2, RDS, CloudFront, S3, etc.)
  • Terraform, Packer, Jenkins, Datadog, Kubernetes, Docker, Ansible and other DevOps tools.
Read more
CyberWarFare Labs
Posted by Yash Bharadwaj
Bengaluru (Bangalore)
1 - 3 yrs
₹4L - ₹6L / yr
Amazon Web Services (AWS), Microsoft Windows Azure, Google Cloud Platform (GCP), Docker, CI/CD, +4 more


Job Overview:

We are looking for a full-time Infrastructure & DevOps Engineer to support and enhance our cloud, server, and network operations. The role involves managing virtualization platforms, container environments, automation tools, and CI/CD workflows while ensuring smooth, secure, and reliable infrastructure performance. The ideal candidate should be proactive, technically strong, and capable of working collaboratively across teams.


Qualifications and Requirements

  • Bachelor’s/Master’s degree in Computer Science, Engineering, or related field (B.E/B.Tech/BCA/MCA/M.Tech).
  • Strong understanding of cloud platforms (AWS, Azure, GCP), including core services and IT infrastructure concepts.
  • Hands-on experience with virtualization tools and concepts, including vCenter, hypervisors, nested virtualization, and bare-metal servers.
  • Practical knowledge of Linux and Windows servers, including cron jobs and essential Linux commands.
  • Experience working with Docker, Kubernetes, and CI/CD pipelines.
  • Strong understanding of Terraform and Ansible for infrastructure automation.
  • Scripting proficiency in Python and Bash (PowerShell optional).
  • Networking fundamentals (IP, routing, subnetting, LAN/WAN/WLAN).
  • Experience with firewalls, basic security concepts, and tools like pfSense.
  • Familiarity with Git/GitHub for version control and team collaboration.
  • Ability to perform API testing using cURL and Postman.
  • Strong understanding of the application deployment lifecycle and basic application deployment processes.
  • Good problem-solving, analytical thinking, and documentation skills.


Roles and Responsibility

  • Manage and maintain Linux/Windows servers, virtualization environments, and cloud infrastructure across AWS/Azure/GCP.
  • Use Terraform and Ansible to provision, automate, and manage infrastructure components.
  • Support application deployment lifecycle—from build and testing to release and rollout.
  • Deploy and maintain Kubernetes clusters and containerized workloads using Docker.
  • Develop, enhance, and troubleshoot CI/CD pipelines and integrate DevSecOps practices.
  • Write automation scripts using Python/Bash to optimize recurring tasks.
  • Conduct API testing using curl and Postman to validate integrations and service functionality (a short Python sketch follows this list).
  • Configure and monitor firewalls including pfSense for secure access control.
  • Troubleshoot network, server, and application issues using tools like Wireshark, ping, traceroute, and SNMP.
  • Maintain Git/GitHub repos, manage branching strategies, and participate in code reviews.
  • Prepare clear, detailed documentation including infrastructure diagrams, workflows, SOPs, and configuration records.
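
For illustration, a minimal scripted API check that could complement manual testing with cURL or Postman. It assumes the `requests` package; the endpoint and expected fields are placeholders.

```python
"""Illustrative sketch of a scripted API check alongside cURL/Postman testing."""
import requests

BASE_URL = "https://api.example.com"  # hypothetical service

def check_status_endpoint() -> None:
    resp = requests.get(f"{BASE_URL}/v1/status", timeout=10)
    assert resp.status_code == 200, f"unexpected HTTP status: {resp.status_code}"
    body = resp.json()
    assert body.get("status") == "ok", f"unexpected body: {body}"

if __name__ == "__main__":
    check_status_endpoint()
    print("API status check passed")
```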


Read more
E-Commerce Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹30L - ₹50L / yr
Security Information and Event Management (SIEM)
Information security governance
ISO/IEC 27001:2005
Systems Development Life Cycle (SDLC)
Software Development
+67 more

SENIOR INFORMATION SECURITY ENGINEER (DEVSECOPS)

Key Skills: Software Development Life Cycle (SDLC), CI/CD

About Company: Consumer Internet / E-Commerce

Company Size: Mid-Sized

Experience Required: 6 - 10 years

Working Days: 5 days/week

Office Location: Bengaluru [Karnataka]


Review Criteria:

Mandatory:

  • Strong DevSecOps profile
  • Must have 5+ years of hands-on experience in Information Security, with a primary focus on cloud security across AWS, Azure, and GCP environments.
  • Must have strong practical experience working with Cloud Security Posture Management (CSPM) tools such as Prisma Cloud, Wiz, or Orca, along with SIEM/IDS/IPS platforms.
  • Must have proven experience in securing Kubernetes and containerized environments, including image security, runtime protection, RBAC, and network policies.
  • Must have hands-on experience integrating security within CI/CD pipelines using tools such as Snyk, GitHub Advanced Security, or equivalent security scanning solutions.
  • Must have a solid understanding of core security domains, including network security, encryption, identity and access management, key management, and security governance, as well as cloud-native security services like GuardDuty and Azure Security Center.
  • Must have practical experience with Application Security Testing tools including SAST, DAST, and SCA in real production environments
  • Must have hands-on experience with security monitoring, incident response, alert investigation, root-cause analysis (RCA), and managing VAPT / penetration testing activities
  • Must have experience securing infrastructure-as-code and cloud deployments using Terraform, CloudFormation, ARM, Docker, and Kubernetes
  • Experience in B2B SaaS product companies.
  • Must have working knowledge of globally recognized security frameworks and standards such as ISO 27001, NIST, and CIS, with exposure to SOC2, GDPR, or HIPAA compliance environments.


Preferred:

  • Experience with DevSecOps automation, security-as-code, and policy-as-code implementations
  • Exposure to threat intelligence platforms, cloud security monitoring, and proactive threat detection methodologies, including EDR / DLP or vulnerability management tools
  • Must demonstrate strong ownership mindset, proactive security-first thinking, and ability to communicate risks in clear business language


Roles & Responsibilities:

We are looking for a Senior Information Security Engineer who can help protect our cloud infrastructure, applications, and data while enabling teams to move fast and build securely.


This role sits deep within our engineering ecosystem. You’ll embed security into how we design, build, deploy, and operate systems—working closely with Cloud, Platform, and Application Engineering teams. You’ll balance proactive security design with hands-on incident response, and help shape a strong, security-first culture across the organization.


If you enjoy solving real-world security problems, working close to systems and code, and influencing how teams build securely at scale, this role is for you.


What You’ll Do-

Cloud & Infrastructure Security:

  • Design, implement, and operate cloud-native security controls across AWS, Azure, GCP, and Oracle.
  • Strengthen IAM, network security, and cloud posture using services like GuardDuty, Azure Security Center, and others (a short posture-check sketch follows this list).
  • Partner with platform teams to secure VPCs, security groups, and cloud access patterns.
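
As a purely illustrative sketch of a small cloud-posture check, the snippet below lists S3 buckets that lack a full public access block. It assumes boto3 and AWS credentials permitting s3:ListAllMyBuckets and s3:GetBucketPublicAccessBlock; nothing here is specific to this employer's environment.

```python
"""Illustrative posture check: list S3 buckets that lack a full public access block."""
import boto3
from botocore.exceptions import ClientError

def buckets_without_public_access_block() -> list:
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            conf = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            if not all(conf.values()):          # any of the four flags disabled
                flagged.append(name)
        except ClientError as exc:
            if exc.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                flagged.append(name)            # no block configured at all
            else:
                raise
    return flagged

if __name__ == "__main__":
    for name in buckets_without_public_access_block():
        print(f"bucket without a full public access block: {name}")
```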


Application & DevSecOps Security:

  • Embed security into the SDLC through threat modeling, secure code reviews, and security-by-design practices.
  • Integrate SAST, DAST, and SCA tools into CI/CD pipelines.
  • Secure infrastructure-as-code and containerized workloads using Terraform, CloudFormation, ARM, Docker, and Kubernetes.


Security Monitoring & Incident Response:

  • Monitor security alerts and investigate potential threats across cloud and application layers.
  • Lead or support incident response efforts, root-cause analysis, and corrective actions.
  • Plan and execute VAPT and penetration testing engagements (internal and external), track remediation, and validate fixes.
  • Conduct red teaming activities and tabletop exercises to test detection, response readiness, and cross-team coordination.
  • Continuously improve detection, response, and testing maturity.


Security Tools & Platforms:

  • Manage and optimize security tooling including firewalls, SIEM, EDR, DLP, IDS/IPS, CSPM, and vulnerability management platforms.
  • Ensure tools are well-integrated, actionable, and aligned with operational needs.


Compliance, Governance & Awareness:

  • Support compliance with industry standards and frameworks such as SOC2, HIPAA, ISO 27001, NIST, CIS, and GDPR.
  • Promote secure engineering practices through training, documentation, and ongoing awareness programs.
  • Act as a trusted security advisor to engineering and product teams.


Continuous Improvement:

  • Stay ahead of emerging threats, cloud vulnerabilities, and evolving security best practices.
  • Continuously raise the bar on the company's security posture through automation and process improvement.


Endpoint Security (Secondary Scope):

  • Provide guidance on endpoint security tooling such as SentinelOne and Microsoft Defender when required.


Ideal Candidate:

  • Strong hands-on experience in cloud security across AWS and Azure.
  • Practical exposure to CSPM tools (e.g., Prisma Cloud, Wiz, Orca) and SIEM / IDS / IPS platforms.
  • Experience securing containerized and Kubernetes-based environments.
  • Familiarity with CI/CD security integrations (e.g., Snyk, GitHub Advanced Security, or similar).
  • Solid understanding of network security, encryption, identity, and access management.
  • Experience with application security testing tools (SAST, DAST, SCA).
  • Working knowledge of security frameworks and standards such as ISO 27001, NIST, and CIS.
  • Strong analytical, troubleshooting, and problem-solving skills.


Nice to Have:

  • Experience with DevSecOps automation and security-as-code practices.
  • Exposure to threat intelligence and cloud security monitoring solutions.
  • Familiarity with incident response frameworks and forensic analysis.
  • Security certifications such as CISSP, CISM, CCSP, or CompTIA Security+.


Perks, Benefits and Work Culture:

A wholesome opportunity in a fast-paced environment that will let you juggle multiple concepts while maintaining quality, share your ideas, and learn a great deal on the job. Work with a team of highly talented young professionals and enjoy the comprehensive benefits the company offers.

Read more
Tarento Group


3 candid answers
1 recruiter
Posted by Reshika Mendiratta
STOCKHOLM (Sweden), Bengaluru (Bangalore)
8yrs+
Best in industry
DevOps
Microsoft Windows Server
Microsoft IIS administration
Windows Azure
Powershell
+2 more

About Tarento:

Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions.

 

We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you’ll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.


Scope of Work:

  • Support the migration of applications from Windows Server 2008 to Windows Server 2019 or 2022 in an IaaS environment.
  • Migrate IIS websites, Windows Services, and related application components.
  • Assist with migration considerations for SQL Server connections, instances, and basic data-related dependencies.
  • Evaluate and migrate message queues (MSMQ or equivalent technologies).
  • Document the existing environment, migration steps, and post-migration state.
  • Work closely with DevOps, development, and infrastructure teams throughout the project.


Required Skills & Experience:

  • Strong hands-on experience with IIS administration, configuration, and application migration.
  • Proven experience migrating workloads between Windows Server versions, ideally legacy to modern.
  • Knowledge of Windows Services setup, configuration, and troubleshooting.
  • Practical understanding of SQL Server (connection strings, service accounts, permissions).
  • Experience with message queues (IBM MQ, MSMQ, or similar) and their migration considerations.
  • Ability to identify migration risks, compatibility constraints, and remediation options.
  • Strong troubleshooting and analytical skills.
  • Familiarity with Microsoft technologies (.NET, etc.).
  • Networking and Active Directory-related knowledge.

Desirable / Nice-to-Have

  • Exposure to CI/CD tools, especially TeamCity and Octopus Deploy.
  • Familiarity with Azure services and related tools (Terraform, etc)
  • PowerShell scripting for automation or configuration tasks.
  • Understanding enterprise change management and documentation practices.
  • Security

Soft Skills

  • Clear written and verbal communication.
  • Ability to work independently while collaborating with cross-functional teams.
  • Strong attention to detail and a structured approach to execution.
  • Troubleshooting
  • Willingness to learn.


Location & Engagement Details

We are looking for a Senior DevOps Consultant for an onsite role in Stockholm (Sundbyberg office). This opportunity is open to candidates currently based in Bengaluru who are willing to relocate to Sweden for the assignment.

The role will start with an initial 6-month onsite engagement, with the possibility of extension based on project requirements and performance.

Read more
Service Co


Agency job
via Vikash Technologies by Rishika Teja
Mumbai, Navi Mumbai
5 - 9 yrs
₹10L - ₹17L / yr
Amazon Web Services (AWS), Kubernetes, IaC, Terraform, Python, +4 more

  • 5+ yrs of experience in Cloud/DevOps roles.
  • Strong hands-on experience with AWS architecture, operations & automation (70% focus).
  • Solid Kubernetes/EKS administration experience (30% focus).
  • IaC experience (Terraform preferred).
  • Scripting (Python/Bash).
  • CI/CD tools (Jenkins, GitLab, GitHub Actions).
  • Experience working with BFSI or Managed Service projects is mandatory.

Read more
Procedure


4 candid answers
3 recruiters
Posted by Adithya K
Remote only
5 - 10 yrs
₹40L - ₹60L / yr
Software Development, Amazon Web Services (AWS), Python, TypeScript, PostgreSQL, +3 more

Procedure is hiring for Drover.


This is not a DevOps/SRE/cloud-migration role — this is a hands-on backend engineering and architecture role where you build the platform powering our hardware at scale.


About Drover

Ranching is getting harder. Increased labor costs and a volatile climate are putting mounting pressure on ranchers to provide for a growing population. Drover is empowering ranchers to efficiently and sustainably feed the world by making it cheaper and easier to manage livestock, unlock productivity gains, and reduce carbon footprint with rotational grazing. Not only is this a $46B opportunity, you'll be working on a climate solution with the potential for real, meaningful impact.


We use patent-pending low-voltage electrical muscle stimulation (EMS) to steer and contain cows, replacing the need for physical fences or electric shock. We are building something that has never been done before, and we have hundreds of ranches on our waitlist.


Drover is founded by Callum Taylor (ex-Harvard), who comes from 5 generations of ranching, and Samuel Aubin, both of whom grew up in Australian ranching towns and have an intricate understanding of the problem space. We are well-funded and supported by Workshop Ventures, a VC firm with experience in building unicorn IoT companies.


We're looking to assemble a team of exceptional talent with a high eagerness to dive headfirst into understanding the challenges and opportunities within ranching.


About The Role

As our founding cloud engineer, you will be responsible for building and scaling the infrastructure that powers our IoT platform, connecting thousands of devices across ranches nationwide.


Because we are an early-stage startup, you will have high levels of ownership in what you build. You will play a pivotal part in architecting our cloud infrastructure, building robust APIs, and ensuring our systems can scale reliably. We are looking for someone who is excited about solving complex technical challenges at the intersection of IoT, agriculture, and cloud computing.


What You'll Do

  • Develop Drover IoT cloud architecture from the ground up (it’s a greenfield project)
  • Design and implement services to support wearable devices, mobile app, and backend API
  • Implement data processing and storage pipelines
  • Create and maintain Infrastructure-as-Code
  • Support the engineering team across all aspects of early-stage development -- after all, this is a startup


Requirements

  • 5+ years of experience developing cloud architecture on AWS
  • In-depth understanding of various AWS services, especially those related to IoT
  • Expertise in cloud-hosted, event-driven, serverless architectures
  • Expertise in programming languages suitable for AWS micro-services (e.g., TypeScript, Python)
  • Experience with networking and socket programming (a short Python sketch follows this list)
  • Experience with Kubernetes or similar orchestration platforms
  • Experience with Infrastructure-as-Code tools (e.g., Terraform, AWS CDK)
  • Familiarity with relational databases (PostgreSQL)
  • Familiarity with Continuous Integration and Continuous Deployment (CI/CD)
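
For illustration only, a minimal sketch of device-to-cloud socket programming: a heartbeat message sent over a raw TCP connection. The host, port, device ID, and payload fields are hypothetical and not drawn from Drover's actual protocol.

```python
"""Illustrative sketch of device-to-cloud socket programming: a heartbeat sent over TCP."""
import json
import socket
import time

INGEST_HOST = "ingest.example.com"  # hypothetical ingestion endpoint
INGEST_PORT = 9000
DEVICE_ID = "collar-0001"

def send_heartbeat() -> None:
    payload = json.dumps({"device": DEVICE_ID, "ts": int(time.time()), "battery": 87}) + "\n"
    with socket.create_connection((INGEST_HOST, INGEST_PORT), timeout=5) as sock:
        sock.sendall(payload.encode("utf-8"))

if __name__ == "__main__":
    send_heartbeat()
```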


Nice To Have

  • Bachelor’s or Master’s degree in Computer Science, Software Engineering, Electrical Engineering, or a related field


Read more
Service Co


Agency job
via Vikash Technologies by Rishika Teja
Mumbai, Navi Mumbai
7 - 12 yrs
₹15L - ₹25L / yr
Amazon Web Services (AWS), Windows Azure, Kubernetes, Docker, Terraform, +5 more

Hiring for SRE Lead


Exp: 7 - 12 yrs

Work Location: Mumbai (Kurla West)

WFO


Skills :

Proficient in cloud platforms (AWS, Azure, or GCP), containerization (Kubernetes/Docker), and Infrastructure as Code (Terraform, Ansible, or Puppet). 


Coding/Scripting: Strong programming or scripting skills in at least one language (e.g., Python, Go, Java) for automation and tooling development.


 System Knowledge: Deep understanding of Linux/Unix fundamentals, networking concepts, and distributed systems.

Read more
AI-First Company


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 17 yrs
₹30L - ₹45L / yr
Data engineering
Data architecture
SQL
Data modeling
GCS
+47 more

ROLES AND RESPONSIBILITIES:

You will be responsible for architecting, implementing, and optimizing Dremio-based data Lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.


  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.


IDEAL CANDIDATE:

  • Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.


PREFERRED:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
  • Exposure to Snowflake, Databricks, or BigQuery environments.
  • Experience in high-tech, manufacturing, or enterprise data modernization programs.
Read more
Capital Squared
Remote only
5 - 10 yrs
₹25L - ₹55L / yr
MLOps, DevOps, Google Cloud Platform (GCP), CI/CD, PostgreSQL, +4 more

Role: Full-Time, Long-Term. Required: Docker, GCP, CI/CD. Preferred: Experience with ML pipelines.


OVERVIEW

We are seeking a DevOps engineer to join as a core member of our technical team. This is a long-term position for someone who wants to own infrastructure and deployment for a production machine learning system. You will ensure our prediction pipeline runs reliably, deploys smoothly, and scales as needed.


The ideal candidate thinks about failure modes obsessively, automates everything possible, and builds systems that run without constant attention.


CORE TECHNICAL REQUIREMENTS

Docker (Required): Deep experience with containerization. Efficient Dockerfiles, layer caching, multi-stage builds, debugging container issues. Experience with Docker Compose for local development.


Google Cloud Platform (Required): Strong GCP experience: Cloud Run for serverless containers, Compute Engine for VMs, Artifact Registry for images, Cloud Storage, IAM. You can navigate the console but prefer scripting everything.


CI/CD (Required): Build and maintain deployment pipelines. GitHub Actions required. You automate testing, building, pushing, and deploying. You understand the difference between continuous integration and continuous deployment.


Linux Administration (Required): Comfortable on the command line. SSH, diagnose problems, manage services, read logs, fix things. Bash scripting is second nature.


PostgreSQL (Required): Database administration basics—backups, monitoring, connection management, basic performance tuning. Not a DBA, but comfortable keeping a production database healthy.
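
For illustration of the kind of basic database health check described above, a minimal Python probe that connects and runs a trivial query. It assumes the `psycopg2` package; the DSN is a placeholder read from the environment.

```python
"""Illustrative PostgreSQL health probe (connect + trivial query)."""
import os
import sys
import psycopg2

DSN = os.environ.get("DATABASE_URL", "postgresql://app:secret@localhost:5432/app")  # placeholder

def db_is_healthy(dsn: str) -> bool:
    try:
        with psycopg2.connect(dsn, connect_timeout=5) as conn:
            with conn.cursor() as cur:
                cur.execute("SELECT 1")
                return cur.fetchone() == (1,)
    except psycopg2.Error as exc:
        print(f"database check failed: {exc}")
        return False

if __name__ == "__main__":
    sys.exit(0 if db_is_healthy(DSN) else 1)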


Infrastructure as Code (Preferred): Terraform, Pulumi, or similar. Infrastructure should be versioned, reviewed, and reproducible—not clicked together in a console.


WHAT YOU WILL OWN

Deployment Pipeline: Maintaining and improving deployment scripts and CI/CD workflows. Code moves from commit to production reliably with appropriate testing gates.


Cloud Run Services: Managing deployments for model fitting, data cleansing, and signal discovery services. Monitor health, optimize cold starts, handle scaling.


VM Infrastructure: PostgreSQL and Streamlit on GCP VMs. Instance management, updates, backups, security.


Container Registry: Managing images in GitHub Container Registry and Google Artifact Registry. Cleanup policies, versioning, access control.


Monitoring and Alerting: Building observability. Logging, metrics, health checks, alerting. Know when things break before users tell us.


Environment Management: Configuration across local and production. Secrets management. Environment parity where it matters.


WHAT SUCCESS LOOKS LIKE

Deployments are boring—no drama, no surprises. Systems recover automatically from transient failures. Engineers deploy with confidence. Infrastructure changes are versioned and reproducible. Costs are reasonable and resources scale appropriately.


ENGINEERING STANDARDS

Automation First: If you do something twice, automate it. Manual processes are bugs waiting to happen.


Documentation: Runbooks, architecture diagrams, deployment guides. The next person can understand and operate the system.


Security Mindset: Secrets never in code. Least-privilege access. You think about attack surfaces.


Reliability Focus: Design for failure. Backups are tested. Recovery procedures exist and work.


CURRENT ENVIRONMENT

GCP (Cloud Run, Compute Engine, Artifact Registry, Cloud Storage), Docker, Docker Compose, GitHub Actions, PostgreSQL 16, Bash deployment scripts with Python wrapper.
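
As a purely illustrative sketch of a "Bash deployment script with Python wrapper" pattern, the snippet below runs a deploy script with retries, backoff, and logging. The script path, target name, and retry policy are placeholders, not the actual deployment tooling.

```python
"""Illustrative Python wrapper around a Bash deploy script, with retries and logging."""
import logging
import subprocess
import sys
import time

DEPLOY_SCRIPT = "./scripts/deploy.sh"  # hypothetical path
MAX_ATTEMPTS = 3

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

def deploy(target: str) -> bool:
    for attempt in range(1, MAX_ATTEMPTS + 1):
        logging.info("deploy attempt %d/%d for %s", attempt, MAX_ATTEMPTS, target)
        result = subprocess.run([DEPLOY_SCRIPT, target], capture_output=True, text=True)
        if result.returncode == 0:
            logging.info("deploy succeeded")
            return True
        logging.warning("deploy failed (rc=%d): %s", result.returncode, result.stderr.strip())
        time.sleep(2 ** attempt)  # simple exponential backoff between attempts
    return False

if __name__ == "__main__":
    target_env = sys.argv[1] if len(sys.argv) > 1 else "production"
    sys.exit(0 if deploy(target_env) else 1)
```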


WHAT WE ARE LOOKING FOR

Ownership Mentality: You see a problem, you fix it. You do not wait for assignment.


Calm Under Pressure: When production breaks, you diagnose methodically.


Communication: You explain infrastructure decisions to non-infrastructure people. You document what you build.


Long-Term Thinking: You build systems maintained for years, not quick fixes creating tech debt.


EDUCATION

University degree in Computer Science, Engineering, or related field preferred. Equivalent demonstrated expertise also considered.


TO APPLY

Include: (1) CV/resume, (2) Brief description of infrastructure you built or maintained, (3) Links to relevant work if available, (4) Availability and timezone.

Read more
iMerit
Bengaluru (Bangalore)
6 - 9 yrs
₹10L - ₹15L / yr
DevOps, Terraform, Apache Kafka, Python, Go (Golang), +4 more

Exp: 7- 10 Years

CTC: up to 35 LPA


Skills:

  • 6–10 years DevOps / SRE / Cloud Infrastructure experience
  • Expert-level Kubernetes (networking, security, scaling, controllers)
  • Terraform Infrastructure-as-Code mastery
  • Hands-on Kafka production experience
  • AWS cloud architecture and networking expertise
  • Strong scripting in Python, Go, or Bash
  • GitOps and CI/CD tooling experience


Key Responsibilities:

  • Design highly available, secure cloud infrastructure supporting distributed microservices at scale
  • Lead multi-cluster Kubernetes strategy optimized for GPU and multi-tenant workloads
  • Implement Infrastructure-as-Code using Terraform across full infrastructure lifecycle
  • Optimize Kafka-based data pipelines for throughput, fault tolerance, and low latency (a short producer sketch follows this list)
  • Deliver zero-downtime CI/CD pipelines using GitOps-driven deployment models
  • Establish SRE practices with SLOs, p95 and p99 monitoring, and FinOps discipline
  • Ensure production-ready disaster recovery and business continuity testing
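
For illustration, a minimal Kafka producer in Python. The broker address, topic, and payload are placeholders, and the kafka-python client is an assumed dependency rather than the team's actual tooling.

```python
"""Illustrative sketch: publish a JSON event to Kafka with the kafka-python client."""
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                      # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    acks="all",     # wait for the full in-sync replica set, trading latency for durability
    retries=3,
)

producer.send("pipeline-events", {"job_id": "demo-123", "status": "completed"})
producer.flush()    # block until buffered records are delivered
```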



If interested, kindly share your updated resume at 82008 31681.

Read more
Global digital transformation solutions provider.


Agency job
via Peak Hire Solutions by Dhara Thakkar
Chennai, Kochi (Cochin), Pune, Trivandrum, Thiruvananthapuram
5 - 7 yrs
₹10L - ₹25L / yr
Google Cloud Platform (GCP), Jenkins, CI/CD, Docker, Kubernetes, +15 more

Job Description

We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.


Key Responsibilities:

  • Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
  • Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
  • Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
  • Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
  • Ensure compliance with security best practices and organizational policies across GCP environments.
  • Document processes, configurations, and architectural decisions to maintain operational transparency.
  • Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.


Mandatory Skills:

  • Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
  • Jenkins – Expertise in Declarative Pipeline creation and optimization.
  • CI/CD – Strong understanding of automated build, test, and deployment workflows.
  • Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
  • Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
  • Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks.


Preferred Skills:

  • Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
  • Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
  • Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
  • GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.


Skills

GCP, Jenkins, CI/CD, AWS



 

******

Notice period: 0 to 15 days only

Location – Pune, Trivandrum, Kochi, Chennai

Read more
Media and Entertainment Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
5 - 7 yrs
₹15L - ₹25L / yr
DevOps, Amazon Web Services (AWS), CI/CD, Infrastructure, Scripting, +28 more

Required Skills: Advanced AWS Infrastructure Expertise, CI/CD Pipeline Automation, Monitoring, Observability & Incident Management, Security, Networking & Risk Management, Infrastructure as Code & Scripting


Criteria:

  • 5+ years of DevOps/SRE experience in cloud-native, product-based companies (B2C scale preferred)
  • Strong hands-on AWS expertise across core and advanced services (EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, VPC, IAM, ELB/ALB, Route53)
  • Proven experience designing high-availability, fault-tolerant cloud architectures for large-scale traffic
  • Strong experience building & maintaining CI/CD pipelines (Jenkins mandatory; GitHub Actions/GitLab CI a plus)
  • Prior experience running production-grade microservices deployments and automated rollout strategies (Blue/Green, Canary)
  • Hands-on experience with monitoring & observability tools (Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.)
  • Solid hands-on experience with MongoDB in production, including performance tuning, indexing & replication
  • Strong scripting skills (Bash, Shell, Python) for automation
  • Hands-on experience with IaC (Terraform, CloudFormation, or Ansible)
  • Deep understanding of networking fundamentals (VPC, subnets, routing, NAT, security groups)
  • Strong experience in incident management, root cause analysis & production firefighting

 

Description

Role Overview

The company is seeking an experienced Senior DevOps Engineer to design, build, and optimize cloud infrastructure on AWS, automate CI/CD pipelines, implement monitoring and security frameworks, and proactively identify scalability challenges. This role requires someone who has hands-on experience running infrastructure at B2C product scale, ideally in media/OTT or high-traffic applications.

 

 Key Responsibilities

1. Cloud Infrastructure — AWS (Primary Focus)

  • Architect, deploy, and manage scalable infrastructure using AWS services such as EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, ELB/ALB, VPC, IAM, Route53, etc.
  • Optimize cloud cost, resource utilization, and performance across environments.
  • Design high-availability, fault-tolerant systems for streaming workloads.

 

2. CI/CD Automation

  • Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
  • Automate deployments for microservices, mobile apps, and backend APIs.
  • Implement blue/green and canary deployments for seamless production rollouts.

 

3. Observability & Monitoring

  • Implement logging, metrics, and alerting using tools like Grafana, Prometheus, ELK, CloudWatch, New Relic, etc. (a short CloudWatch metric sketch follows this list).
  • Perform proactive performance analysis to minimize downtime and bottlenecks.
  • Set up dashboards for real-time visibility into system health and user traffic spikes.
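
For illustration, a minimal sketch of publishing a custom application metric to CloudWatch with boto3. The region, namespace, metric name, and dimensions are placeholders; AWS credentials are assumed to be configured.

```python
"""Illustrative sketch: publish a custom application metric to CloudWatch with boto3."""
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")  # hypothetical region

def put_active_streams_metric(count: int) -> None:
    cloudwatch.put_metric_data(
        Namespace="OTT/Playback",                                   # hypothetical namespace
        MetricData=[{
            "MetricName": "ActiveStreams",
            "Dimensions": [{"Name": "Environment", "Value": "production"}],
            "Value": count,
            "Unit": "Count",
        }],
    )

if __name__ == "__main__":
    put_active_streams_metric(1250)
```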

 

4. Security, Compliance & Risk Highlighting

  • Conduct frequent risk assessments and identify vulnerabilities in:
    o Cloud architecture
    o Access policies (IAM)
    o Secrets & key management
    o Data flows & network exposure
  • Implement security best practices including VPC isolation, WAF rules, firewall policies, and SSL/TLS management.

 

5. Scalability & Reliability Engineering

  • Analyze traffic patterns for OTT-specific load variations (weekends, new releases, peak hours).
  • Identify scalability gaps and propose solutions across:
    o Microservices
    o Caching layers
    o CDN distribution (CloudFront)
    o Database workloads
  • Perform capacity planning and load testing to ensure readiness for 10x traffic growth.

 

6. Database & Storage Support

  • Administer and optimize MongoDB for high-read/low-latency use cases.
  • Design backup, recovery, and data replication strategies.
  • Work closely with backend teams to tune query performance and indexing (a short PyMongo sketch follows this list).
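
For illustration, a minimal PyMongo sketch that creates a compound index for a high-read access pattern and inspects the resulting query plan. The connection string, database, collection, and fields are placeholders, not the product's actual schema.

```python
"""Illustrative sketch: create a compound index and inspect a query plan with PyMongo."""
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://localhost:27017")   # hypothetical connection string
watch_history = client["ott"]["watch_history"]

# Compound index to serve "latest items for a user" reads without a collection scan.
watch_history.create_index([("user_id", ASCENDING), ("watched_at", DESCENDING)])

plan = watch_history.find({"user_id": "u-123"}).sort("watched_at", DESCENDING).explain()
print(plan["queryPlanner"]["winningPlan"])
```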

 

7. Automation & Infrastructure as Code

  • Implement IaC using Terraform, CloudFormation, or Ansible.
  • Automate repetitive infrastructure tasks to ensure consistency across environments.

 

Required Skills & Experience

Technical Must-Haves

  • 5+ years of DevOps/SRE experience in cloud-native, product-based companies.
  • Strong hands-on experience with AWS (core and advanced services).
  • Expertise in Jenkins CI/CD pipelines.
  • Solid background working with MongoDB in production environments.
  • Good understanding of networking: VPCs, subnets, security groups, NAT, routing.
  • Strong scripting experience (Bash, Python, Shell).
  • Experience handling risk identification, root cause analysis, and incident management.

 

Nice to Have

  • Experience with OTT, video streaming, media, or any content-heavy product environments.
  • Familiarity with containers (Docker), orchestration (Kubernetes/EKS), and service mesh.
  • Understanding of CDN, caching, and streaming pipelines.

 

Personality & Mindset

  • Strong sense of ownership and urgency—DevOps is mission critical at OTT scale.
  • Proactive problem solver with ability to think about long-term scalability.
  • Comfortable working with cross-functional engineering teams.

 

Why Join company?

• Build and operate infrastructure powering millions of monthly users.

• Opportunity to shape DevOps culture and cloud architecture from the ground up.

• High-impact role in a fast-scaling Indian OTT product.

Read more
AdTech Industry

AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹50L - ₹75L / yr
Ansible
Terraform
skill iconAmazon Web Services (AWS)
Platform as a Service (PaaS)
CI/CD
+30 more

ROLE & RESPONSIBILITIES:

We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.


KEY RESPONSIBILITIES:

1.     Cloud Security (AWS)-

  • Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
  • Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
  • Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
  • Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
  • Ensure encryption of data at rest/in transit across all cloud services.

 

2.     DevOps Security (IaC, CI/CD, Kubernetes, Linux)-

Infrastructure as Code & Automation Security:

  • Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
  • Enforce misconfiguration scanning and automated remediation.
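
Alongside scanners such as Checkov or tfsec, teams often add small custom gates. As a hedged sketch, the script below parses terraform show -json output and flags security-group ingress rules open to 0.0.0.0/0; the plan-file path and usage are assumptions.

# Flag AWS security-group ingress rules open to the world in a Terraform plan.
# Assumed usage: terraform plan -out=tfplan && terraform show -json tfplan > plan.json
import json
import sys

def open_ingress_findings(plan_path: str) -> list[str]:
    with open(plan_path) as f:
        plan = json.load(f)
    findings = []
    for res in plan.get("resource_changes", []):
        if res.get("type") != "aws_security_group_rule":
            continue
        after = (res.get("change") or {}).get("after") or {}
        if after.get("type") == "ingress" and "0.0.0.0/0" in (after.get("cidr_blocks") or []):
            findings.append(res.get("address", "<unknown>"))
    return findings

if __name__ == "__main__":
    issues = open_ingress_findings(sys.argv[1] if len(sys.argv) > 1 else "plan.json")
    for address in issues:
        print(f"FAIL: {address} allows ingress from 0.0.0.0/0")
    sys.exit(1 if issues else 0)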

CI/CD Security:

  • Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
  • Implement secure build, artifact signing, and deployment workflows.

Containers & Kubernetes:

  • Harden Docker images, private registries, runtime policies.
  • Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
  • Apply CIS Benchmarks for Kubernetes and Linux.
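
As a hedged companion to these hardening items, the read-only audit below uses the official Kubernetes Python client to list containers that run privileged or without runAsNonRoot; it assumes kubeconfig access and complements, rather than replaces, admission policies.

# Audit pods for privileged containers or missing runAsNonRoot settings.
# Assumes a working kubeconfig; read-only, so safe to run against any cluster.
from kubernetes import client, config

def audit_pod_security() -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces().items:
        for container in pod.spec.containers:
            sc = container.security_context
            privileged = bool(sc and sc.privileged)
            run_as_non_root = bool(sc and sc.run_as_non_root)
            if privileged or not run_as_non_root:
                print(
                    f"{pod.metadata.namespace}/{pod.metadata.name} "
                    f"container={container.name} privileged={privileged} "
                    f"runAsNonRoot={run_as_non_root}"
                )

if __name__ == "__main__":
    audit_pod_security()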

Monitoring & Reliability:

  • Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
  • Ensure audit logging across cloud/platform layers.


3.     MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-

Pipeline & Workflow Security:

  • Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
  • Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.

ML Platform Security:

  • Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
  • Control model access, artifact protection, model registry security, and ML metadata integrity.

Data Security:

  • Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
  • Enforce data versioning security, lineage tracking, PII protection, and access governance.

ML Observability:

  • Implement drift detection (data drift/model drift), feature monitoring, audit logging.
  • Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
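
Drift detection can start with something as simple as a Population Stability Index (PSI). The hedged sketch below compares a training baseline against recent production values for one feature, using the common but tunable rule of thumb that PSI above roughly 0.2 warrants investigation.

# Population Stability Index (PSI) between a baseline and a current sample.
# Bin edges come from the baseline; epsilon avoids division by zero.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_counts, _ = np.histogram(baseline, bins=edges)
    curr_counts, _ = np.histogram(current, bins=edges)
    eps = 1e-6
    base_pct = base_counts / max(base_counts.sum(), 1) + eps
    curr_pct = curr_counts / max(curr_counts.sum(), 1) + eps
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    train = rng.normal(0.0, 1.0, 10_000)   # baseline feature distribution
    prod = rng.normal(0.3, 1.1, 10_000)    # shifted production distribution
    score = psi(train, prod)
    print(f"PSI = {score:.3f} -> {'investigate drift' if score > 0.2 else 'stable'}")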


4.     Network & Endpoint Security-

  • Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
  • Conduct vulnerability assessments, penetration test coordination, and network segmentation.
  • Secure remote workforce connectivity and internal office networks.


5.     Threat Detection, Incident Response & Compliance-

  • Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
  • Build security alerts, automated threat detection, and incident workflows.
  • Lead incident containment, forensics, RCA, and remediation.
  • Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
  • Maintain security policies, procedures, RRPs (Runbooks), and audits.


IDEAL CANDIDATE:

  • 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
  • Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
  • Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
  • Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
  • Strong Linux security (CIS hardening, auditing, intrusion detection).
  • Proficiency in Python, Bash, and automation/scripting.
  • Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
  • Understanding of microservices, API security, serverless security.
  • Strong understanding of vulnerability management, penetration testing practices, and remediation plans.


EDUCATION:

  • Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
  • Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
Read more
Media and Entertainment Industry

Media and Entertainment Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
4 - 8 yrs
₹20L - ₹45L / yr
TypeScript
skill iconMongoDB
Microservices
MVC Framework
Google Cloud Platform (GCP)
+14 more

Required Skills: TypeScript, MVC, Cloud experience (Azure, AWS, etc.), MongoDB, Express.js, Nest.js

 

Criteria:

Need candidates from Growing startups or Product based companies only

1. 4–8 years’ experience in backend engineering

2. Minimum 2+ years hands-on experience with:

  • TypeScript
  • Express.js / Nest.js

3. Strong experience with MongoDB (or MySQL / PostgreSQL / DynamoDB)

4. Strong understanding of system design & scalable architecture

5. Hands-on experience in:

  • Event-driven architecture / Domain-driven design
  • MVC / Microservices

6. Strong in automated testing (especially integration tests)

7. Experience with CI/CD pipelines (GitHub Actions or similar)

8. Experience managing production systems

9. Solid understanding of performance, reliability, observability

10. Cloud experience (AWS preferred; GCP/Azure acceptable)

11. Strong coding standards — Clean Code, code reviews, refactoring

 

Description 

About the opportunity

We are looking for an exceptional Senior Software Engineer to join our Backend team. This is a unique opportunity to join a fast-growing company where you will get to solve real customer and business problems, shape the future of a product built for Bharat and build the engineering culture of the team. You will have immense responsibility and autonomy to push the boundaries of engineering to deliver scalable and resilient systems.

As a Senior Software Engineer, you will be responsible for shipping innovative features at breakneck speed, designing the architecture, mentoring other engineers on the team and pushing for a high bar of engineering standards like code quality, automated testing, performance, CI/CD, etc. If you are someone who loves solving problems for customers, technology, the craft of software engineering, and the thrill of building startups, we would like to talk to you.

 

What you will be doing

  • Build and ship features in our Node.js (and now migrating to TypeScript) codebase that directly impact user experience and help move the top and bottom line of the business.
  • Collaborate closely with our product, design and data team to build innovative features to deliver a world class product to our customers. At company, product managers don’t “tell” what to build. In fact, we all collaborate on how to solve a problem for our customers and the business. Engineering plays a big part in it.
  • Design scalable platforms that empower our product and marketing teams to rapidly experiment.
  • Own the quality of our products by writing automated tests, reviewing code, making systems observable and resilient to failures.
  • Drive code quality and pay down architectural debt by continuous analysis of our codebases and systems, and continuous refactoring.
  • Architect our systems for faster iterations, releasability, scalability and high availability using practices like Domain Driven Design, Event Driven Architecture, Cloud Native Architecture and Observability.
  • Set the engineering culture with the rest of the team by defining how we should work as a team, set standards for quality, and improve the speed of engineering execution.

 

The role could be ideal for you if you

  • Experience of 4-8 years of working in backend engineering with at least 2 years of production experience in TypeScript, Express.js (or another popular framework like Nest.js) and MongoDB (or any popular database like MySQL, PostgreSQL, DynamoDB, etc.).
  • Well versed with one or more architectures and design patterns such as MVC, Domain Driven Design, CQRS, Event Driven Architecture, Cloud Native Architecture, etc.
  • Experienced in writing automated tests (especially integration tests) and Continuous Integration. At company, engineers own quality and hence, writing automated tests is crucial to the role.
  • Experience with managing production infrastructure using technologies like public cloud providers (AWS, GCP, Azure, etc.). Bonus: if you have experience in using Kubernetes.
  • Experience in observability techniques like code instrumentation for metrics, tracing and logging.
  • Care deeply about code quality, code reviews, software architecture (think about Object Oriented Programming, Clean Code, etc.), scalability and reliability. Bonus: if you have experience in this from your past roles.
  • Understand the importance of shipping fast in a startup environment and constantly try to find ingenious ways to achieve the same.
  • Collaborate well with everyone on the team. We communicate a lot and don’t hesitate to get quick feedback from other members on the team sooner than later.
  • Can take ownership of goals and deliver them with high accountability.

 

Don’t hesitate to try out new technologies. At company, nobody is limited to a role. Every engineer in our team is an expert in at least one technology but often ventures out into adjacent technologies like React.js, Flutter, Data Platforms, AWS and Kubernetes. If you are not excited by this, you will not like working at company. Bonus: if you have experience in adjacent technologies like AWS (or any public cloud provider), GitHub Actions (or CircleCI), Kubernetes, Infrastructure as Code (Terraform, Pulumi, etc.), etc.

 

 

Read more
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Noida
8 - 12 yrs
₹60L - ₹80L / yr
DevOps
cicd
skill iconAmazon Web Services (AWS)
Terraform
Ansible
+1 more

Strong DevSecOps / Cloud Security profile

Mandatory (Experience 1) – Must have 8+ years total experience in DevSecOps / Cloud Security / Platform Security roles securing AWS workloads and CI/CD systems.

Mandatory (Experience 2) – Must have strong hands-on experience securing AWS services (including but not limited to) KMS, WAF, Shield, CloudTrail, AWS Config, Security Hub, Inspector, Macie and IAM governance

Mandatory (Experience 3) – Must have hands-on expertise in Identity & Access Security including RBAC, IRSA, PSP/PSS, SCPs and IAM least-privilege enforcement

Mandatory (Experience 4) – Must have hands-on experience with security automation using Terraform and Ansible for configuration hardening and compliance

Mandatory (Experience 5) – Must have strong container & Kubernetes security experience including Docker image scanning, EKS runtime controls, network policies, and registry security

Mandatory (Experience 6) – Must have strong CI/CD pipeline security expertise including SAST, DAST, SCA, Jenkins Security, artifact integrity, secrets protection, and automated remediation

Mandatory (Experience 7) – Must have experience securing data & ML platforms including databases, data centers/on-prem environments, MWAA/Airflow, and sensitive ETL/ML workflows

Mandatory (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth

Read more
Arcitech
Arcitech HR Department
Posted by Arcitech HR Department
Navi Mumbai
5 - 7 yrs
₹12L - ₹16L / yr
skill iconAmazon Web Services (AWS)
CI/CD
Cyber Security
VAPT
Terraform
+1 more

Job Title: Senior DevOps Engineer (Cybersecurity & VAPT)


Location: Vashi (On-site)

Shift: 10:00 AM – 7:00 PM

Experience: 5+ years



Job Summary

Hiring a Senior DevOps Engineer with strong cloud, CI/CD, and automation skills and hands-on experience in Cybersecurity & VAPT to manage deployments, secure infrastructure, and support DevSecOps initiatives.


Key Responsibilities

Cloud & Infrastructure

Manage deployments on AWS/Azure

Maintain Linux servers & cloud environments

Ensure uptime, performance, and scalability

CI/CD & Automation

Build and optimize pipelines (Jenkins, GitHub Actions, GitLab CI/CD)

Automate tasks using Bash/Python

Implement IaC (Terraform/CloudFormation)

Containerization

Build and run Docker containers

Work with basic Kubernetes concepts

Cybersecurity & VAPT

Perform Vulnerability Assessment & Penetration Testing

Identify, track, and mitigate security vulnerabilities

Implement hardening and support DevSecOps practices

Assist with firewall/security policy management

Monitoring & Troubleshooting

Use ELK, Prometheus, Grafana, CloudWatch

Resolve cloud, deployment, and infra issues

Cross-Team Collaboration

Work with Dev, QA, and Security for secure releases

Maintain documentation and best practices


Required Skills


AWS/Azure, Linux, Docker

CI/CD tools: Jenkins, GitHub Actions, GitLab

Terraform / IaC

VAPT experience + understanding of OWASP, cloud security

Bash/Python scripting

Monitoring tools (ELK, Prometheus, Grafana)

Strong troubleshooting & communication

Read more
One2n

at One2n

3 candid answers
Reshika Mendiratta
Posted by Reshika Mendiratta
Pune
6yrs+
Upto ₹35L / yr (Varies)
skill iconKubernetes
Monitoring
skill iconAmazon Web Services (AWS)
JVM
skill iconDocker
+7 more

About the role:

We are looking for a Senior Site Reliability Engineer who understands the nuances of production systems. If you care about building and running reliable software systems in production, you'll like working at One2N.

You will primarily work with our startups and mid-size clients. We work on One-to-N kind problems (hence the name One2N), those where Proof of concept is done and the work revolves around scalability, maintainability, and reliability. In this role, you will be responsible for architecting and optimizing our observability and infrastructure to provide actionable insights into performance and reliability.


Responsibilities:

  • Conceptualise, think, and build platform engineering solutions with a self-serve model to enable product engineering teams.
  • Provide technical guidance and mentorship to junior engineers.
  • Participate in code reviews and contribute to best practices for development and operations.
  • Design and implement comprehensive monitoring, logging, and alerting solutions to collect, analyze, and visualize data (metrics, logs, traces) from diverse sources.
  • Develop custom monitoring metrics, dashboards, and reports to track key performance indicators (KPIs), detect anomalies, and troubleshoot issues proactively.
  • Improve Developer Experience (DX) to help engineers improve their productivity.
  • Design and implement CI/CD solutions to optimize velocity and shorten the delivery time.
  • Help SRE teams set up on-call rosters and coach them for effective on-call management.
  • Automate repetitive manual tasks across CI/CD pipelines, operations, and infrastructure as code (IaC) practices.
  • Stay up-to-date with emerging technologies and industry trends in cloud-native, observability, and platform engineering space.


Requirements:

  • 6-9 years of professional experience in DevOps practices or software engineering roles, with a focus on Kubernetes on an AWS platform.
  • Expertise in observability and telemetry tools and practices, including hands-on experience with some of Datadog, Honeycomb, ELK, Grafana, and Prometheus.
  • Working knowledge of programming using Golang, Python, Java, or equivalent.
  • Skilled in diagnosing and resolving Linux operating system issues.
  • Strong proficiency in scripting and automation to build monitoring and analytics solutions.
  • Solid understanding of microservices architecture, containerization (Docker, Kubernetes), and cloud-native technologies.
  • Experience with infrastructure as code (IaC) tools such as Terraform, Pulumi.
  • Excellent analytical and problem-solving skills, keen attention to detail, and a passion for continuous improvement.
  • Strong written, communication, and collaboration skills, with the ability to work effectively in a fast-paced, agile environment.
Read more
Watsoo Express
Gurgaon Udyog vihar phase 5
6 - 10 yrs
₹9L - ₹11L / yr
skill iconDocker
skill iconKubernetes
helm
cicd
skill iconGitHub
+9 more

Profile: Sr. DevOps Engineer

Location: Gurugram

Experience: 05+ Years

Notice Period: Immediate to 1 week

Company: Watsoo

Required Skills & Qualifications

  • Bachelor’s degree in Computer Science, Engineering, or related field.
  • 5+ years of proven hands-on DevOps experience.
  • Strong experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
  • Expertise in containerization & orchestration (Docker, Kubernetes, Helm).
  • Hands-on experience with cloud platforms (AWS, Azure, or GCP).
  • Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
  • Experience with monitoring and logging solutions (Prometheus, Grafana, ELK, CloudWatch, etc.).
  • Proficiency in scripting languages (Python, Bash, or Shell).
  • Knowledge of networking, security, and system administration.
  • Strong problem-solving skills and ability to work in fast-paced environments.
  • Troubleshoot production issues, perform root cause analysis, and implement preventive measures.

Advocate DevOps best practices, automation, and continuous improvement.

Read more
MyOperator - VoiceTree Technologies

at MyOperator - VoiceTree Technologies

1 video
3 recruiters
Vijay Muthu
Posted by Vijay Muthu
Remote only
3 - 5 yrs
₹12L - ₹20L / yr
skill iconPython
skill iconDjango
MySQL
skill iconPostgreSQL
Microservices architecture
+26 more

About Us:

MyOperator and Heyo are India’s leading conversational platforms, empowering 40,000+ businesses with Call and WhatsApp-based engagement. We’re a product-led SaaS company scaling rapidly, and we’re looking for a skilled Software Developer to help build the next generation of scalable backend systems.


Role Overview:

We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.


Key Responsibilities:

  • Develop robust backend services using Python, Django, and FastAPI
  • Design and maintain a scalable microservices architecture
  • Integrate LangChain/LLMs into AI-powered features
  • Write clean, tested, and maintainable code with pytest
  • Manage and optimize databases (MySQL/Postgres)
  • Deploy and monitor services on AWS
  • Collaborate across teams to define APIs, data flows, and system architecture
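
For illustration, a minimal FastAPI service of the kind described above, with a health check and a typed endpoint; the service and field names are hypothetical.

# Minimal FastAPI microservice sketch with a health check and a typed endpoint.
# Assumed run command: uvicorn app:app --reload
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="callflow-service")  # hypothetical service name

class CallSummaryRequest(BaseModel):
    call_id: str
    transcript: str

class CallSummaryResponse(BaseModel):
    call_id: str
    summary: str

@app.get("/healthz")
def healthz() -> dict:
    return {"status": "ok"}

@app.post("/summaries", response_model=CallSummaryResponse)
def create_summary(payload: CallSummaryRequest) -> CallSummaryResponse:
    # Placeholder logic; a real implementation might call an LLM chain here.
    summary = payload.transcript[:140]
    return CallSummaryResponse(call_id=payload.call_id, summary=summary)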

Must-Have Skills:

  • Python and Django
  • MySQL or Postgres
  • Microservices architecture
  • AWS (EC2, RDS, Lambda, etc.)
  • Unit testing using pytest
  • LangChain or Large Language Models (LLM)
  • Strong grasp of Data Structures & Algorithms
  • AI coding assistant tools (e.g., ChatGPT & Gemini)

Good to Have:

  • MongoDB or ElasticSearch
  • Go or PHP
  • FastAPI
  • React, Bootstrap (basic frontend support)
  • ETL pipelines, Jenkins, Terraform

Why Join Us?

  • 100% Remote role with a collaborative team
  • Work on AI-first, high-scale SaaS products
  • Drive real impact in a fast-growing tech company
  • Ownership and growth from day one


Read more
Tradelab Technologies
Aakanksha Yadav
Posted by Aakanksha Yadav
Bengaluru (Bangalore)
2 - 4 yrs
₹7L - ₹18L / yr
CI/CD
skill iconJenkins
gitlab
ArgoCD
skill iconAmazon Web Services (AWS)
+8 more

About Us:

Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry.


Key Responsibilities

CI/CD and Infrastructure Automation

  • Design, implement, and maintain CI/CD pipelines to support fast and reliable releases
  • Automate deployments using tools such as Terraform, Helm, and Kubernetes
  • Improve build and release processes to support high-performance and low-latency trading applications
  • Work efficiently with Linux/Unix environments

Cloud and On-Prem Infrastructure Management

  • Deploy, manage, and optimize infrastructure on AWS, GCP, and on-premises environments
  • Ensure system reliability, scalability, and high availability
  • Implement Infrastructure as Code (IaC) to standardize and streamline deployments

Performance Monitoring and Optimization

  • Monitor system performance and latency using Prometheus, Grafana, and ELK stack
  • Implement proactive alerting and fault detection to ensure system stability
  • Troubleshoot and optimize system components for maximum efficiency

Security and Compliance

  • Apply DevSecOps principles to ensure secure deployment and access management
  • Maintain compliance with financial industry regulations such as SEBI
  • Conduct vulnerability assessments and maintain logging and audit controls


Required Skills and Qualifications

  • 2+ years of experience as a DevOps Engineer in a software or trading environment
  • Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD)
  • Proficiency in cloud platforms such as AWS and GCP
  • Hands-on experience with Docker and Kubernetes
  • Experience with Terraform or CloudFormation for IaC
  • Strong Linux administration and networking fundamentals (TCP/IP, DNS, firewalls)
  • Familiarity with Prometheus, Grafana, and ELK stack
  • Proficiency in scripting using Python, Bash, or Go
  • Solid understanding of security best practices including IAM, encryption, and network policies


Good to Have (Optional)

  • Experience with low-latency trading infrastructure or real-time market data systems
  • Knowledge of high-frequency trading environments
  • Exposure to FIX protocol, FPGA, or network optimization techniques
  • Familiarity with Redis or Nginx for real-time data handling


Why Join Us?

  • Work with a team that expects and delivers excellence.
  • A culture where risk-taking is rewarded, and complacency is not.
  • Limitless opportunities for growth—if you can handle the pace.
  • A place where learning is currency, and outperformance is the only metric that matters.
  • The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.


This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.


Read more
Planview

at Planview

3 candid answers
3 recruiters
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
12 - 16 yrs
Upto ₹65L / yr (Varies)
Linux/Unix
Virtualization
Operating systems
Computer Networking
CI/CD
+9 more

Role Summary

Our CloudOps/DevOps teams are distributed across India, Canada, and Israel.

As a Manager, you will lead teams of Engineers and champion configuration management, cloud technologies, and continuous improvement. The role involves close collaboration with global leaders to ensure our applications, infrastructure, and processes remain scalable, secure, and supportable. You will work closely with Engineers across Dev, DevOps, and DBOps to design and implement solutions that improve customer value, reduce costs, and eliminate toil.


Key Responsibilities

  • Guide the professional development of Engineers and support teams in meeting business objectives
  • Collaborate with leaders in Israel on priorities, architecture, delivery, and product management
  • Build secure, scalable, and self-healing systems
  • Manage and optimize deployment pipelines
  • Triage and remediate production issues
  • Participate in on-call escalations


Key Qualifications

  • Bachelor’s in CS or equivalent experience
  • 3+ years managing Engineering teams
  • 8+ years as a Site Reliability or Platform Engineer
  • 5+ years administering Linux and Windows environments
  • 3+ years programming/scripting (Python, JavaScript, PowerShell)
  • Strong experience with OS internals, virtualization, storage, networking, and firewalls
  • Experience maintaining On-Prem (90%) and Cloud (10%) environments (AWS, GCP, Azure)
Read more
AdTech Industry

AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹30L - ₹40L / yr
DevOps
skill iconDocker
CI/CD
skill iconAmazon Web Services (AWS)
AWS CloudFormation
+22 more

ROLES AND RESPONSIBILITIES:

We are seeking a highly skilled Senior DevOps Engineer with 8+ years of hands-on experience in designing, automating, and optimizing cloud-native solutions on AWS. AWS and Linux expertise are mandatory. The ideal candidate will have strong experience across databases, automation, CI/CD, containers, and observability, with the ability to build and scale secure, reliable cloud environments.


KEY RESPONSIBILITIES:

Cloud & Infrastructure as Code (IaC)-

  • Architect and manage AWS environments ensuring scalability, security, and high availability.
  • Implement infrastructure automation using Terraform, CloudFormation, and Ansible.
  • Configure VPC Peering, Transit Gateway, and PrivateLink/Connect for advanced networking.


CI/CD & Automation:

  • Build and maintain CI/CD pipelines (Jenkins, GitHub, SonarQube, automated testing).
  • Automate deployments, provisioning, and monitoring across environments.


Containers & Orchestration:

  • Deploy and operate workloads on Docker and Kubernetes (EKS).
  • Implement IAM Roles for Service Accounts (IRSA) for secure pod-level access.
  • Optimize performance of containerized and microservices applications.


Monitoring & Reliability:

  • Implement observability with Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
  • Establish logging, alerting, and proactive monitoring for high availability.


Security & Compliance:

  • Apply AWS security best practices including IAM, IRSA, SSO, and role-based access control.
  • Manage WAF, Guard Duty, Inspector, and other AWS-native security tools.
  • Configure VPNs, firewalls, and secure access policies and AWS organizations.


Databases & Analytics:

  • Must have expertise in MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
  • Manage data reliability, performance tuning, and cloud-native integrations.
  • Experience with Apache Airflow and Spark.


IDEAL CANDIDATE:

  • 8+ years in DevOps engineering, with strong AWS Cloud expertise (EC2, VPC, TG, RDS, S3, IAM, EKS, EMR, SCP, MWAA, Lambda, CloudFront, SNS, SES etc.).
  • Linux expertise is mandatory (system administration, tuning, troubleshooting, CIS hardening etc).
  • Strong knowledge of databases: MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
  • Hands-on with Docker, Kubernetes (EKS), Terraform, CloudFormation, Ansible.
  • Proven ability with CI/CD pipeline automation and DevSecOps practices.
  • Practical experience with VPC Peering, Transit Gateway, WAF, Guard Duty, Inspector and advanced AWS networking and security tools.
  • Expertise in observability tools: Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
  • Strong scripting skills (Shell/bash, Python, or similar) for automation.
  • Bachelor / Master’s degree
  • Effective communication skills


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
Read more
NeoGenCode Technologies Pvt Ltd
Mumbai
5 - 10 yrs
₹10L - ₹24L / yr
DevOps
skill iconKubernetes
helm
GitOps
skill iconAmazon Web Services (AWS)
+2 more

Job Title : DevOps Engineer – Fintech (Product-Based)

Experience : 5+ Years

Location : Mumbai

Job Type : Full-Time | Product Company


Role Summary :

We are hiring a DevOps Engineer with strong product-based experience to manage infrastructure for a Fintech platform built on stateful microservices.

The role involves working across hybrid cloud + on-prem, with deep expertise in Kubernetes, Helm, GitOps, IaC, and Cloud Networking.


Mandatory Skills :

Product-based experience, deep Kubernetes (managed & self-managed), custom Helm Chart development, ArgoCD/FluxCD (GitOps), strong AWS/Azure cloud networking & security, IaC module development (Terraform/Pulumi/CloudFormation), experience with stateful microservices (DBs/queues/caches), multi-tenant deployments, HA/load balancing/SSL/TLS/cert management.


Key Responsibilities :

  • Deploy and manage stateful microservices in production.
  • Handle both managed & self-managed Kubernetes clusters.
  • Develop and maintain custom Helm Charts.
  • Implement GitOps pipelines using ArgoCD/FluxCD.
  • Architect and operate secure infra on AWS/Azure (VPC, IAM, networking).
  • Build reusable IaC modules using Terraform/CloudFormation/Pulumi.
  • Design multi-tenant cluster deployments.
  • Manage HA, load balancers, certificates, DNS, and networking.

Mandatory Skills :

  • Product-based company experience.
  • Strong Kubernetes (EKS/AKS/GKE + self-managed).
  • Custom Helm Chart development.
  • GitOps tools : ArgoCD/FluxCD.
  • AWS/Azure cloud networking & security.
  • IaC module development (Terraform/Pulumi/CloudFormation).
  • Experience with stateful components (DBs, queues, caches).
  • Understanding of multi-tenant deployments, HA, SSL/TLS, ingress, LB.
Read more
Tech AI startup in Bangalore

Tech AI startup in Bangalore

Agency job
via Recruit Square by Priyanka choudhary
Remote only
5 - 8 yrs
₹12L - ₹22L / yr
skill iconPostgreSQL
Windows Azure
Terraform
helm

Infrastructure Engineer – Database & Storage


Responsibilities

  • Design and maintain PostgreSQL, OpenSearch, and Azure Blob/S3 clusters.
  • Implement schema registry, metadata catalog, and time-versioned storage.
  • Configure read replicas, backups, encryption-at-rest, and WORM (Write Once Read Many) compliance.
  • Optimize query execution, indexing, and replication latency.
  • Partner with DevOps on infrastructure as code and cross-region replication.
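
As a hedged sketch of monitoring the replication latency mentioned above, the check below connects to a PostgreSQL standby with psycopg2 and reports apply lag in seconds; the DSN is a placeholder, and on a primary the query returns no lag to measure.

# Report replication (apply) lag in seconds when run against a PostgreSQL standby.
# The DSN is a placeholder; returns None on a primary, where there is no lag to measure.
import psycopg2

REPLICA_DSN = "host=replica.example.internal dbname=app user=monitor password=***"  # placeholder

def replication_lag_seconds(dsn: str):
    query = """
        SELECT CASE
                 WHEN pg_is_in_recovery()
                 THEN EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp()))
               END AS lag_seconds
    """
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(query)
            (lag,) = cur.fetchone()
    return lag

if __name__ == "__main__":
    lag = replication_lag_seconds(REPLICA_DSN)
    print("primary node (no lag to report)" if lag is None else f"replica lag: {lag:.1f}s")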

Requirements

  • 6+ years of database / data-infrastructure administration.
  • Mastery of indexing, partitioning, query tuning, sharding.
  • Proven experience deploying cloud-native DB stacks with Terraform or Helm.


Read more
Spark Eighteen
Rishabh Jain
Posted by Rishabh Jain
Delhi
5 - 10 yrs
₹23L - ₹30L / yr
cicd
skill iconAmazon Web Services (AWS)
skill iconDocker
skill iconKubernetes
skill iconJenkins
+2 more

About the Job

This is a full-time role for a Lead DevOps Engineer at Spark Eighteen. We are seeking an experienced DevOps professional to lead our infrastructure strategy, design resilient systems, and drive continuous improvement in our deployment processes. In this role, you will architect scalable solutions, mentor junior engineers, and ensure the highest standards of reliability and security across our cloud infrastructure. The job location is flexible with preference for the Delhi NCR region.


Responsibilities

  • Lead and mentor the DevOps/SRE team
  • Define and drive DevOps strategy and roadmaps
  • Oversee infrastructure automation and CI/CD at scale
  • Collaborate with architects, developers, and QA teams to integrate DevOps practices
  • Ensure security, compliance, and high availability of platforms
  • Own incident response, postmortems, and root cause analysis
  • Budgeting, team hiring, and performance evaluation


Requirements

Technical Skills

  • Bachelor's or Master's degree in Computer Science, Engineering, or related field.
  • 7+ years of professional DevOps experience with demonstrated progression.
  • Strong architecture and leadership background
  • Deep hands-on knowledge of infrastructure as code, CI/CD, and cloud
  • Proven experience with monitoring, security, and governance
  • Effective stakeholder and project management
  • Experience with tools like Jenkins, ArgoCD, Terraform, Vault, ELK, etc.
  • Strong understanding of business continuity and disaster recovery


Soft Skills

  • Cross-functional communication excellence with ability to lead technical discussions.
  • Strong mentorship capabilities for junior and mid-level team members.
  • Advanced strategic thinking and ability to propose innovative solutions.
  • Excellent knowledge transfer skills through documentation and training.
  • Ability to understand and align technical solutions with broader business strategy.
  • Proactive problem-solving approach with focus on continuous improvement.
  • Strong leadership skills in guiding team performance and technical direction.
  • Effective collaboration across development, QA, and business teams.
  • Ability to make complex technical decisions with minimal supervision.
  • Strategic approach to risk management and mitigation.


What We Offer

  • Professional Growth: Continuous learning opportunities through diverse projects and mentorship from experienced leaders
  • Global Exposure: Work with clients from 20+ countries, gaining insights into different markets and business cultures
  • Impactful Work: Contribute to projects that make a real difference, with solutions generating over $1B in revenue
  • Work-Life Balance: Flexible arrangements that respect personal wellbeing while fostering productivity
  • Career Advancement: Clear progression pathways as you develop skills within our growing organization
  • Competitive Compensation: Attractive salary packages that recognize your contributions and expertise


Our Culture

At Spark Eighteen, our culture centers on innovation, excellence, and growth. We believe in:

  • Quality-First: Delivering excellence rather than just quick solutions
  • True Partnership: Building relationships based on trust and mutual respect
  • Communication: Prioritizing clear, effective communication across teams
  • Innovation: Encouraging curiosity and creative approaches to problem-solving
  • Continuous Learning: Supporting professional development at all levels
  • Collaboration: Combining diverse perspectives to achieve shared goals
  • Impact: Measuring success by the value we create for clients and users


Apply Here - https://tinyurl.com/t6x23p9b

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Praffull Shinde
Posted by Praffull Shinde
Pune, Mumbai, Bengaluru (Bangalore)
8 - 14 yrs
Best in industry
Google Cloud Platform (GCP)
Terraform
skill iconKubernetes
DevOps
skill iconPython

JD for Cloud engineer

 

Job Summary:


We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancer, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.


You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.

 

Key Responsibilities:

1. Cloud Infrastructure Design & Management

  • Architect, deploy, and maintain GCP cloud resources via Terraform or other automation.
  • Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
  • Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
  • Optimize resource allocation, monitoring, and cost efficiency across GCP environments.


2. Kubernetes & Container Orchestration

  • Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
  • Work with Helm charts for microservices deployments.
  • Automate scaling, rolling updates, and zero-downtime deployments.
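
One hedged way to script such zero-downtime rollouts is the kubectl rollout restart pattern: patch the pod template annotation so the Deployment replaces pods gradually under its rolling-update strategy. The workload and namespace names below are placeholders.

# Trigger a rolling restart of a Deployment (the kubectl rollout restart pattern).
# Deployment and namespace names are placeholders; assumes kubeconfig access to the GKE cluster.
from datetime import datetime, timezone
from kubernetes import client, config

def rolling_restart(name: str, namespace: str) -> None:
    config.load_kube_config()
    apps = client.AppsV1Api()
    patch = {
        "spec": {
            "template": {
                "metadata": {
                    "annotations": {
                        "kubectl.kubernetes.io/restartedAt":
                            datetime.now(timezone.utc).isoformat()
                    }
                }
            }
        }
    }
    apps.patch_namespaced_deployment(name=name, namespace=namespace, body=patch)

if __name__ == "__main__":
    rolling_restart(name="checkout-api", namespace="default")  # hypothetical workload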

 

3. Serverless & Compute Services

  • Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
  • Optimize containerized applications running on Cloud Run for cost efficiency and performance.

 

4. CI/CD & DevOps Automation

  • Design, implement, and manage CI/CD pipelines using Azure DevOps.
  • Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.
  • Integrate security and compliance checks into the DevOps workflow (DevSecOps).

 

 

Required Skills & Qualifications:

Experience: 8+ years in Cloud Engineering, with a focus on GCP.

Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).

Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.

DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.

Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.

Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.

Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Janane Mohanasankaran
Posted by Janane Mohanasankaran
Bengaluru (Bangalore), Mumbai, Pune
7 - 12 yrs
₹1L - ₹45L / yr
Google Cloud Platform (GCP)
skill iconKubernetes
skill iconDocker
google kubernetes engineer
azure devops
+2 more

Required Skills & Qualifications:

✔ Experience: 4+ years in Cloud Engineering, with a focus on GCP.

✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).

✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.

✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.

✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.

✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.

Read more
Reliable Group

at Reliable Group

2 candid answers
Nilesh Gend
Posted by Nilesh Gend
Pune
5 - 12 yrs
₹15L - ₹35L / yr
Google Cloud Platform (GCP)
Ansible
Terraform

Job Title: GCP Cloud Engineer/Lead


Location: Pune, Balewadi

Shift / Time Zone: 1:30 PM – 10:30 PM IST (3:00 AM – 12:00 PM EST, 3–4 hours overlap with US Eastern Time)


Role Summary

We are seeking an experienced GCP Cloud Engineer to join our team supporting CVS. The ideal candidate will have a strong background in Google Cloud Platform (GCP) architecture, automation, microservices, and Kubernetes, along with the ability to translate business strategy into actionable technical initiatives. This role requires a blend of hands-on technical expertise, cross-functional collaboration, and customer engagement to ensure scalable and secure cloud solutions.


Key Responsibilities

  • Design, implement, and manage cloud infrastructure on Google Cloud Platform (GCP) leveraging best practices for scalability, performance, and cost efficiency.
  • Develop and maintain microservices-based architectures and containerized deployments using Kubernetes and related technologies.
  • Evaluate and recommend new tools, services, and architectures that align with enterprise cloud strategies.
  • Collaborate closely with Infrastructure Engineering Leadership to translate long-term customer strategies into actionable enablement plans, onboarding frameworks, and proactive support programs.
  • Act as a bridge between customers, Product Management, and Engineering teams, translating business needs into technical requirements and providing strategic feedback to influence product direction.
  • Identify and mitigate technical risks and roadblocks in collaboration with executive stakeholders and engineering teams.
  • Advocate for customer needs within the engineering organization to enhance adoption, performance, and cost optimization.
  • Contribute to the development of Customer Success methodologies and mentor other engineers in best practices.


Must-Have Skills

  • 8+ years of total experience, with 5+ years specifically as a GCP Cloud Engineer.
  • Deep expertise in Google Cloud Platform (GCP) — including Compute Engine, Cloud Storage, Networking, IAM, and Cloud Functions.
  • Strong experience in microservices-based architecture and Kubernetes container orchestration.
  • Hands-on experience with infrastructure automation tools (Terraform, Ansible, or similar).
  • Proven ability to design, automate, and optimize CI/CD pipelines for cloud workloads.
  • Excellent problem-solving, communication, and collaboration skills.
  • GCP Professional Certification (Cloud Architect / DevOps Engineer / Cloud Engineer) preferred or in progress.
  • Ability to multitask effectively in a fast-paced, dynamic environment with shifting priorities.


Good-to-Have Skills

  • Experience with Cloud Monitoring, Logging, and Security best practices in GCP.
  • Exposure to DevOps tools (Jenkins, GitHub Actions, ArgoCD, or similar).
  • Familiarity with multi-cloud or hybrid-cloud environments.
  • Knowledge of Python, Go, or Shell scripting for automation and infrastructure management.
  • Understanding of network design, VPC architecture, and service mesh (Istio/Anthos).
  • Experience working with enterprise-scale customers and cross-functional product teams.
  • Strong presentation and stakeholder communication skills, particularly with executive audiences.


Read more
Reliable Group

at Reliable Group

2 candid answers
Nilesh Gend
Posted by Nilesh Gend
Pune
10 - 16 yrs
₹15L - ₹40L / yr
Quality control
skill iconAmazon Web Services (AWS)
Terraform
Amazon VPC
IAM

Position: AWS Cloud Lead Engineer / Architect

Location: Smartworks, 43EQ, Balewadi High Street, Pune

Shift: 4:30 PM IST – 1:30 AM IST (Remote for the first 3 months; after that, regular general timings, 5 days from office)


About Reliable Group


Reliable Group is a US-based company headquartered in New York, with two offices in India:

  • New Mumbai (Airoli)
  • Smartworks, 43EQ, Balewadi High Street, Pune


We operate across three key business verticals:

  • On-Demand – Providing specialized technology talent for global clients.
  • GCC (Global Capability Centers) – Partnering with enterprises to build and scale their India operations.
  • Product Development – Our in-house AI/ML product company develops AI chatbots and intelligent solutions for US healthcare and insurance companies.


About This Opportunity

This role is for one of Reliable Group’s biggest GCC accounts (RSC India), which we are building in Pune. We are on a mission to hire 1,000+ people for this account over the next phase.

You will be joining the founding team for this GCC and playing a critical role in shaping its AWS cloud infrastructure from the ground up.

The client is the second-largest healthcare company in the USA, ranked in the Fortune 50, offering a unique opportunity to work on high-impact, enterprise-scale cloud solutions in the healthcare domain.


We are seeking a highly skilled AWS Cloud Lead Engineer / Architect with deep hands-on experience in designing, implementing, and automating enterprise-grade AWS environments. The ideal candidate will possess a strong command of multi-account provisioning, networking, security, and DevOps practices (IaC, CI/CD, automation).


This role demands an implementation-oriented engineer: someone who understands how to design cloud solutions and can also build, configure, and troubleshoot complex AWS environments independently. The candidate will lead engineering teams through end-to-end delivery, ensuring secure, scalable, and compliant AWS deployments aligned with best practices.


Key Responsibilities 


1. AWS Environment Provisioning 


  • Define and implement multi-account environments using AWS Control Tower or Organizations. 
  • Define account structure, guardrails, and IAM governance aligned with enterprise security policies. 
  • Standardize AWS landing zones for multiple business units.
  • Implement services across compute, storage, databases, networking, etc.


2. Networking 


  • Design and deploy VPCs, subnets, and routing architectures across multiple regions and accounts. 
  • Implement Transit Gateway, VPC Peering, PrivateLink, and Direct Connect for hybrid connectivity. 
  • Configure network security, firewalls, and NACLs for private and public access patterns. 


3. DevOps 


  • Develop reusable Terraform modules and CloudFormation stacks for repeatable provisioning. 
  • Implement version-controlled CI/CD pipelines using tools like GitHub Actions, Jenkins, or AWS CodePipeline leveraging IaC.
  • Microservices deployment on ECS/EKS.
  • Build automation scripts using Python, Bash, or PowerShell for orchestration and monitoring.


4. Security, Compliance & Governance 


  • Configure IAM roles/policies, service control policies (SCPs), and cross-account access models. 
  • Implement encryption (KMS, SSL/TLS), CloudTrail auditing, and compliance enforcement via AWS Config, GuardDuty, and Security Hub. 
  • Participate in cloud security assessments and remediation plans.


5. Observability 


  • Integrate Observability into infrastructure, applications, security, logging and network. 
  • Deploy and configure CloudWatch, Grafana, and Prometheus for full-stack observability. 
  • Define automated alerting, log retention, distributed tracing and performance dashboards (APM). 
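
As a hedged example of adding custom signals to that stack, the tiny exporter below publishes a gauge on a port Prometheus can scrape; the metric name and the value source are illustrative only.

# Tiny Prometheus exporter: exposes a gauge on :9100/metrics for Prometheus to scrape.
# Metric name and the measured value are illustrative placeholders.
import random
import time

from prometheus_client import Gauge, start_http_server

queue_depth = Gauge("app_ingest_queue_depth", "Pending items in the ingest queue")

def collect_queue_depth() -> int:
    # Placeholder: a real exporter would query SQS, Kafka, or a database here.
    return random.randint(0, 500)

if __name__ == "__main__":
    start_http_server(9100)
    while True:
        queue_depth.set(collect_queue_depth())
        time.sleep(15)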


6. Collaboration & Leadership 


  • Partner with internal teams to design and build robust cloud-native solutions. 
  • Mentor junior engineers on best practices in cloud provisioning, automation, and troubleshooting. 
  • Contribute to architecture governance and technical design reviews. 


Required Skills:


  • 10+ years in IT, with 6+ years in hands-on AWS cloud implementation and management.
  • Proven expertise in AWS multi-account implementation, VPC networking, and cross-region deployments leveraging multiple AWS services across compute, storage, networking, databases, and security.
  • Strong experience with Terraform and CloudFormation for IaC automation. 
  • Proficiency in Python / Bash / PowerShell scripting for automation and operational tooling. 
  • Solid understanding of IAM, Security Hub, GuardDuty, and AWS Config for governance. Experience integrating infrastructure builds into CI/CD workflows using Jenkins, GitHub Actions, or AWS-native tools. 
  • Good with monitoring frameworks — CloudWatch, Prometheus, Grafana. 
  • Hands-on troubleshooting and root cause analysis in distributed AWS environments.
  • Good with Kubernetes (EKS), containerization (Docker), and serverless architectures deployment and management.


Added Advantage:


  • Good understanding of multi-cloud architectures and design patterns.
  • AWS Certified DevOps Professional
  • AWS Certified SysOps/Developer– Associate 
  • Terraform Associate


Read more
QAgile Services

at QAgile Services

1 recruiter
Radhika Chotai
Posted by Radhika Chotai
Noida
3 - 6 yrs
₹5L - ₹12L / yr
DevOps
Windows Azure
AWS CloudFormation
skill iconAmazon Web Services (AWS)
skill iconKubernetes
+3 more

We seek a skilled and motivated Azure DevOps engineer to join our dynamic team. The ideal candidate will design, implement, and manage CI/CD pipelines, automate deployments, and optimize cloud infrastructure using Azure DevOps tools and services. You will collaborate closely with development and IT teams to ensure seamless integration and delivery of software solutions in a fast-paced environment.

Responsibilities:

  • Design, implement, and manage CI/CD pipelines using Azure DevOps.
  • Automate infrastructure provisioning and deployments using Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Azure CLI.
  • Monitor and optimize Azure environments to ensure high availability, performance, and security.
  • Collaborate with development, QA, and IT teams to streamline the software development lifecycle (SDLC).
  • Troubleshoot and resolve issues related to build, deployment, and infrastructure.
  • Implement and manage version control systems, primarily using Git.
  • Manage containerization and orchestration using tools like Docker and Kubernetes.
  • Ensure compliance with industry standards and best practices for security, scalability, and reliability.


Read more
IT Company

IT Company

Agency job
via Jobdost by Saida Pathan
Pune
10 - 15 yrs
₹28L - ₹30L / yr
AWS CloudFormation
Terraform
AWS RDS
DynamoDB
Apache Aurora
+1 more

Description


Role Overview

We are seeking a highly skilled AWS Cloud Architect with proven experience in building AWS environments from the ground up—not just consuming existing services. This role requires an AWS builder mindset, capable of designing, provisioning, and managing multi-account AWS architectures, networking, security, and database platforms end-to-end.

Key Responsibilities

AWS Environment Provisioning:

- Design and provision multi-account AWS environments using best practices (Control Tower, Organizations).

- Set up and configure networking (VPC, Transit Gateway, Private Endpoints, Subnets, Routing, Firewalls).

- Provision and manage AWS database platforms (RDS, Aurora, DynamoDB) with high availability and security.

- Manage full AWS account lifecycle, including IAM roles, policies, and access controls.

Infrastructure as Code (IaC):

- Develop and maintain AWS infrastructure using Terraform and AWS CloudFormation.

- Automate account provisioning, networking, and security configuration.

Security & Compliance:

- Implement AWS security best practices, including IAM governance, encryption, and compliance automation.

- Use tools like AWS Config, GuardDuty, Security Hub, and Vault to enforce standards.

Automation & CI/CD:

- Create automation scripts in Python, Bash, or PowerShell for provisioning and management tasks.

- Integrate AWS infrastructure with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI/CD).

Monitoring & Optimization:

- Implement monitoring solutions (CloudWatch, Prometheus, Grafana) for infrastructure health and performance.

- Optimize cost, performance, and scalability of AWS environments.

Required Skills & Experience:

- 10+ years of experience in Cloud Engineering, with 7+ years focused on AWS provisioning.

Strong expertise in(Must Have):

 • AWS multi-account setup (Control Tower/Organizations)

 • VPC design and networking (Transit Gateway, Private Endpoints, routing, firewalls)

 • IAM policies, role-based access control, and security hardening

 • Database provisioning (RDS, Aurora, DynamoDB)

- Proficiency in Terraform and AWS CloudFormation.

- Hands-on experience with scripting (Python, Bash, PowerShell).

- Experience with CI/CD pipelines and automation tools.

- Familiarity with monitoring and logging tools.

Preferred Certifications

- AWS Certified Solutions Architect – Professional

- AWS Certified DevOps Engineer – Professional

- HashiCorp Certified: Terraform Associate


Looking for Immediate Joiners or 15 days of Notice period candidates Only.

• Should have created more than 200 or 300 accounts from scratch using control towers or AWS services.

• Should have at least 7 years of working experience in AWS

 

First 3 months will be remote (with office timings: 4:30 PM to 1:30 AM).

After 3 months, it will be WFO (with standard office timings).


Read more
Euphoric Thought Technologies
Bengaluru (Bangalore)
8 - 12 yrs
₹15L - ₹28L / yr
skill iconAmazon Web Services (AWS)
skill iconKubernetes
Ansible
Terraform
skill iconJenkins
+2 more

Job Summary :


We are looking for a proactive and skilled Senior DevOps Engineer to join our team and play a key role in building, managing, and scaling infrastructure for high-performance systems. The ideal candidate will have hands-on experience with Kubernetes, Docker, Python scripting, cloud platforms, and DevOps practices around CI/CD, monitoring, and incident response.


Key Responsibilities :


- Design, build, and maintain scalable, reliable, and secure infrastructure on cloud platforms (AWS, GCP, or Azure).


- Implement Infrastructure as Code (IaC) using tools like Terraform, CloudFormation, or similar.


- Manage Kubernetes clusters, configure namespaces, services, deployments, and auto scaling.


CI/CD & Release Management :


- Build and optimize CI/CD pipelines for automated testing, building, and deployment of services.


- Collaborate with developers to ensure smooth and frequent deployments to production.


- Manage versioning and rollback strategies for critical deployments.


Containerization & Orchestration using Kubernetes :


- Containerize applications using Docker, and manage them using Kubernetes.


- Write automation scripts using Python or Shell for infrastructure tasks, monitoring, and deployment flows.


- Develop utilities and tools to enhance operational efficiency and reliability.


Monitoring & Incident Management :


- Analyze system performance and implement infrastructure scaling strategies based on load and usage trends.


- Optimize application and system performance through proactive monitoring and configuration tuning.


Desired Skills and Experience :


- Experience Required - 8+ yrs.


- Hands-on experience with cloud services like AWS, EKS, etc.


- Ability to design a good cloud solution.


- Strong Linux troubleshooting, shell scripting, Kubernetes, Docker, Ansible, and Jenkins skills.


- Design and implement the CI/CD pipeline following the best industry practices using open-source tools.


- Use knowledge and research to constantly modernize our applications and infrastructure stacks.


- Be a team player and strong problem-solver to work with a diverse team.


- Having good communication skills.

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Bipasha Rath
Posted by Bipasha Rath
Mumbai, Pune
5 - 9 yrs
Best in industry
Google Cloud Platform (GCP)
Terraform
IaC
Azure

We are seeking a Cloud Developer with experience in GCP/Azure along with Terraform coding. They will help manage and standardize the IaC modules.

 

Experience: 5 - 8 Years

Location: Mumbai & Pune

Mode of Work: Full Time

 Key Responsibilities:

  • Design, develop, and maintain robust software applications using the most common and popular coding languages suitable for the application design, with a strong focus on clean, maintainable, and efficient code.
  • Develop, maintain, and enhance Terraform modules to encapsulate common infrastructure patterns and promote code reuse and standardization.
  • Develop RESTful APIs and backend services aligned with modern architectural practices.
  • Apply object-oriented programming principles and design patterns to build scalable systems.
  • Build and maintain automated test frameworks and scripts to ensure high product quality.
  • Troubleshoot and resolve technical issues across application layers, from code to infrastructure.
  • Work with cloud platforms such as Azure or Google Cloud Platform (GCP).
  • Use Git and related version control practices effectively in a team-based development environment.
  • Integrate and experiment with AI development tools like GitHub Copilot, Azure OpenAI, or similar to boost engineering efficiency.

 

Requirements:

  • 5+ years of experience
  • Experience with IaC modules
  • Terraform coding experience, including Terraform module development as part of a central platform team
  • Azure/GCP cloud experience is a must
  • Experience with C#/Python/Java coding is good to have

 

If interested, please share your updated resume with the below details:

Total Experience -

Relevant Experience -

Current Location -

Current CTC -

Expected CTC -

Notice period -

Any offer in hand -


Read more
Agentic AI Platform

Agentic AI Platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Gurugram
3 - 6 yrs
₹10L - ₹25L / yr
DevOps
skill iconPython
Google Cloud Platform (GCP)
Linux/Unix
CI/CD
+21 more

Review Criteria

  • Strong DevOps /Cloud Engineer Profiles
  • Must have 3+ years of experience as a DevOps / Cloud Engineer
  • Must have strong expertise in cloud platforms – AWS / Azure / GCP (any one or more)
  • Must have strong hands-on experience in Linux administration and system management
  • Must have hands-on experience with containerization and orchestration tools such as Docker and Kubernetes
  • Must have experience in building and optimizing CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins
  • Must have hands-on experience with Infrastructure-as-Code tools such as Terraform, Ansible, or CloudFormation
  • Must be proficient in scripting languages such as Python or Bash for automation
  • Must have experience with monitoring and alerting tools like Prometheus, Grafana, ELK, or CloudWatch
  • Top tier Product-based company (B2B Enterprise SaaS preferred)


Preferred

  • Experience in multi-tenant SaaS infrastructure scaling.
  • Exposure to AI/ML pipeline deployments or iPaaS / reverse ETL connectors.


Role & Responsibilities

We are seeking a DevOps Engineer to design, build, and maintain scalable, secure, and resilient infrastructure for our SaaS platform and AI-driven products. The role will focus on cloud infrastructure, CI/CD pipelines, container orchestration, monitoring, and security automation, enabling rapid and reliable software delivery.


Key Responsibilities:

  • Design, implement, and manage cloud-native infrastructure (AWS/Azure/GCP).
  • Build and optimize CI/CD pipelines to support rapid release cycles.
  • Manage containerization & orchestration (Docker, Kubernetes).
  • Own infrastructure-as-code (Terraform, Ansible, CloudFormation).
  • Set up and maintain monitoring & alerting frameworks (Prometheus, Grafana, ELK, etc.).
  • Drive cloud security automation (IAM, SSL, secrets management); a minimal sketch follows after this list.
  • Partner with engineering teams to embed DevOps into SDLC.
  • Troubleshoot production issues and drive incident response.
  • Support multi-tenant SaaS scaling strategies.
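
To make the cloud security automation item concrete, a hedged sketch using boto3 and AWS Secrets Manager; the secret name, region, and rotation Lambda ARN are placeholders, and the equivalents differ on Azure/GCP:

```python
# Hedged sketch: fetch a secret at deploy time and enable automatic rotation so
# credentials never live in code or CI variables. All identifiers are illustrative.
import json
import boto3

sm = boto3.client("secretsmanager", region_name="ap-south-1")

def fetch_db_credentials(secret_id: str = "prod/app/db") -> dict:
    resp = sm.get_secret_value(SecretId=secret_id)
    return json.loads(resp["SecretString"])

def enable_rotation(secret_id: str, rotation_lambda_arn: str, days: int = 30) -> None:
    sm.rotate_secret(
        SecretId=secret_id,
        RotationLambdaARN=rotation_lambda_arn,
        RotationRules={"AutomaticallyAfterDays": days},
    )

if __name__ == "__main__":
    creds = fetch_db_credentials()
    print("connecting as", creds.get("username"))
```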


Ideal Candidate

  • 3–6 years' experience as DevOps/Cloud Engineer in SaaS or enterprise environments.
  • Strong expertise in AWS, Azure, or GCP.
  • Strong expertise in Linux administration.
  • Hands-on with Kubernetes, Docker, CI/CD tools (GitHub Actions, GitLab, Jenkins).
  • Proficient in Terraform/Ansible/CloudFormation.
  • Strong scripting skills (Python, Bash).
  • Experience with monitoring stacks (Prometheus, Grafana, ELK, CloudWatch).
  • Strong grasp of cloud security best practices.



Read more
HaystackAnalytics
Careers Hr
Posted by Careers Hr
Navi Mumbai
2 - 4 yrs
₹5L - ₹10L / yr
skill iconNextJs (Next.js)
skill iconReact.js
skill iconReact Native
skill iconNodeJS (Node.js)
skill iconPython
+11 more


Job Description


Position - Full Stack Developer

Location - Mumbai

Experience - 2-5 Years


Who are we

Based out of IIT Bombay, HaystackAnalytics is a HealthTech company creating clinical genomics products, which enable diagnostic labs and hospitals to offer accurate and personalized diagnostics. Supported by India's most respected science agencies (DST, BIRAC, DBT), we created and launched a portfolio of products to offer genomics in infectious diseases. Our genomics-based diagnostic solution for Tuberculosis was recognized as one of the top innovations supported by BIRAC in the past 10 years, and was launched by the Prime Minister of India at the BIRAC Showcase event in Delhi in 2022.


Objectives of this Role:

  • Work across the full stack, building highly scalable distributed solutions that enable positive user experiences and measurable business growth
  • Ideate and develop new product features in collaboration with domain experts in healthcare and genomics 
  • Develop state-of-the-art, enterprise-standard front-end and backend services
  • Develop cloud platform services based on a container orchestration platform
  • Continuously embrace automation for repetitive tasks
  • Ensure application performance, uptime, and scale, maintaining high standards of code quality by using clean coding principles and solid design patterns
  • Build robust tech modules that are unit-testable, automating recurring tasks and processes
  • Engage effectively with team members and collaborate to upskill and unblock each other



Frontend Skills 

  • HTML 5
  • CSS frameworks (LESS / SASS / Tailwind)
  • ES6 / TypeScript
  • Electron app / Tauri
  • Component libraries (Bootstrap, Material UI, Lit)
  • Responsive web layouts (Flex layout, Grid layout)
  • Package managers: yarn / npm / turbo
  • Build tools: Vite / Webpack / Parcel
  • Frameworks: React with Redux or MobX / Next.js
  • Design patterns
  • Testing: Jest / Mocha / Jasmine / Cypress
  • Functional programming concepts
  • Scripting (PowerShell, Bash, Python)



Backend Skills 

  • Node.js - Express / NestJS
  • Python / Rust
  • REST APIs
  • SOLID design principles
  • Databases (PostgreSQL / MySQL / Redis / Cassandra / MongoDB)
  • Caching (Redis); a minimal cache-aside sketch follows after this list
  • Container technology (Docker / Kubernetes)
  • Cloud (Azure, AWS, OpenShift, Google Cloud)
  • Version control - Git
  • GitOps
  • Automation (Terraform, Ansible)
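
For the caching item above, an illustrative cache-aside pattern with redis-py; the key naming and the stand-in lookup function are hypothetical:

```python
# Hedged sketch of cache-aside with Redis: check the cache, fall back to a slow
# lookup, and write the result back with a TTL. Connection details are placeholders.
import json
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def expensive_lookup(user_id: int) -> dict:
    # placeholder for a slow database or API call
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id: int, ttl_seconds: int = 300) -> dict:
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    value = expensive_lookup(user_id)
    cache.setex(key, ttl_seconds, json.dumps(value))  # cache with expiry
    return value
```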


Cloud  Skills 

  • Object storage
  • VPC concepts
  • Containerized deployments
  • Serverless architecture


Other  Skills 

  • Innovation and thought leadership
  • UI - UX design skills  
  • Interest in learning new tools, languages, workflows, and philosophies to grow
  • Communication 


To know more about us- https://haystackanalytics.in/




Read more
GLOBAL DIGITAL TRANSFORMATION SOLUTIONS PROVIDER

GLOBAL DIGITAL TRANSFORMATION SOLUTIONS PROVIDER

Agency job
via Peak Hire Solutions by Dhara Thakkar
Thiruvananthapuram, Trivandrum, Bengaluru (Bangalore), Mumbai, Navi Mumbai, Ahmedabad, Chennai, Coimbatore, Gurugram, Hyderabad, Kochi (Cochin), Kolkata, Calcutta, Noida, Pune
8 - 12 yrs
₹20L - ₹40L / yr
skill icon.NET
Agile/Scrum
skill iconVue.js
Software Development
API
+21 more

Job Position: Lead II - Software Engineering

Domain: Information technology (IT)

Location: India - Thiruvananthapuram

Salary: Best in Industry

Job Positions: 1

Experience: 8 - 12 Years

Skills: .NET, SQL Azure, REST API, Vue.js

Notice Period: Immediate – 30 Days


Job Summary:

We are looking for a highly skilled Senior .NET Developer with a minimum of 7 years of experience across the full software development lifecycle, including post-live support. The ideal candidate will have a strong background in .NET backend API development, Agile methodologies, and Cloud infrastructure (preferably Azure). You will play a key role in solution design, development, DevOps pipeline enhancement, and mentoring junior engineers.


Key Responsibilities:

  • Design, develop, and maintain scalable and secure .NET backend APIs.
  • Collaborate with product owners and stakeholders to understand requirements and translate them into technical solutions.
  • Lead and contribute to Agile software delivery processes (Scrum, Kanban).
  • Develop and improve CI/CD pipelines and support release cadence targets, using Infrastructure as Code tools (e.g., Terraform).
  • Provide post-live support, troubleshooting, and issue resolution as part of full lifecycle responsibilities.
  • Implement unit and integration testing to ensure code quality and system stability.
  • Work closely with DevOps and cloud engineering teams to manage deployments on Azure (Web Apps, Container Apps, Functions, SQL).
  • Contribute to front-end components when necessary, leveraging HTML, CSS, and JavaScript UI frameworks.
  • Mentor and coach engineers within a co-located or distributed team environment.
  • Maintain best practices in code versioning, testing, and documentation.


Mandatory Skills:

  • 7+ years of .NET development experience, including API design and development
  • Strong experience with Azure Cloud services, including:
  • Web/Container Apps
  • Azure Functions
  • Azure SQL Server
  • Solid understanding of Agile development methodologies (Scrum/Kanban)
  • Experience in CI/CD pipeline design and implementation
  • Proficient in Infrastructure as Code (IaC) – preferably Terraform
  • Strong knowledge of RESTful services and JSON-based APIs
  • Experience with unit and integration testing techniques
  • Source control using Git
  • Strong understanding of HTML, CSS, and cross-browser compatibility


Good-to-Have Skills:

  • Experience with Kubernetes and Docker
  • Knowledge of JavaScript UI frameworks, ideally Vue.js
  • Familiarity with JIRA and Agile project tracking tools
  • Exposure to Database as a Service (DBaaS) and Platform as a Service (PaaS) concepts
  • Experience mentoring or coaching junior developers
  • Strong problem-solving and communication skills
Read more
Wissen Technology

at Wissen Technology

4 recruiters
Bipasha Rath
Posted by Bipasha Rath
Pune, Mumbai
5 - 8 yrs
Best in industry
Google Cloud Platform (GCP)
AZURE
Terraform
skill icon.NET
skill iconPython
+2 more

Job Description:


Position - Cloud Developer

Experience - 5 - 8 years

Location - Mumbai & Pune


Responsibilities:

  • Design, develop, and maintain robust software applications using most common and popular coding languages suitable for the application design, with a strong focus on clean, maintainable, and efficient code.
  • Develop, maintain, and enhance Terraform modules to encapsulate common infrastructure patterns and promote code reuse and standardization.
  • Develop RESTful APIs and backend services aligned with modern architectural practices.
  • Apply object-oriented programming principles and design patterns to build scalable systems.
  • Build and maintain automated test frameworks and scripts to ensure high product quality.
  • Troubleshoot and resolve technical issues across application layers, from code to infrastructure.
  • Work with cloud platforms such as Azure or Google Cloud Platform (GCP).
  • Use Git and related version control practices effectively in a team-based development environment.
  • Integrate and experiment with AI development tools like GitHub Copilot, Azure OpenAI, or similar to boost engineering efficiency.


Skills:

  • 5+ years of experience
  • Experience with IaC modules
  • Terraform coding experience, including building Terraform modules as part of a central platform team
  • Azure/GCP cloud experience is a must
  • Experience with C#/Python/Java coding is good to have


Read more
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Mohali, Dehradun, Panchkula, Chennai
6 - 14 yrs
₹12L - ₹28L / yr
Test Automation (QA)
skill iconKubernetes
helm
skill iconDocker
skill iconAmazon Web Services (AWS)
+13 more

Job Title : Senior QA Automation Architect (Cloud & Kubernetes)

Experience : 6+ Years

Location : India (Multiple Offices)

Shift Timings : 12 PM to 9 PM (Noon Shift)

Working Days : 5 Days WFO (NO Hybrid)


About the Role :

We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.

You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.


Key Responsibilities :

  • Architect and maintain test automation frameworks for microservices; a minimal test sketch follows after this list.
  • Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
  • Ensure reliability, scalability, and observability of test systems.
  • Work closely with DevOps and Cloud teams to streamline automation infrastructure.
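
As a flavour of what such a framework might run inside CI, a hedged pytest-plus-requests sketch; the SERVICE_URL variable and endpoints are placeholders rather than anything from this posting:

```python
# Hedged sketch of service-level checks for a microservice, suitable for a CI stage.
import os
import pytest
import requests

BASE_URL = os.environ.get("SERVICE_URL", "http://localhost:8080")  # placeholder

def test_health_endpoint_is_up():
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"

@pytest.mark.parametrize("path", ["/ready", "/metrics"])
def test_operational_endpoints_respond(path):
    resp = requests.get(f"{BASE_URL}{path}", timeout=5)
    assert resp.status_code == 200
```

In a Jenkins or GitHub Actions job this would typically be invoked as `pytest -q --junitxml=report.xml` so results feed back into the pipeline.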

Mandatory Skills :

  • Kubernetes, Helm, Docker, Linux
  • Cloud Platforms : AWS / Azure / GCP
  • CI/CD Tools : Jenkins, GitHub Actions
  • Scripting : Python, Pytest, Bash
  • Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
  • IaC Practices : Terraform / Ansible

Good to Have :

  • Experience with Service Mesh (Istio/Linkerd).
  • Container Security or DevSecOps exposure.
Read more
Webkul Software PvtLtd
Avantika Giri
Posted by Avantika Giri
Noida
2 - 5 yrs
₹5L - ₹15L / yr
skill iconAmazon Web Services (AWS)
AWS Lambda
DevOps
Cloud Computing
Amazon EC2
+10 more

Job Specification:

  • Job Location - Noida
  • Experience - 2-5 Years
  • Qualification - B.Tech, BE, MCA (Technical background required)
  • Working Days - 5
  • Job nature - Permanent
  • Role IT Cloud Engineer
  • Proficient in Linux.
  • Hands-on experience with AWS Cloud or Google Cloud.
  • Knowledge of container technology like Docker.
  • Expertise in scripting languages (shell scripting or Python scripting).
  • Working knowledge of the LAMP/LEMP stack, networking, and version control systems like GitLab or GitHub.

Job Description:

The incumbent would be responsible for:

  • Deployment of various infrastructures on Cloud platforms like AWS, GCP, Azure, OVH etc.
  • Server monitoring, analysis and troubleshooting.
  • Deploying multi-tier architectures using microservices.
  • Integration of Container technologies like Docker, Kubernetes etc as per application requirement.
  • Automating workflows with Python or shell scripting; a minimal sketch follows after this list.
  • CI and CD integration for application lifecycle management.
  • Hosting and managing websites on Linux machines.
  • Frontend, backend and database optimization.
  • Protecting operations by keeping information confidential.
  • Providing information by collecting, analyzing, and summarizing development and service issues.
  • Preparing and installing solutions by determining and designing system specifications, standards, and programming.
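
A hedged example of the kind of workflow automation referenced above: a boto3 script that inventories running EC2 instances and flags untagged ones. The region and tag conventions are assumptions, not requirements from the posting:

```python
# Hedged sketch: list running EC2 instances and report any without a Name tag.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # placeholder region

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            name = tags.get("Name", "<untagged>")
            print(f"{inst['InstanceId']}  {inst['InstanceType']}  {name}")
```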


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Amita Soni
Posted by Amita Soni
Pune
5 - 15 yrs
Best in industry
skill iconPython
Terraform


Senior SRE Developer 

 

The Site Reliability Engineer (SRE) position is a software development-oriented role, focusing heavily on coding, automation, and ensuring the stability and reliability of our global platform. The ideal candidate will primarily be a skilled software developer capable of participating in on-call rotations. The SRE team develops sophisticated telemetry and automation tools, proactively monitoring platform health and executing automated corrective actions. As guardians of the production environment, the SRE team leverages advanced telemetry to anticipate and mitigate issues, ensuring continuous platform stability.

Responsibilities:

  • Develop and maintain advanced telemetry and automation tools for monitoring and managing global platform health; a minimal sketch follows after this list.
  • Actively participate in on-call rotations, swiftly diagnosing and resolving system issues and escalations from the customer support team (this is not a customer-facing role).
  • Implement automated solutions for incident response, system optimization, and reliability improvement.
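
A minimal sketch of the telemetry/automation flavour described in the first bullet, combining the multi-threaded Python and CloudWatch experience listed below; the metric namespace, host name, and interval are illustrative:

```python
# Hedged sketch: a small threaded poller that samples a local health signal and
# publishes it to CloudWatch as a custom metric. All names are placeholders.
import shutil
import threading
import time
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def publish_disk_usage(host: str, interval: int = 60) -> None:
    while True:
        usage = shutil.disk_usage("/")
        pct_used = usage.used / usage.total * 100
        cloudwatch.put_metric_data(
            Namespace="Platform/Telemetry",
            MetricData=[{
                "MetricName": "RootDiskUsedPercent",
                "Dimensions": [{"Name": "Host", "Value": host}],
                "Value": pct_used,
                "Unit": "Percent",
            }],
        )
        time.sleep(interval)

if __name__ == "__main__":
    worker = threading.Thread(target=publish_disk_usage, args=("app-host-1",), daemon=True)
    worker.start()
    worker.join()  # keep the process alive; real tooling would run several pollers
```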

Requirements:

Software Development:

  • 3+ years of professional Python development experience.
  • Strong grasp of Python object-oriented programming concepts and inheritance.
  • Experience developing multi-threaded Python applications.
  • 2+ years of experience using Terraform, with proficiency in creating modules and submodules from scratch.
  • Proficiency or willingness to learn Golang.

Operating Systems:

  • Experience with Linux operating systems.
  • Strong understanding of monitoring critical system health parameters.

Cloud:

  • 3+ years of hands-on experience with AWS services including EC2, Lambda, CloudWatch, EKS, ELB, RDS, DynamoDB, and SQS.
  • AWS Associate-level certification or higher preferred.

Networking:

  • Basic understanding of network protocols: TCP/IP, DNS, HTTP, and load balancing concepts.

Additional Qualifications (Preferred):

  • Familiarity with trading systems and low-latency environments is advantageous but not required.


Read more
Deqode

at Deqode

1 recruiter
Shraddha Katare
Posted by Shraddha Katare
Bengaluru (Bangalore), Pune, Chennai, Hyderabad, Gurugram
5 - 7 yrs
₹7L - ₹15L / yr
skill iconAmazon Web Services (AWS)
DevOps
Terraform

Job Title: AWS DevOps Engineer

Experience Level: 5+ Years

Location: Bangalore, Pune, Hyderabad, Chennai and Gurgaon

Summary:

We are looking for a hands-on Platform Engineer with strong execution skills to provision and manage cloud infrastructure. The ideal candidate will have experience with Linux, AWS services, Kubernetes, and Terraform, and should be capable of troubleshooting complex issues in cloud and container environments.

Key Responsibilities:

  • Provision AWS infrastructure using Terraform (IaC).
  • Manage and troubleshoot Kubernetes clusters (EKS/ECS).
  • Work with core AWS services: VPC, EC2, S3, RDS, Lambda, ALB, WAF, and CloudFront.
  • Support CI/CD pipelines using Jenkins and GitHub; a minimal trigger sketch follows after this list.
  • Collaborate with teams to resolve infrastructure and deployment issues.
  • Maintain documentation of infrastructure and operational procedures.
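
As a small illustration of the CI/CD support bullet, a hedged sketch that triggers a parameterised Jenkins job over its REST endpoint; the URL, job name, and credentials are placeholders, and real pipelines are more often triggered by SCM webhooks:

```python
# Hedged sketch: kick off a Jenkins job remotely and print the queue location.
import os
import requests

JENKINS_URL = os.environ.get("JENKINS_URL", "https://jenkins.example.com")  # placeholder
AUTH = (os.environ["JENKINS_USER"], os.environ["JENKINS_API_TOKEN"])        # user + API token

resp = requests.post(
    f"{JENKINS_URL}/job/platform-deploy/buildWithParameters",  # hypothetical job name
    params={"ENVIRONMENT": "staging", "GIT_REF": "main"},
    auth=AUTH,
    timeout=10,
)
resp.raise_for_status()
print("queued:", resp.headers.get("Location", "<no queue URL returned>"))
```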

Required Skills:

  • 3+ years of hands-on experience in AWS infrastructure provisioning using Terraform.
  • Strong Linux administration and troubleshooting skills.
  • Experience managing Kubernetes clusters.
  • Basic experience with CI/CD tools like Jenkins and GitHub.
  • Good communication skills and a positive, team-oriented attitude.

Preferred:

  • AWS Certification (e.g., Solutions Architect, DevOps Engineer).
  • Exposure to Agile and DevOps practices.
  • Experience with monitoring and logging tools.


Read more
Deqode

at Deqode

1 recruiter
Apoorva Jain
Posted by Apoorva Jain
Pune
4 - 7 yrs
₹4L - ₹16L / yr
skill iconAmazon Web Services (AWS)
DevOps
skill iconDocker
skill iconKubernetes
skill iconJenkins
+1 more

Job Summary:

We are seeking a highly skilled and proactive DevOps Engineer with 4+ years of experience to join our dynamic team. This role requires strong technical expertise across cloud infrastructure, CI/CD pipelines, container orchestration, and infrastructure as code (IaC). The ideal candidate should also have direct client-facing experience and a proactive approach to managing both internal and external stakeholders.


Key Responsibilities:

  • Collaborate with cross-functional teams and external clients to understand infrastructure requirements and implement DevOps best practices.
  • Design, build, and maintain scalable cloud infrastructure on AWS (EC2, S3, RDS, ECS, etc.).
  • Develop and manage infrastructure using Terraform or CloudFormation.
  • Manage and orchestrate containers using Docker and Kubernetes (EKS).
  • Implement and maintain CI/CD pipelines using Jenkins or GitHub Actions.
  • Write robust automation scripts using Python and Shell scripting.
  • Monitor system performance and availability, and ensure high uptime and reliability.
  • Execute and optimize SQL queries for MSSQL and PostgreSQL databases; a minimal sketch follows after this list.
  • Maintain clear documentation and provide technical support to stakeholders and clients.

Required Skills:

  • Minimum 4+ years of experience in a DevOps or related role.
  • Proven experience in client-facing engagements and communication.
  • Strong knowledge of AWS services – EC2, S3, RDS, ECS, etc.
  • Proficiency in Infrastructure as Code using Terraform or CloudFormation.
  • Hands-on experience with Docker and Kubernetes (EKS).
  • Strong experience in setting up and maintaining CI/CD pipelines with Jenkins or GitHub.
  • Solid understanding of SQL and working experience with MSSQL and PostgreSQL.
  • Proficient in Python and Shell scripting.

Preferred Qualifications:

  • AWS Certifications (e.g., AWS Certified DevOps Engineer) are a plus.
  • Experience working in Agile/Scrum environments.
  • Strong problem-solving and analytical skills.


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Anurag Sinha
Posted by Anurag Sinha
Pune, Mumbai, Navi Mumbai
5 - 8 yrs
Best in industry
Google Cloud Platform (GCP)
Azure Cloud
Terraform
DevOps
  • Looking for someone to manage IaC modules
  • Terraform experience is a must
  • Experience building Terraform modules as part of a central platform team
  • Azure/GCP experience is a must
  • C#/Python/Java coding is good to have

 

Read more
Quantalent AI is hiring for a fast-growing fintech firm

Quantalent AI is hiring for a fast-growing fintech firm

Agency job
via Quantalent AI by Mubashira Sultana
Bengaluru (Bangalore)
6 - 8 yrs
₹30L - ₹41L / yr
DevOps
DNS
skill iconKubernetes
SRE
Terraform
+5 more

Job Title: DevOps - 3


Roles and Responsibilities:


  • Develop a deep understanding of the end-to-end configurations, dependencies, customer requirements, and overall characteristics of the production services as the accountable owner for overall service operations
  • Implement best practices, challenge the status quo, and keep tabs on industry and technical trends, changes, and developments to ensure the team is always striving for best-in-class work
  • Lead incident response efforts, working closely with cross-functional teams to resolve issues quickly and minimize downtime. Implement effective incident management processes and post-incident reviews
  • Participate in on-call rotation responsibilities, ensuring timely identification and resolution of infrastructure issues
  • Possess expertise in designing and implementing capacity plans, accurately estimating costs and efforts for infrastructure needs.
  • Systems and Infrastructure maintenance and ownership for production environments, with a continued focus on improving efficiencies, availability, and supportability through automation and well defined runbooks
  • Provide mentorship and guidance to a team of DevOps engineers, fostering a collaborative and high-performing work environment. Mentor team members in best practices, technologies, and methodologies.
  • Design for Reliability - architect and implement solutions that keep the infrastructure running with always-on availability and ensure a high uptime SLA for the infrastructure
  • Manage individual project priorities, deadlines, and deliverables related to your technical expertise and assigned domains
  • Collaborate with Product & Information Security teams to ensure the integrity and security of Infrastructure and applications. Implement security best practices and compliance standards.


Must Haves

  • 5-8 years of experience as a DevOps / SRE / Platform Engineer.
  • Strong expertise in automating infrastructure provisioning and configuration using tools like Ansible, Packer, Terraform, Docker, Helm charts, etc.
  • Strong skills in network services such as DNS, TLS/SSL, HTTP, etc. (a minimal TLS check sketch follows after this list).
  • Expertise in managing large-scale cloud infrastructure (preferably AWS and Oracle).
  • Expertise in managing production-grade Kubernetes clusters.
  • Experience in scripting using programming languages like Bash, Python, etc.
  • Expertise in centralized logging systems, metrics, and tooling frameworks such as ELK, Prometheus/VictoriaMetrics, and Grafana.
  • Experience in managing and building high-scale API gateways, service meshes, etc.
  • Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
  • Working knowledge of a backend programming language.
  • Deep knowledge of and experience with Unix/Linux operating system internals (e.g., filesystems, user management).
  • A working knowledge and deep understanding of cloud security concepts.
  • Proven track record of driving results and delivering high-quality solutions in a fast-paced environment.
  • Demonstrated ability to communicate clearly with both technical and non-technical project stakeholders, with the ability to work effectively in a cross-functional team environment.
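
Touching the DNS/TLS skills above, a minimal sketch (standard library only) that resolves a host and reports the days remaining on its TLS certificate; the hostnames are placeholders:

```python
# Hedged sketch: resolve a hostname and check how many days remain on its TLS cert.
import socket
import ssl
from datetime import datetime, timezone

def cert_days_remaining(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

for hostname in ("api.example.com", "www.example.com"):  # placeholder hosts
    addr = socket.gethostbyname(hostname)                # basic DNS lookup
    print(hostname, addr, cert_days_remaining(hostname), "days left")
```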


Read more
A modern configuration management platform based on advanced

A modern configuration management platform based on advanced

Agency job
via Scaling Theory by Keerthana Prabkharan
Bengaluru (Bangalore)
3 - 5 yrs
₹25L - ₹100L / yr
DevOps
skill iconKubernetes
Terraform
Ansible
skill iconDocker
+1 more

Key Responsibilities:

Kubernetes Management:

Deploy, configure, and maintain Kubernetes clusters on AKS, EKS, GKE, and OKE.

Troubleshoot and resolve issues related to cluster performance and availability.

Database Migration:

 Plan and execute database migration strategies across multicloud environments, ensuring data integrity and minimal downtime.

Collaborate with database teams to optimize data flow and management.

Coding and Development:

 Develop, test, and optimize code with a focus on enhancing algorithms and data structures for system performance.

Implement best coding practices and contribute to code reviews.

Cross-Platform Integration:

  Facilitate seamless integration of services across different cloud providers to enhance interoperability.

Collaborate with development teams to ensure consistent application performance across environments.

Performance Optimization:

  Monitor system performance metrics, identify bottlenecks, and implement effective solutions to optimize resource utilization.

Conduct regular performance assessments and provide recommendations for improvements.

Experience:

  Minimum of 2+ years of experience in cloud computing, with a strong focus on Kubernetes management across multiple platforms.

Technical Skills:

  Proficient in cloud services and infrastructure, including networking and security considerations.

Strong programming skills in languages such as Python, Go, or Java, with a solid understanding of algorithms and data structures.

Problem-Solving:

 Excellent analytical and troubleshooting skills with a proactive approach to identifying and resolving issues.

Communication:

 Strong verbal and written communication skills, with the ability to collaborate effectively with cross-functional teams.

Preferred Skills:

- Familiarity with CI/CD tools and practices.

- Experience with container orchestration and management tools.

- Knowledge of microservices architecture and design patterns.


Read more