
50+ Terraform Jobs in India

Apply to 50+ Terraform Jobs on CutShort.io. Find your next job, effortlessly. Browse Terraform Jobs and apply today!

Vivanet

Posted by Ashish Uikey
Pune
15 - 20 yrs
Best in industry
Team leadership
Leadership
DevOps
CI/CD
Jenkins

TEAM LEAD PLATFORM ENGINEERING (F/M)


SUMMARY OF THE ROLE


The Team Lead Platform Engineering is responsible for leading a team of platform engineers and architects in designing, building, and maintaining the infrastructure and tooling that support scalable, secure, and resilient digital platforms. This role ensures the reliability and performance of core systems while driving innovation and operational excellence across the platform landscape.


KEY RESPONSIBILITIES

• Leading and mentoring a team of platform engineers, fostering a high-performance and collaborative culture.

• Overseeing the design, implementation, and maintenance of platform infrastructure (cloud, on-premises, hybrid).

• Driving automation, standardization, and continuous improvement across platform operations.

• Collaborating with development, security, and operations teams to ensure platform alignment with business and technical requirements.

• Managing platform lifecycle activities including upgrades, patching, and capacity planning.

• Ensuring compliance with security, governance, and regulatory standards.

• Monitoring platform health, performance, and availability, and coordinating incident response.

• Reporting on team performance, platform metrics, and strategic initiatives to senior leadership.

• Documenting platform architecture, operational procedures, and best practices.
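
The capacity-planning responsibility above often comes down to simple headroom arithmetic. A minimal sketch in Python; the function name, capacity figures, and linear-growth assumption are illustrative, not part of the role:

```python
def headroom(capacity, used, growth_per_month):
    """Months until `used` (growing linearly) exhausts `capacity`.

    All quantities share one unit (e.g., TB of storage or vCPUs).
    """
    free = capacity - used
    if growth_per_month <= 0:
        return float("inf")  # no growth: capacity never runs out
    return free / growth_per_month

# 100 units of capacity, 70 in use, growing by 5 per month -> 6 months left.
months = headroom(capacity=100, used=70, growth_per_month=5)
```

In practice the inputs would come from a monitoring system rather than constants, and growth is rarely perfectly linear.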


PROFILE

Academic background & Experience


• Bachelor’s or Master’s degree in Computer Science, Engineering, or related fields.

• 10+ years of experience in DevOps, platform engineering, or system administration roles.

• Proven experience in a leadership or team lead capacity.

• Extensive hands-on experience with DevOps tools (e.g., Jenkins, GitHub, GitHub Actions, Jira, Confluence, Docker, Kubernetes, Ansible, Terraform).

• Experience managing cloud infrastructure (e.g., AWS, Azure, GCP) and hybrid environments.

• Track record of implementing automation and supporting production systems.


Behavioral Capabilities

• Strong leadership and team-building skills.

• Strategic thinker with a hands-on approach to problem-solving.

• Proactive and results-driven, with a focus on continuous improvement.

• Effective communicator and collaborator across technical and business domains.

• Ability to manage multiple priorities and perform under pressure.

• Commitment to reliability, quality, and customer satisfaction.


Skills

• Strong understanding of platform engineering principles, cloud architecture, and automation.

• Proficiency in scripting and programming languages (e.g., Python, Bash, PowerShell).

• Knowledge of networking, security, and system administration (Linux/Windows).

• Experience with CI/CD pipelines and DevOps practices.

• Ability to troubleshoot complex systems and lead root cause analysis.

• Excellent documentation, communication, and stakeholder engagement skills.


Why Join Us?

• Global Reach: Influence DevOps strategy across continents and business units.

• Technical Challenge: Work on complex identity scenarios in a hybrid, high-scale environment.

• Supportive Culture: Join a team that values innovation, transparency, and continuous learning.

Vivanet

Posted by Ashish Uikey
Pune
10 - 20 yrs
Best in industry
DevOps
Leadership
Team leadership
CI/CD
Jenkins

TEAM LEAD DEVOPS (F/M)


SUMMARY OF THE ROLE


The Team Lead DevOps is responsible for leading a team of DevOps engineers in designing, implementing, and maintaining the organization’s DevOps platforms and practices. This role ensures the delivery of reliable, scalable, and secure infrastructure and automation solutions that support continuous integration, deployment, and operations.


KEY RESPONSIBILITIES

• Leading and mentoring a team of DevOps engineers, fostering a culture of collaboration and innovation, across platforms such as DevOps Forge, Integration as a Service, Global Secure Transfer as a Service, and Global API Management.

• Overseeing the development and maintenance of CI/CD pipelines and infrastructure automation.

• Driving the adoption of DevOps best practices, tools, and methodologies across teams.

• Collaborating with software development, QA, and operations teams to streamline workflows.

• Ensuring compliance with governance, security, and regulatory requirements.

• Reporting on team performance, project progress, and platform metrics to senior management.

• Documenting architecture, processes, and operational procedures.


PROFILE

Academic background & Experience

• Bachelor’s or Master’s degree in Computer Science, Engineering, or related fields.

• 5+ years of experience in DevOps, platform engineering, or system administration roles.

• Proven experience in a leadership or team lead capacity.

• Extensive hands-on experience with DevOps tools (e.g., Jenkins, GitHub, GitHub Actions, Jira, Confluence, Docker, Kubernetes, Ansible, Terraform).

• Experience managing cloud infrastructure (e.g., AWS, Azure, GCP) and hybrid environments.

• Track record of implementing automation and supporting production systems.


Behavioral Capabilities

• Strong leadership and team-building skills.

• Analytical and detail-oriented with excellent problem-solving abilities.

• Proactive and self-motivated, with a focus on continuous improvement.

• Effective communicator and collaborator across technical and business teams.

• Ability to manage multiple priorities and perform under pressure.

• Commitment to reliability, quality, and customer satisfaction.


Skills

• Strong understanding of DevOps principles, platform engineering, and automation.

• Proficiency in scripting languages (e.g., Python, Bash, PowerShell).

• Knowledge of networking, security, and system administration (Linux/Windows).

• Experience with containerization, orchestration, and infrastructure as code.

• Familiarity with monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack).

• Excellent documentation, communication, and stakeholder management skills.


Why Join Us?

• Global Reach: Influence DevOps strategy across continents and business units.

• Technical Challenge: Work on complex identity scenarios in a hybrid, high-scale environment.

• Supportive Culture: Join a team that values innovation, transparency, and continuous learning.

Quantiphi

Posted by Nikita Sinha
Bengaluru (Bangalore)
6 - 12 yrs
Up to ₹40L / yr (varies)
Windows Azure
Kubernetes
Terraform
DevOps

Role & Responsibilities

  • Design, develop, and deliver automation solutions to enhance platform functionality and reliability.
  • Deploy, manage, and maintain Azure cloud infrastructure ensuring high availability, scalability, and security.
  • Champion and implement Infrastructure as Code (IaC) practices using Terraform.
  • Build and maintain containerized environments using Kubernetes (AKS).
  • Develop self-service, self-healing, monitoring, and alerting systems for cloud platforms.
  • Automate development, testing, and deployment workflows using CI/CD pipelines.
  • Integrate DevOps tools such as Git, Jenkins/Azure DevOps, SonarQube, Artifactory, and Docker to streamline delivery pipelines.
  • Ensure platform observability through monitoring, logging, and alerting frameworks.
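
The self-healing and alerting responsibilities above typically rest on retry-with-backoff health probes. A minimal standard-library Python sketch, with a simulated probe standing in for a real HTTP readiness check; all names are illustrative and not tied to any specific Azure service:

```python
import time

def check_with_retries(probe, max_attempts=3, base_delay=0.01):
    """Call a health probe, retrying with exponential backoff.

    Returns (healthy, attempts_used). `probe` is any zero-argument
    callable returning True/False -- a stand-in for an HTTP check.
    """
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        if probe():
            return True, attempt
        if attempt < max_attempts:
            time.sleep(delay)   # back off before the next try
            delay *= 2          # exponential backoff
    return False, max_attempts

# Simulated flaky service: fails twice, then recovers.
calls = {"n": 0}
def flaky_probe():
    calls["n"] += 1
    return calls["n"] >= 3

healthy, attempts = check_with_retries(flaky_probe)
```

A real self-healing loop would pair this with a remediation action (e.g., restarting a pod) once the probe stays unhealthy past the retry budget.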

Requirements

  • 6+ years of experience in Cloud / DevOps / Platform Engineering roles.
  • Strong hands-on experience with Microsoft Azure cloud infrastructure.
  • Experience working with Azure services such as compute, networking, storage, identity, and messaging services.
  • Expertise in container orchestration using Kubernetes (AKS).
  • Strong experience implementing Infrastructure as Code using Terraform.
  • Familiarity with cloud-native and microservices architecture patterns.
  • Experience with relational and NoSQL databases such as PostgreSQL or Cassandra.

Additional Skills

  • Strong Linux administration and troubleshooting skills.
  • Programming or scripting experience in Bash, Python, Java, or similar languages.
  • Hands-on experience with CI/CD tools such as Jenkins, Git, Maven, or Azure DevOps.
  • Experience managing multi-region or high-availability cloud environments.
  • Familiarity with Agile / Scrum / DevOps practices and collaboration tools.


NeoGenCode Technologies Pvt Ltd
Posted by Akshay Patil
Bengaluru (Bangalore)
4 - 10 yrs
₹10L - ₹30L / yr
Python
SQL
Spark
Amazon Web Services (AWS)
Amazon S3

Job Title : AWS Data Engineer

Experience : 4+ Years

Location : Bengaluru (HSR – Hybrid, 3 Days WFO)

Notice Period : Immediate Joiner


💡 Role Overview :

We are looking for a skilled AWS Data Engineer to design, build, and scale modern data platforms. The role involves working with AWS-native services, Python, Spark, and DBT to deliver secure, scalable, and high-performance data solutions in an Agile environment.


🔥 Mandatory Skills :

Python, SQL, Spark, AWS (S3, Glue, EMR, Redshift, Athena, Lambda), DBT, ETL/ELT pipeline development, Airflow/Step Functions, Data Lake (Parquet/ORC/Iceberg), Terraform & CI/CD, Data Governance & Security


🚀 Key Responsibilities :

  • Design, build, and optimize ETL/ELT pipelines using Python, DBT, and AWS services
  • Develop and manage scalable data lakes on S3 using formats like Parquet, ORC, and Iceberg
  • Build end-to-end data solutions using Glue, EMR, Lambda, Redshift, and Athena
  • Implement data governance, security, and metadata management using Glue Data Catalog, Lake Formation, IAM, and KMS
  • Orchestrate workflows using Airflow, Step Functions, or AWS-native tools
  • Ensure reliability and automation via CloudWatch, CloudTrail, CodePipeline, and Terraform
  • Collaborate with data analysts and data scientists to deliver actionable insights
  • Work in an Agile environment to deliver high-quality data solutions

✅ Mandatory Skills :

  • Strong Python (including AWS SDKs), SQL, Spark
  • Hands-on experience with AWS data stack (S3, Glue, EMR, Redshift, Athena, Lambda)
  • Experience with DBT and ETL/ELT pipeline development
  • Workflow orchestration using Airflow / Step Functions
  • Knowledge of data lake formats (Parquet, ORC, Iceberg)
  • Exposure to DevOps practices (Terraform, CI/CD)
  • Strong understanding of data governance and security best practices
  • Minimum 4–7 years in Data Engineering (3+ years on AWS)
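
As a small illustration of the date-partitioned data-lake layout named in the skills above (Parquet files under dt= prefixes on S3), a pure-Python sketch; the bucket and table names are invented:

```python
from datetime import date

def partition_key(bucket, table, dt, part):
    """Build an S3-style object key for a date-partitioned table.

    Layout: s3://<bucket>/<table>/dt=<ISO date>/part-<NNNNN>.parquet
    """
    return f"s3://{bucket}/{table}/dt={dt.isoformat()}/part-{part:05d}.parquet"

key = partition_key("example-lake", "orders", date(2024, 1, 15), 0)
```

Partitioning on a date column like this lets engines such as Athena or Spark prune whole prefixes at query time instead of scanning the full table.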

➕ Good to Have :

  • Understanding of Data Mesh architecture
  • Experience with platforms like Data.World
  • Exposure to Hadoop / HDFS ecosystems

🤝 What We’re Looking For :

  • Strong problem-solving and analytical skills
  • Ability to work in a collaborative, cross-functional environment
  • Good communication and stakeholder management skills
  • Self-driven and adaptable to fast-paced environments

📝 Interview Process :

  1. Online Assessment
  2. Technical Interview
  3. Fitment Round
  4. Client Round
Saysoft Tech Services
Bengaluru (Bangalore)
7 - 10 yrs
₹30L - ₹35L / yr
Java
Amazon Web Services (AWS)
Kubernetes
Terraform
DevOps

Required Skills

  • 8+ years of DevOps / Cloud Engineering experience
  • Strong hands-on experience with AWS services (EC2, S3, RDS, IAM, VPC, etc.)
  • Expertise in Kubernetes (deployment, scaling, cluster management)
  • Strong experience in PostgreSQL and AWS RDS administration
  • Proficiency in Terraform for infrastructure automation
  • Experience building and maintaining CI/CD pipelines (Jenkins, GitLab CI, etc.)
  • Strong knowledge of Java (mandatory) and application deployment lifecycle
  • Experience with Docker and containerization
  • Solid understanding of networking, security, and system architecture
  • Strong troubleshooting and problem-solving skills


Bengaluru (Bangalore)
8 - 15 yrs
₹25L - ₹40L / yr
Java
Spring Boot
NodeJS (Node.js)
Microservices
Architecture

Job Title: Consultant – Enterprise Application Development

Location: Bengaluru (Hybrid / On-site)

Engagement: Full-Time

Experience: 10 – 15 years preferred


About Us: Introducing VTT, a comprehensive mobility service provider catering to diverse multinational sectors such as IT/ITES, KPO/BPO, Financial, Pharma, and more across Indian cities. Our “Managed Mobility Program” includes Fleet Management, Technology, Resource Management, Car Rentals, Logistics, and Special Services (ambulance and PWD vehicles). Trusted by Fortune companies such as Cisco, Morgan Stanley, Wells Fargo, Google, PWC, and others, we pride ourselves on leveraging expertise and cutting-edge technology for safe, efficient, and uninterrupted service delivery. With a commitment to excellence, we ensure best-in-class standards for all our clients. The trip to school is now timely, comfy, and secure! Our well-maintained fleet is here to enrich your child’s commute, keeping students punctual and safe thanks to GPS tracking paired with well-trained drivers. Our routes are carefully planned, our drivers attentive, and everything hassle-free.


Role Overview

We are looking for a seasoned Consultant with comprehensive expertise in enterprise-level application development across backend, frontend, mobile, DevOps, and cloud. The role demands a strong architectural mindset combined with hands-on execution. The Consultant will also play a critical role in understanding the current system architecture end-to-end, driving technical improvements, building the tech team foundation, and establishing structured technical documentation.


Key Responsibilities

• Understand the complete architecture of the existing systems, including web, mobile, backend services, and cloud environment.

• Provide hands-on leadership across backend, frontend, mobile, DevOps, and cloud infrastructure.

• Architect and optimize enterprise-grade applications for scalability, security, performance, and reliability.

• Conduct technical due diligence on current systems and propose improvements or refactoring plans.

• Build the foundation for the internal engineering team including hiring support, role definitions, and best-practice processes.

• Drive engineering workflows including coding standards, branching strategy, CI/CD, monitoring, and release management.

• Create comprehensive technical documentation covering system architecture, API specs, deployment playbooks, and SOPs.

• Review code and provide mentorship to engineering resources.

• Coordinate with product and business teams to translate requirements into technical design and actionable development roadmap.

• Troubleshoot and resolve deep-stack issues during development or production.


Technical Expertise Required


Backend


• Java / Spring Boot

• Node.js

• Microservices architecture

• REST / GraphQL


Frontend


• React.js

• Responsive UI, component-based architecture, state management


Mobile


• Flutter

• React Native


Cloud & DevOps


• AWS (ECS / EKS / EC2 / RDS / Lambda / S3 / IAM / CloudWatch etc.)

• CI/CD pipelines (GitHub Actions / Jenkins / GitLab CI or equivalent)

• Docker / Kubernetes

• Infrastructure-as-code (Terraform / CloudFormation)


Database


• MongoDB

• Knowledge of PostgreSQL / MySQL is an added advantage


Professional Attributes


• Strong architectural thinking with the ability to simplify complex systems.

• Excellent communication and stakeholder management skills.

• Ability to work independently without constant supervision.

• Capability to mentor, lead, and build an engineering team from scratch.

• Process-driven mindset with a focus on best practices and documentation.


Deliverables


• Architectural understanding and documentation of current systems.

• Recommendations and implementation plan for system upgrades or restructuring.

• Establishment of core engineering processes and standards.

• Hiring support and technical evaluation of developers.

Incubyte

Posted by Titiksha Singh
Remote only
5.5 - 11 yrs
Best in industry
Python
DevOps
Kubernetes
React.js
Terraform

About Us


We believe the future of software development is AI-native — where engineers operate at a higher level of abstraction and quality remains non-negotiable. 

Incubyte is a software craft consultancy where the “how” of building software matters as much as the “what”.  

We partner with companies of all sizes, from helping enterprises build, scale, and modernize to helping early-stage founders bring their ideas to life.

Our engineers operate in an AI-native development model, using AI as a collaborator across the SDLC to accelerate development while upholding the discipline of software craftsmanship. Guided by Software Craftsmanship and Extreme Programming practices, we build reliable, maintainable, and scalable systems with speed, without compromising quality. If this way of building software resonates with you, we’d like to talk. 


Our Guiding Principles 

These principles define how we work at Incubyte. They are non-negotiable. 

Relentless Pursuit of Quality with Pragmatism 

  We build high-quality systems without losing sight of delivery. 

Extreme Ownership 

  We take responsibility end-to-end for decisions, execution, and outcomes. 

Proactive Collaboration 

  We collaborate closely, challenge each other, and solve problems together. 

Active Pursuit of Mastery 

  We continuously improve our craft and raise our bar. 

Invite, Give, and Act on Feedback

  We seek, give, and act on feedback to get better every day.

Ensuring Client Success

  We act as trusted partners and focus on real outcomes, not just output.


Job Description

This is a remote position.


Experience Level​

This role is ideal for engineers with 3–15 years of experience and a strong background in building secure, scalable platforms.

We are looking for hands-on DevOps and Backend Engineers with real-world experience in handling production incidents, distributed systems, and modern infrastructure challenges.


What You’ll Do as a Software Craftsperson

  • Design and document real-world DevOps and backend scenarios based on production incidents such as outages, scaling challenges, and secure deployments
  • Translate real engineering experiences into benchmark tasks that contribute to training next-generation AI systems
  • Contribute to building secure, scalable, Kubernetes-native architectures across modern infrastructure environments
  • Work across critical engineering domains including CI/CD pipelines, observability, identity & access management, infrastructure-as-code, and backend services
  • Collaborate with internal teams to design and simulate realistic engineering workflows and system behaviors
  • Apply practical engineering judgment to model distributed systems challenges and improve system resilience and reliability


Requirements


What You’ll Bring

5–15 years of experience in DevOps and Backend Engineering with a strong foundation in building secure, scalable systems.

Strong hands-on expertise in DevOps and backend technologies including:

  • Kubernetes, Terraform, and CI/CD pipelines
  • Tools such as k9s, k3s (GitLab CI preferred)
  • Backend technologies such as Go, Python, or Java
  • Experience with Docker, gRPC, and Kubernetes-native services

Demonstrated experience working with secure, offline or air-gapped deployments (highly preferred)

Familiarity with distributed systems and backend architecture, with exposure to ML or distributed pipelines being a plus.

Hands-on experience across multiple core functional areas, with exposure to at least five of the following:

  • Identity & Access Management
  • Observability (Prometheus + Grafana)
  • CI/CD Pipelines
  • Keycloak
  • GitLab CI
  • Terraform OSS
  • Kubernetes ecosystem tools

Strong problem-solving ability with real-world experience in handling production systems, incidents, and infrastructure challenges

Ability to work across multiple layers of the stack, from infrastructure to backend services, while ensuring scalability, reliability, and security


Benefits


Life at Incubyte

We are a remote-first company with structured flexibility. Teams commit to shared rhythms during core hours, ensuring smooth collaboration while maintaining autonomy. Twice a year, we come together in person for a co-working sprint and once a year for a retreat - with all travel expenses covered.

Our environment is built for crafters: experimenting with real-world systems, solving complex infrastructure challenges, and contributing to cutting-edge AI initiatives. We are all lifelong learners, and our work is our passion.

Perks

Dedicated learning & development budget

Sponsorship for conference talks

Comprehensive medical & term insurance

Employee-friendly leave policies

Home Office fund



Redpin

Posted by Lakshman Dornala
Hyderabad
5 - 10 yrs
Best in industry
Python
Data Transformation Tool (DBT)
Apache Airflow
Terraform
Kubernetes

Senior Data (Platform) Engineer 

Location: Hyderabad | Department: Technology, Data 


About the Role 


Are you passionate about building reliable, scalable data platforms that make analytics and AI development easier? As a Senior Data Platform Engineer, you will be hands-on in building, operating, and improving our core data platform and AI/LLM enablement tooling. 


You’ll focus on infrastructure, orchestration, CI/CD, and reusable frameworks that support analytics engineering and AI-driven use cases. You’ll work closely with Analytics Engineering and Insights teams and support other departments as they integrate with our data systems. 


What You'll Do 


Data Platform & Infrastructure

  • Build, deploy, and operate cloud infrastructure for data and AI workloads using Infrastructure as Code (Terraform).
  • Provision and manage cloud resources across development, staging, and production environments.
  • Develop and maintain CI/CD pipelines for data transformations, orchestration workflows, and platform services.
  • Operate and scale containerized workloads on Kubernetes, including Airflow, internal APIs, and AI/LLM services.
  • Troubleshoot and resolve infrastructure, pipeline, and orchestration failures to ensure platform reliability.
  • Maintain and support existing ML services and pipelines to ensure stability and reliability (No expectation to design or develop new ML models or training pipelines).
  • Continuously monitor and optimize platform performance and cost. 

Framework, Tooling and Enablement

  • Build and maintain reusable frameworks and patterns for dbt, Airflow, cloud data warehouses (Snowflake, BigQuery, Redshift, Databricks, etc.), and internal data and AI APIs.
  • Build and support infrastructure and pipelines for AI/LLM-based use cases, including orchestration, integration, and serving.
  • Improve developer experience for Analytics Engineering and Insights teams by reducing friction in local development, deployments, and production workflows.
  • Create and maintain technical documentation and examples to support self-service analytics and data development. 


What You’ll Need 

Technical Skills & Experience

  • 5+ years of experience in data engineering, platform engineering, or similar hands-on roles.
  • Strong programming skills in Python and SQL.
  • Hands-on experience with:
    • Terraform
    • Airflow
    • dbt
    • Kubernetes
    • Cloud platforms (AWS, Google Cloud, or Microsoft Azure)
    • CI/CD pipelines (GitHub Actions, GitLab CI, CircleCI, etc.)
    • Cloud data warehouses (Snowflake, BigQuery, Redshift, Databricks, etc.)
  • Strong understanding of analytical data models and how analytics teams consume data.
  • Experience integrating and operating LLM-based pipelines and services (not model training). 


Soft Skills & Collaboration

  • Strong problem-solving skills and ability to debug complex platform issues.
  • Strong preference for declarative development, with the ability to clearly separate what a system should do from how it is implemented.
  • Clear communicator who can work effectively with both technical and non-technical stakeholders.
  • Pragmatic, ownership-driven mindset with a focus on reliability and simplicity. 
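
The declarative preference above (keeping "what" a system should do separate from "how" it is executed) can be shown with a toy pipeline: the steps are plain data, and a small runner supplies the execution. All names and operations here are illustrative:

```python
# Declarative spec: *what* the pipeline does, expressed as plain data.
PIPELINE = [
    {"op": "filter", "arg": lambda r: r["amount"] > 0},
    {"op": "map", "arg": lambda r: {**r, "amount": r["amount"] * 2}},
]

def run(pipeline, rows):
    """Imperative runner: *how* each declared step is executed."""
    for step in pipeline:
        if step["op"] == "filter":
            rows = [r for r in rows if step["arg"](r)]
        elif step["op"] == "map":
            rows = [step["arg"](r) for r in rows]
    return rows

result = run(PIPELINE, [{"amount": 5}, {"amount": -1}])
```

The same separation is what Terraform and dbt offer at scale: the spec can be inspected, diffed, and validated without running it, while the runner can change independently.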


Why Join Us? 


We welcome people from all backgrounds who seek the opportunity to help build a future where we connect the dots for international property payments. If you have the curiosity, passion, and collaborative spirit, work with us, and let’s move the world of PropTech forward, together. 

Redpin, Currencies Direct and TorFX are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, colour, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.

EaseMyTrip.com

Posted by Zainab Siddiqui
Noida
2 - 3 yrs
₹3L - ₹5L / yr
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Python
NodeJS (Node.js)
GitHub

Key Responsibilities:

  • ☁️ Manage cloud infrastructure and automation on AWS, Google Cloud (GCP), and Azure.
  • 🖥️ Deploy and maintain Windows Server environments, including Internet Information Services (IIS).
  • 🐧 Administer Linux servers and ensure their security and performance.
  • 🚀 Deploy .NET applications (ASP.Net, MVC, Web API, WCF, etc.) using Jenkins CI/CD pipelines.
  • 🔗 Manage source code repositories using GitLab or GitHub.
  • 📊 Monitor and troubleshoot cloud and on-premises server performance and availability.
  • 🤝 Collaborate with development teams to support application deployments and maintenance.
  • 🔒 Implement security best practices across cloud and server environments.



Required Skills:

  • ☁️ Hands-on experience with AWS, Google Cloud (GCP), and Azure cloud services.
  • 🖥️ Strong understanding of Windows Server administration and IIS.
  • 🐧 Proficiency in Linux server management.
  • 🚀 Experience in deploying .NET applications and working with Jenkins for CI/CD automation.
  • 🔗 Knowledge of version control systems such as GitLab or GitHub.
  • 🛠️ Good troubleshooting skills and ability to resolve system issues efficiently.
  • 📝 Strong documentation and communication skills.



Preferred Skills:

  • 🖥️ Experience with scripting languages (PowerShell, Bash, or Python) for automation.
  • 📦 Knowledge of containerization technologies (Docker, Kubernetes) is a plus.
  • 🔒 Understanding of networking concepts, firewalls, and security best practices.


Inflectionio

Posted by Renu Philip
Bengaluru (Bangalore)
3 - 5 yrs
₹20L - ₹30L / yr
Amazon Web Services (AWS)
Kubernetes
Jenkins
Chef
CI/CD

We are looking for a DevOps Engineer with hands-on experience in managing production infrastructure using AWS, Kubernetes, and Terraform. The ideal candidate will have exposure to CI/CD tools and queueing systems, along with a strong ability to automate and optimize workflows.


Responsibilities: 

* Manage and optimize production infrastructure on AWS, ensuring scalability and reliability.

* Deploy and orchestrate containerized applications using Kubernetes.

* Implement and maintain infrastructure as code (IaC) using Terraform.

* Set up and manage CI/CD pipelines using tools like Jenkins or Chef to streamline deployment processes.

* Troubleshoot and resolve infrastructure issues to ensure high availability and performance.

* Collaborate with cross-functional teams to define technical requirements and deliver solutions.

* Nice-to-have: Manage queueing systems like Amazon SQS, Kafka, or RabbitMQ.



Requirements: 

* 4+ years of experience with AWS, including practical exposure to its services in production environments.

* Demonstrated expertise in Kubernetes for container orchestration.

* Proficiency in using Terraform for managing infrastructure as code.

* Exposure to at least one CI/CD tool, such as Jenkins or Chef.

* Nice-to-have: Experience managing queueing systems like SQS, Kafka, or RabbitMQ.
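
The queueing systems listed above (SQS, Kafka, RabbitMQ) all share the same producer/consumer decoupling, which can be sketched in-process with the standard library's queue.Queue; this is a stand-in for the pattern, not any broker's API:

```python
import queue
import threading

# In-memory stand-in for a message broker: producers enqueue work,
# a worker thread drains it independently of the producers.
q = queue.Queue()
processed = []

def worker():
    while True:
        msg = q.get()
        if msg is None:          # sentinel: shut the worker down
            q.task_done()
            break
        processed.append(msg.upper())
        q.task_done()

t = threading.Thread(target=worker)
t.start()
for msg in ["deploy", "rollback"]:
    q.put(msg)
q.put(None)
q.join()                          # block until every message is consumed
t.join()
```

Real brokers add what this sketch lacks: durability, acknowledgements with redelivery, and fan-out across many consumers.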

NonStop io Technologies Pvt Ltd
Posted by Kalyani Wadnere
Pune
4 - 7 yrs
Best in industry
DevOps
Amazon Web Services (AWS)
Terraform
Windows Azure
Google Cloud Platform (GCP)

About NonStop io Technologies:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

We are looking for a skilled and proactive DevOps Engineer to join our growing engineering team. The ideal candidate will have hands-on experience in building, automating, and managing scalable infrastructure and CI/CD pipelines. You will work closely with development, QA, and product teams to ensure reliable deployments, performance, and system security.


Roles and Responsibilities:

● Design, implement, and manage CI/CD pipelines for multiple environments

● Automate infrastructure provisioning using Infrastructure as Code tools

● Manage and optimize cloud infrastructure on AWS, Azure, or GCP

● Monitor system performance, availability, and security

● Implement logging, monitoring, and alerting solutions

● Collaborate with development teams to streamline release processes

● Troubleshoot production issues and ensure high availability

● Implement containerization and orchestration solutions such as Docker and Kubernetes

● Enforce DevOps best practices across the engineering lifecycle

● Ensure security compliance and data protection standards are maintained
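
The monitoring and alerting responsibilities above often reduce to threshold checks over simple metrics. A standard-library Python sketch; the metric, counts, and 5% threshold are made up for illustration:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("monitor")

def check_error_rate(errors, requests, threshold=0.05):
    """Log an alert when the error rate crosses a threshold.

    Returns the computed rate so callers can also export it as a metric.
    """
    rate = errors / requests if requests else 0.0
    if rate > threshold:
        log.warning("error rate %.1f%% above %.1f%% threshold",
                    rate * 100, threshold * 100)
    else:
        log.info("error rate %.1f%% healthy", rate * 100)
    return rate

rate = check_error_rate(errors=12, requests=150)
```

Tools like Prometheus Alertmanager or Datadog monitors implement the same idea declaratively, with windowing and deduplication layered on top.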


Requirements:

● 4 to 7 years of experience in DevOps or Site Reliability Engineering

● Strong experience with cloud platforms such as AWS, Azure, or GCP - Relevant Certifications will be a great advantage

● Hands-on experience with CI/CD tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps

● Experience working in microservices architecture

● Exposure to DevSecOps practices

● Experience in cost optimization and performance tuning in cloud environments

● Experience with Infrastructure as Code tools such as Terraform, CloudFormation, or ARM

● Strong knowledge of containerization using Docker

● Experience with Kubernetes in production environments

● Good understanding of Linux systems and shell scripting

● Experience with monitoring tools such as Prometheus, Grafana, ELK, or Datadog

● Strong troubleshooting and debugging skills

● Understanding of networking concepts and security best practices


Why Join Us?

● Opportunity to work on a cutting-edge healthcare product

● A collaborative and learning-driven environment

● Exposure to AI and software engineering innovations

● Excellent work ethic and culture


If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

Recruiting Bond

at Recruiting Bond

2 candid answers
Pavan Kumar
Posted by Pavan Kumar
Bengaluru (Bangalore), Mumbai
10 - 16 yrs
₹75L - ₹130L / yr
Distributed Systems
Microservices
Enterprise architecture
System Design & Architecture
Event-Driven Architecture
+29 more

🚨 We’re Building a “Top 1% Engineering Org”


We’re building a high-talent-density, AI-first R&D organization from scratch — inside a publicly listed company undergoing a full-scale transformation.

Think:

→ Rewriting legacy systems into AI-native architectures

→ Embedding LLMs + Agentic AI into core workflows

→ Reimagining platforms, infra, and data systems for the next decade

This is the kind of shift you’d expect from Google, Microsoft, or Meta —

Except you get to build it from day 0 → scale it globally.


About the Role / Team

We are building a next-generation AI-first R&D organization in Bengaluru, focused on solving complex problems across LLMs, Agentic AI systems, distributed computing, and enterprise-scale architectures.


This initiative is part of a publicly listed global company investing heavily in AI-driven transformation, re-architecting its platforms into intelligent, autonomous systems powered by large language models, workflows, and decision engines.


You will be working on:

  • Agentic AI systems & LLM-powered workflows
  • Distributed, scalable backend systems
  • Enterprise-grade AI platforms
  • Automation-first engineering environments


🚀 The Mandate

Own and evolve the technical backbone of an AI-first enterprise platform.


You will define architecture across LLM-powered systems, distributed services, and data platforms — and lead critical transformations from legacy → AI-native systems.


🧩 What You’ll Do

  • Architect large-scale distributed systems powering AI-driven workflows
  • Lead 0→1 and 1→N platform builds (LLM integrations, agentic systems, orchestration layers)
  • Redesign legacy systems into scalable, modular, AI-native architectures
  • Drive system design excellence across teams (APIs, infra, observability, reliability)
  • Make high-stakes decisions on trade-offs (latency, cost, scalability, model performance)
  • Mentor senior engineers and influence engineering culture/org standards
  • Partner with product, data, and leadership on long-term technical strategy


🧠 What We’re Looking For

  • Proven track record building high-scale backend or platform systems
  • Deep expertise in distributed systems, microservices, and cloud (AWS/GCP/Azure)
  • Strong exposure to data systems, infrastructure, and real-time architectures
  • Experience or strong interest in LLMs, GenAI, or AI system design
  • Exceptional system design, abstraction, and problem-solving ability
  • Strong coding skills in Python, Java, Go, or Node.js
  • Solid understanding of data structures, backend architecture, and scalable API and service design
  • Familiarity or curiosity around async systems and event-driven design
  • Ability to solve hard system problems (latency, scale, reliability)
  • Ability to drive cross-team technical decisions and standards
  • Mentorship and technical leadership: guide senior engineers and shape engineering culture and org-wide architecture
  • High-ownership mindset: you think in terms of systems, not tickets


Nice to Have

  • Experience integrating LLMs, vector databases, or AI pipelines
  • Contributions to architecture at scale
  • Experience with Agentic AI / LLM orchestration frameworks
  • Background in product engineering or platform companies
  • Exposure to global-scale systems (millions of users / high throughput)


🔥 What Sets You Apart

  • Built platforms used by millions of users / high-throughput systems
  • Experience with event-driven systems, stream processing, or infra platforms
  • Prior work on AI/ML platforms, model serving, or intelligent systems
SAAS Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹25L / yr
Amazon Web Services (AWS)
NodeJS (Node.js)
RESTful APIs
NOSQL Databases
Systems design
+39 more

Job Details

Job Title: Senior Backend Engineer

Industry: SAAS

Function – Information Technology

Experience Required: 5-8 years

Working Days: 6 days a week (5 days in office, Saturdays WFH)

Employment Type: Full Time

Job Location: Bangalore

CTC Range: Best in Industry

 

Preferred Skills: AWS, NodeJS, RESTful APIs, NoSQL

 

Criteria

· Minimum 5+ years in backend engineering with strong system design expertise

· Experience building scalable systems from scratch

· Expert-level proficiency in Node.js

· Deep understanding of distributed systems

· Strong NoSQL design skills

· Hands-on AWS cloud experience

· Proven leadership and mentoring capability

· Preferred candidates from SAAS/Software/IT Services based startups or scaleup companies

 

Job Description

The Role:

What You’ll Build:

1. System Architecture & Design

● Architect highly scalable backend systems from the ground up

● Define technology choices: frameworks, databases, queues, caching layers

● Evaluate microservices vs monoliths based on product stage

● Design REST, GraphQL, and real-time WebSocket APIs

● Build event-driven systems for asynchronous processing

● Architect multi-tenant systems with strict data isolation

● Maintain architectural documentation and technical specs

2. Core Backend Services

● Build high-performance APIs for 3D content, XR experiences, analytics, and user interactions

● Create 3D asset processing pipelines for uploads, conversions, and optimization

● Develop distributed job workers for CPU/GPU-intensive tasks

● Build authentication/authorization systems (RBAC)

● Implement billing, subscription, and usage metering

● Build secure webhook systems and third-party integration APIs

● Create real-time collaboration features via WebSockets/SSE

3. Data Architecture & Databases

● Design scalable schemas for 3D metadata, XR sessions, and analytics

● Model complex product catalogs with variants and hierarchies

● Implement Redis-based caching strategies

● Build search and indexing systems (Elasticsearch/Algolia)

● Architect ETL pipelines and data warehouses

● Implement sharding, partitioning, and replication strategies

● Design backup, restore, and disaster recovery workflows
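
To give a concrete flavor of the Redis-based caching strategy mentioned above, here is a minimal cache-aside sketch with TTL expiry. A real implementation would call a Redis client (GET/SETEX); the class, keys, and TTL values below are hypothetical stand-ins:

```python
import time

# Hypothetical cache-aside sketch: an in-memory TTL cache standing in for a
# Redis layer. Lazy expiry on read keeps the example self-contained.

class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:  # lazy expiry on read
            del self._store[key]
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

def get_product(cache, product_id, db):
    """Cache-aside read: try the cache first, fall back to the 'database'."""
    cached = cache.get(product_id)
    if cached is not None:
        return cached
    row = db[product_id]        # simulated database lookup
    cache.set(product_id, row)  # populate for subsequent reads
    return row

cache = TTLCache(ttl_seconds=0.05)
db = {"sku-1": {"name": "chair", "variants": 3}}
first = get_product(cache, "sku-1", db)   # miss: hits the database
second = get_product(cache, "sku-1", db)  # hit: served from cache
time.sleep(0.06)
expired = cache.get("sku-1")              # TTL elapsed: entry is gone
```

Cache-aside keeps the database authoritative: the cache only ever holds copies with a bounded staleness window equal to the TTL.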

4. Scalability & Performance

● Build systems designed for 10x–100x traffic growth

● Implement load balancing, autoscaling, and distributed processing

● Optimize API response times and database performance

● Implement global CDN delivery for heavy 3D assets

● Build rate limiting, throttling, and backpressure mechanisms

● Optimize storage and retrieval of large 3D files

● Profile and improve CPU, memory, and network performance
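
The rate limiting and backpressure item above is most often implemented as a token bucket. A self-contained sketch under that assumption; capacity and refill rate are illustrative, not production settings:

```python
import time

# Token-bucket sketch of rate limiting: the bucket refills continuously at a
# fixed rate and each request spends one token; an empty bucket means the
# caller should throttle or shed load.

class TokenBucket:
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: throttle or apply backpressure

bucket = TokenBucket(capacity=2, refill_per_sec=1.0)
t0 = time.monotonic()
burst = [bucket.allow(t0) for _ in range(3)]  # only 2 tokens available
later = bucket.allow(t0 + 1.0)                # one token refilled after 1s
```

The capacity bounds burst size while the refill rate bounds sustained throughput, which is why token buckets are the usual primitive behind API rate limits.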

5. Infrastructure & DevOps

● Architect AWS infrastructure (EC2, S3, Lambda, RDS, ElastiCache)

● Build CI/CD pipelines for automated deployments and rollbacks

● Use IaC tools (Terraform/CloudFormation) for infra provisioning

● Set up monitoring, logging, and alerting systems

● Use Docker + Kubernetes for container orchestration

● Implement security best practices for data, networks, and secrets

● Define disaster recovery and business continuity plans

6. Integration & APIs

● Build integrations with Shopify, WooCommerce, Magento

● Design webhook systems for real-time events

● Build SDKs, client libraries, and developer tools

● Integrate payment gateways (Stripe, Razorpay)

● Implement SSO and OAuth for enterprise customers

● Define API versioning and lifecycle/deprecation strategies
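
Secure webhook systems like those above typically sign the raw payload with an HMAC and verify it in constant time, the pattern behind Shopify and Stripe webhook verification. A hedged sketch; the secret value and helper names are hypothetical:

```python
import hashlib
import hmac
import json

# Webhook signing sketch: producer computes HMAC-SHA256 over the raw body,
# consumer recomputes and compares in constant time. The secret here is a
# placeholder; real secrets belong in a vault, not in code.

SECRET = b"whsec_demo_secret"

def sign(payload, secret=SECRET):
    """Producer side: HMAC-SHA256 over the raw request body."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify(payload, signature, secret=SECRET):
    """Consumer side: constant-time comparison to resist timing attacks."""
    expected = sign(payload, secret)
    return hmac.compare_digest(expected, signature)

body = json.dumps({"event": "order.created", "id": 42}).encode()
sig = sign(body)
ok = verify(body, sig)               # valid signature
tampered = verify(body + b" ", sig)  # any byte change invalidates it
```

Signing the raw bytes (not the parsed JSON) matters: re-serialization is not guaranteed to be byte-identical, so verification must happen before parsing.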

7. Data Processing & Analytics

● Build analytics pipelines for engagement, conversions, and XR performance

● Process high-volume event streams at scale

● Build data warehouses for BI and reporting

● Develop real-time dashboards and insights systems

● Implement analytics export pipelines and platform integrations

● Enable A/B testing and experimentation frameworks

● Build personalization and recommendation systems

 

Technical Stack:

1. Backend Languages & Frameworks 

●  Primary: Node.js (Express, NestJS), Python (FastAPI, Django)

●  Secondary: Go, Java/Kotlin (Spring)

●  APIs: REST, GraphQL, gRPC


2. Databases & Storage

● SQL: PostgreSQL, MySQL

● NoSQL: MongoDB, DynamoDB

● Caching: Redis, Memcached

● Search: Elasticsearch, Algolia

● Storage/CDN: AWS S3, CloudFront

● Queues: Kafka, RabbitMQ, AWS SQS

 

3. Cloud & Infrastructure: 

● Cloud: AWS (primary), GCP/Azure (nice to have)

● Compute: EC2, Lambda, ECS, EKS

● Infrastructure: Terraform, CloudFormation

● CI/CD: GitHub Actions, Jenkins, CircleCI

● Containers: Docker, Kubernetes

 

4. Monitoring & Operations 

● Monitoring: Datadog, New Relic, CloudWatch

● Logging: ELK Stack, CloudWatch Logs

● Error Tracking: Sentry, Rollbar

● APM tools

 

5. Security & Auth

● Auth: JWT, OAuth 2.0, SAML

● Secrets: AWS Secrets Manager, Vault

● Security: Encryption (at rest/in transit), TLS/SSL, IAM

 


What We’re Looking For:

1. Must-Haves

● 5+ years in backend engineering with strong system design expertise

● Experience building scalable systems from scratch

● Expert-level proficiency in at least one backend stack (Node, Python, Go, Java)

● Deep understanding of distributed systems and microservices

● Strong SQL/NoSQL design skills with performance optimization

● Hands-on AWS cloud experience

● Ability to write high-quality production code daily

● Experience building and scaling RESTful APIs

● Strong understanding of caching, sharding, horizontal scaling

● Solid security and best-practice implementation experience

● Proven leadership and mentoring capability


2. Highly Desirable

● Experience with large file processing (3D, video, images)

● Background in SaaS, multi-tenancy, or e-commerce

● Experience with real-time systems (WebSockets, streams)

● Knowledge of ML/AI infrastructure

● Experience with HA systems, DR planning

● Familiarity with GraphQL, gRPC, event-driven systems

● DevOps/infrastructure engineering background

● Experience with XR/AR/VR backend systems

● Open-source contributions or technical writing

● Prior senior technical leadership experience

 

Technical Challenges You’ll Solve:

● Designing large-scale 3D asset processing pipelines

● Serving XR content globally with ultra-low latency

● Scaling from thousands to millions of daily requests

● Efficiently handling CPU/GPU-heavy workloads

● Architecting multi-tenancy with complete data isolation

● Managing billions of analytics events at scale

● Building future-proof APIs with backward compatibility

 

Why company:

● Architectural Ownership: Build foundational systems from scratch

● Deep Technical Work: Solve distributed systems and scaling challenges

● Hands-On Impact: Design and code mission-critical infrastructure

● Diverse Problems: APIs, infra, data, ML, XR, asset processing

● Massive Scale Opportunity: Build systems for exponential growth

● Modern Stack and best practices

● Product Impact: Your architecture directly powers millions of users

● Leadership Opportunity: Shape engineering culture and direction

● Learning Environment: Stay at the forefront of backend engineering

● Backed by AWS, Microsoft, Google

 

Location & Work Culture:

● Location: Bengaluru

● Schedule: 6 days a week (5 days in office, Saturdays WFH)

● Culture: Builder mindset, strong ownership, technical excellence

● Team: Small, highly skilled backend and infra team

● Resources: AWS credits, latest tooling, learning budget

 

One2n

at One2n

3 candid answers
Avinash Poojari
Posted by Avinash Poojari
Pune
3 - 6 yrs
₹20L - ₹30L / yr
DevOps
Kubernetes
Docker
Grafana
Prometheus
+2 more

About the role:

We are looking for a Site Reliability Engineer who understands the nuances of production systems. If you care about building and running reliable software systems in production, you'll like working at One2N.

You will primarily work with our startups and mid-size clients. We work on one-to-N problems (hence the name One2N): the proof of concept is already done, and the work revolves around scalability, maintainability, and reliability. In this role, you will be responsible for architecting and optimizing our observability and infrastructure to provide actionable insights into performance and reliability.


Key responsibilities:

  • Conceptualise, think, and build platform engineering solutions with a self-serve model to enable product engineering teams.
  • Provide technical guidance and mentorship to junior engineers.
  • Participate in code reviews and contribute to best practices for development and operations.
  • Design and implement comprehensive monitoring, logging, and alerting solutions to collect, analyze, and visualize data (metrics, logs, traces) from diverse sources.
  • Develop custom monitoring metrics, dashboards, and reports to track key performance indicators (KPIs), detect anomalies, and troubleshoot issues proactively.
  • Improve Developer Experience (DX) to help engineers improve their productivity.
  • Design and implement CI/CD solutions to optimize velocity and shorten the delivery time.
  • Help SRE teams set up on-call rosters and coach them for effective on-call management.
  • Automate repetitive manual tasks across CI/CD pipelines and operations, and drive infrastructure as code (IaC) practices.
  • Stay up-to-date with emerging technologies and industry trends in cloud-native, observability, and platform engineering space.
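
One simple way to "detect anomalies" as described above is a z-score check against a trailing baseline. This is only an illustrative sketch: real alert rules would live in Prometheus or Datadog, and the baseline window and threshold here are made up:

```python
import statistics

# Z-score anomaly sketch: flag a metric sample that deviates more than
# z_threshold standard deviations from a trailing baseline's mean.

def is_anomalous(baseline, sample, z_threshold=3.0):
    """Return True when `sample` lies more than z_threshold sigmas
    from the mean of `baseline`."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)
    if stdev == 0:
        return sample != mean  # flat baseline: any change is anomalous
    return abs(sample - mean) / stdev > z_threshold

baseline_rps = [100, 102, 98, 101, 99]  # recent requests/sec samples
normal = is_anomalous(baseline_rps, 103)  # within noise
spike = is_anomalous(baseline_rps, 160)   # clear outlier
```

A 3-sigma threshold is a common starting point; in practice the window and threshold are tuned per metric to balance false pages against missed incidents.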


About you:

  • 3 to 6 years of professional experience in DevOps practices or software engineering roles, with a focus on Kubernetes on an AWS platform.
  • Expertise in observability and telemetry tools and practices, including hands-on experience with some of Datadog, Honeycomb, ELK, Grafana, and Prometheus.
  • Working knowledge of programming using Golang, Python, Java, or equivalent.
  • Skilled in diagnosing and resolving Linux operating system issues.
  • Strong proficiency in scripting and automation to build monitoring and analytics solutions.
  • Solid understanding of microservices architecture, containerization (Docker, Kubernetes), and cloud-native technologies.
  • Experience with infrastructure as code (IaC) tools such as Terraform, Pulumi.
  • Excellent analytical and problem-solving skills, keen attention to detail, and a passion for continuous improvement.
  • Strong written, communication, and collaboration skills, with the ability to work effectively in a fast-paced, agile environment.


Service based company

Agency job
via Codemind Staffing Solutions by Krishna kumar
Chennai
4 - 7 yrs
₹10L - ₹18L / yr
DevOps
Microsoft Windows Azure
Windows Azure
Docker
Kubernetes
+4 more

Key responsibilities

• Design, build, and maintain robust CI/CD pipelines using Azure DevOps Services (Azure Pipelines) and Git-based workflows.

• Implement and manage infrastructure as code (IaC) using ARM templates, Bicep, and/or Terraform for repeatable environment provisioning.

• Containerize applications (Docker) and manage container orchestration platforms such as AKS (Azure Kubernetes Service).

• Automate build, test, release, and rollback processes; integrate automated testing and quality gates into pipelines.

• Monitor and improve platform reliability and observability using logging and monitoring tools (e.g., Azure Monitor, Application Insights, Prometheus, Grafana).

• Drive platform security and compliance through pipeline controls, secrets management (Key Vault / Vault), and secure configuration practices.

• Implement cost-optimization and governance for Azure resources (tags, policies, budgets).

• Troubleshoot build/release failures, production incidents, and performance bottlenecks; perform root-cause analysis and implement permanent fixes.

• Mentor developers in Git workflows, pipeline authoring, best practices for IaC, and cloud-native design.

• Maintain clear documentation: runbooks, deployment playbooks, architecture diagrams, and pipeline templates. 

Required skills & experience

• 4+ years hands-on experience working with Azure and cloud-native application delivery.

• Deep experience with Azure DevOps (Repos, Pipelines, Artifacts, Boards).

• Strong IaC skills with Terraform, ARM templates, or Bicep.

• Solid experience with CI/CD design and YAML pipeline authoring.

• Practical knowledge of containerization (Docker) and Kubernetes — preferably AKS.

• Scripting skills: PowerShell, Bash, and/or Python for automation.

• Experience with Git workflows (branching strategies, PRs, code reviews).

• Familiarity with configuration management and secrets management (Azure Key Vault, HashiCorp Vault).

• Understanding of networking, identity (Azure AD), and security fundamentals in Azure.

• Strong troubleshooting, debugging, and incident response skills.

• Good collaboration and communication skills; ability to work across teams.

Certification

AZ-400 (Microsoft Certified: DevOps Engineer Expert), AZ-104, AZ-305, or HashiCorp Certified: Terraform Associate.

 


Timble Technologies

at Timble Technologies

1 recruiter
Shefali Gupta
Posted by Shefali Gupta
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
1 - 4 yrs
₹2L - ₹5L / yr
Advanced Linux Admin
Ansible
Terraform
Docker
Jenkins
+7 more

Job Title: DevOps Engineer

Location: Delhi, Arjan Garh

Job Type: Full-Time

IMMEDIATE JOINERS REQUIRED

 

About Us:

Timble is a forward-thinking organization dedicated to leveraging cutting-edge technology to solve real-world problems. Our mission is to drive innovation and create impactful solutions through artificial intelligence and machine learning.


About the Role

We are looking for a high-ownership Senior DevOps Engineer to architect and maintain the mission-critical infrastructure supporting our global algorithmic trading operations. You will be the bridge between development and live trading, ensuring ultra-low-latency performance and near-100% system availability.

Key Responsibilities

  • Infrastructure Architecture: Design scalable, fault-tolerant systems for high-frequency trading environments.
  • Performance Optimization: Tune Linux servers and Python environments for maximum speed and efficiency.
  • Incident Management: Lead real-time response for live trading systems, performing RCA and preventive fixes.
  • Automation & CI/CD: Build and enhance robust pipelines using Docker, Jenkins, and Ansible.
  • Proactive Monitoring: Implement advanced logging and alerting (Prometheus/Grafana) to ensure high uptime.
  • Database Admin: Manage relational databases and write optimized SQL for operational reporting.
  • Mentorship: Guide junior DevOps members and maintain rigorous system documentation.

Technical Requirements

  • OS/Scripting: Advanced Linux Admin and expert-level Python scripting.
  • IaC & Tools: Hands-on experience with Ansible, Terraform, and Docker.
  • CI/CD: Proficiency in Jenkins or GitLab CI.
  • Data: Strong SQL skills with experience in performance tuning.
  • Education: B.Tech/M.Tech in Computer Science or related engineering field.
NeoGenCode Technologies Pvt Ltd
Mumbai
5 - 10 yrs
₹12L - ₹24L / yr
DevOps
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
Kubernetes
+12 more

Job Title : Senior DevOps Engineer (Only Mumbai Candidates)

Experience : 5+ Years

Location : Mumbai (On-site)

Notice Period : Immediate to 15 Days

Interview Process : 1 Internal Round + 1 Client Round


Mandatory Skills :

Multi-Cloud (AWS/GCP/Azure – any two), Kubernetes, Terraform, Helm (writing Helm Charts), CI/CD (GitLab CI/Jenkins/GitHub Actions), GitOps (ArgoCD/FluxCD), Multi-tenant deployments, Stateful microservices on Kubernetes, Enterprise Linux.


Role Overview :

We are looking for a Senior DevOps Engineer to design, build, and manage scalable cloud infrastructure and DevOps pipelines for product-based platforms.

The ideal candidate should have strong experience with Kubernetes, Terraform, Helm Charts, CI/CD, and GitOps practices.


Key Responsibilities :

  • Design and manage scalable cloud infrastructure across AWS/GCP/Azure.
  • Deploy and manage microservices on Kubernetes clusters.
  • Build and maintain Infrastructure as Code using Terraform and Helm.
  • Implement CI/CD pipelines using GitLab CI, Jenkins, or GitHub Actions.
  • Implement GitOps workflows using ArgoCD or FluxCD.
  • Ensure secure, scalable, and reliable DevOps architecture.
  • Implement monitoring and logging using Prometheus, Grafana, or ELK.

Good to Have :

  • Packer, OpenShift/Rancher/K3s, On-prem deployments, PaaS experience, scripting (Bash/Python), Terraform modules.
Tradelab Technologies

at Tradelab Technologies

1 candid answer
Aakanksha Yadav
Posted by Aakanksha Yadav
Mumbai
10 - 15 yrs
₹30L - ₹50L / yr
CI/CD
Amazon Web Services (AWS)
Terraform
Grafana

Key Responsibilities

DevOps Strategy & Leadership

  • Define and execute the end-to-end DevOps strategy for high-frequency trading and fintech platforms.
  • Lead, mentor, and scale a high-performing DevOps team focused on automation, reliability, and performance.
  • Partner closely with engineering and product leaders to ensure infrastructure strategy supports business and technical goals.

CI/CD & Infrastructure Automation

  • Architect, implement, and optimize enterprise-grade CI/CD pipelines for ultra-low-latency trading systems.
  • Drive Infrastructure as Code (IaC) adoption using Terraform, Helm, Kubernetes, and advanced automation toolsets.
  • Establish robust release management, deployment workflows, and versioning best practices for mission‑critical environments.

Cloud & On‑Prem Infrastructure Management

  • Design and manage hybrid infrastructures across AWS, GCP, and on-premise data centers ensuring high availability and fault tolerance.
  • Implement sophisticated networking strategies for low-latency workloads including routing optimization and performance tuning.
  • Lead multi‑cloud scalability, cost optimization, and environment standardization initiatives.

Performance Monitoring & Optimization

  • Oversee large-scale monitoring systems using Prometheus, Grafana, ELK, and related observability tools.
  • Implement predictive alerting, automated remediation, and system‑wide health checks for zero‑downtime operations.
  • Conduct root-cause analyses and performance tuning for systems processing millions of transactions per second.
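
The performance tuning described above leans on latency percentiles (p50/p99). A nearest-rank sketch over simulated samples, purely for illustration; production systems use streaming estimators such as HDR histograms or t-digest rather than sorting raw sample sets:

```python
import math
import random

# Nearest-rank percentile sketch: why p99 catches a slow tail that the
# mean and median hide. The simulated latency distribution is made up.

def percentile(samples, pct):
    """Nearest-rank percentile over a finite sample set."""
    ordered = sorted(samples)
    rank = min(len(ordered) - 1,
               max(0, math.ceil(pct / 100 * len(ordered)) - 1))
    return ordered[rank]

random.seed(7)
# Simulated per-request latencies in microseconds: 98% fast, 2% slow tail.
latencies = ([random.randint(80, 120) for _ in range(980)]
             + [random.randint(900, 1200) for _ in range(20)])
p50 = percentile(latencies, 50)  # median sits in the fast cluster
p99 = percentile(latencies, 99)  # p99 lands in the slow tail
```

With a 2% slow tail, p50 stays in the fast cluster while p99 exposes the tail, which is why latency SLOs for trading-grade systems are written against high percentiles rather than averages.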

Security & Compliance

  • Champion DevSecOps practices and embed security across the entire development and deployment lifecycle.
  • Ensure adherence to financial regulatory standards (SEBI and global frameworks) with strong audit and compliance mechanisms.
  • Lead security automation efforts, vulnerability management, and advanced IAM policy implementation.


Required Skills & Qualifications

  • 10+ years of DevOps experience, with 5+ years in a leadership capacity.
  • Deep hands-on expertise in CI/CD tools such as Jenkins, GitLab CI/CD, and ArgoCD.
  • Strong command of AWS, GCP, and hybrid cloud infrastructures.
  • Expert-level knowledge of Kubernetes, Docker, and large-scale container orchestration.
  • Advanced proficiency in Terraform, Helm, and overall IaC workflows.
  • Strong Linux administration, networking fundamentals (TCP/IP, DNS, Firewalls), and system internals.
  • Experience with monitoring and observability platforms (Prometheus, Grafana, ELK).
  • Excellent scripting skills in Python, Bash, or Go for automation and tooling.
  • Deep understanding of security principles, encryption, IAM, and compliance frameworks.


Good to Have

  • Experience with ultra-low-latency or high-frequency trading systems.
  • Knowledge of FIX protocol, FPGA acceleration, or network‑level optimizations.
  • Familiarity with Redis, Nginx, or other high‑throughput systems.
  • Exposure to micro‑second‑level performance tuning or network acceleration technologies.


Why Join Us?

  • Be part of a team that consistently raises the bar and delivers exceptional engineering outcomes.
  • A culture where innovation, ownership, and bold thinking are valued.
  • Exceptional growth opportunities—ideal for someone who thrives in fast-paced, high-impact environments.
  • Build systems that influence markets and redefine the fintech landscape.


This isn’t just a role—it’s a challenge, a platform, and a proving ground.

Ready to step up? Apply now.

PhotonMatters
Human Resource
Posted by Human Resource
Remote only
2 - 11 yrs
₹4L - ₹12L / yr
CI/CD
Amazon Web Services (AWS)
Terraform
Ansible
Docker
+4 more

Role Overview:

We are looking for a skilled DevOps Engineer to join our team. You will be responsible for managing and automating the deployment, monitoring, and scaling of our applications, ensuring high availability, security, and performance. The ideal candidate is passionate about automation, CI/CD, and cloud infrastructure.

Key Responsibilities:

  • Design, implement, and maintain CI/CD pipelines for development, testing, and production environments.
  • Manage cloud infrastructure (AWS, Azure, GCP, or others) and ensure scalability, reliability, and security.
  • Automate deployment, configuration management, and infrastructure provisioning using tools like Terraform, Ansible, or Chef.
  • Monitor application performance and infrastructure health using tools like Prometheus, Grafana, ELK Stack, or Datadog.
  • Collaborate with development and QA teams to streamline workflows and resolve deployment issues.
  • Implement security best practices in pipelines, infrastructure, and cloud environments.
  • Maintain version control and manage release cycles.
  • Troubleshoot and resolve production issues efficiently.

Required Skills & Qualifications:

  • Bachelor’s degree in Computer Science, IT, or related field.
  • Proven experience in DevOps, system administration, or cloud engineering.
  • Strong knowledge of CI/CD tools (Jenkins, GitLab CI/CD, CircleCI, etc.).
  • Hands-on experience with containerization (Docker, Kubernetes).
  • Experience with cloud platforms (AWS, Azure, or GCP).
  • Scripting skills (Python, Bash, or PowerShell).
  • Knowledge of infrastructure as code (Terraform, CloudFormation).
  • Familiarity with monitoring and logging tools.
  • Strong problem-solving, communication, and teamwork skills.

Preferred Qualifications:

  • Experience with microservices architecture.
  • Knowledge of networking, load balancing, and firewalls.
  • Exposure to Agile/Scrum methodologies.

What We Offer:

  • Competitive salary
  • Flexible working hours and remote options.
  • Learning and development opportunities.
  • Collaborative and inclusive work environment.


Thinqor
sai patel
Posted by sai patel
Bengaluru (Bangalore)
5 - 8 yrs
₹15L - ₹20L / yr
MLOps
Windows Azure
Kubernetes
AKS
ARO
+3 more

 Hiring: Cloud Engineer – MLOps Platform 🚨

📍 Location: Bangalore

🧠 Experience: 5–8 Years

We are looking for an experienced Cloud Engineer to support ML teams and drive end-to-end automation for model deployment across modern cloud platforms.

🔹 Tech Stack:

Azure | Databricks | AKS | ARO | Terraform | MLflow | CI/CD

🔹 Key Responsibilities:

• Build and maintain CI/CD and Continuous Training (CT) pipelines using Azure DevOps, GitHub Actions, or Jenkins.

• Deploy Databricks jobs, MLflow models, and microservices on AKS / ARO environments.

• Automate infrastructure using Terraform and GitOps practices.

• Manage Databricks workspaces, AKS clusters, and networking configurations.

• Implement monitoring, logging, and alerting systems for ML workloads.

• Ensure cloud security, governance, and cost optimization best practices.

🔹 Required Skills:

✔ Strong hands-on experience with Azure, AKS, ARO, and Databricks

✔ Experience with MLflow and Kubernetes-based deployments

✔ Proficiency in Python and Bash / PowerShell scripting

✔ Strong understanding of cloud security, infrastructure automation, and distributed systems

LogIQ Labs Pvt.Ltd.

at LogIQ Labs Pvt.Ltd.

2 recruiters
HR eShipz
Posted by HR eShipz
Remote only
6 - 8 yrs
₹8L - ₹16L / yr
Terraform
Amazon Web Services (AWS)
IaC
YAML

Key Responsibilities

  • Design, implement, and maintain highly available infrastructure on AWS.
  • Automate infrastructure provisioning using Terraform (Infrastructure as Code).
  • Define and monitor SLIs, SLOs, and error budgets to improve service reliability.
  • Build and manage CI/CD pipelines to enable safe and frequent deployments.
  • Implement robust monitoring, alerting, and logging solutions.
  • Perform incident response, root cause analysis (RCA), and postmortems.
  • Improve system resilience through automation and self-healing mechanisms.
  • Optimize cloud resource utilization and cost (FinOps awareness).
  • Collaborate with development teams to improve application reliability.
  • Manage containerized workloads using Docker and Kubernetes (EKS preferred).
  • Implement security and compliance best practices across infrastructure.
  • Maintain operational runbooks and documentation.
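
The SLO and error-budget bullet above comes down to simple arithmetic: an availability target implies an allowed-downtime budget over the measurement window. A sketch with illustrative targets (a 99.9% SLO over a 30-day window):

```python
# Error-budget arithmetic sketch: allowed downtime for an availability SLO
# and the fraction of the budget still unspent. Targets are illustrative.

SECONDS_PER_30D = 30 * 24 * 3600

def error_budget_seconds(slo, window_seconds=SECONDS_PER_30D):
    """Allowed downtime implied by an availability SLO over the window."""
    return (1.0 - slo) * window_seconds

def budget_remaining(slo, downtime_seconds, window_seconds=SECONDS_PER_30D):
    """Fraction of the error budget still unspent (negative = budget blown)."""
    budget = error_budget_seconds(slo, window_seconds)
    return (budget - downtime_seconds) / budget

three_nines = error_budget_seconds(0.999)  # ~43 minutes per 30 days
remaining = budget_remaining(0.999, downtime_seconds=1296)  # half spent
```

Teams commonly gate risky deploys on the remaining fraction: plenty of budget left means ship freely, budget nearly spent means freeze and focus on reliability work.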

Required Qualifications

  • Bachelor’s degree in Computer Science, Engineering, or related field.
  • 7–8 years of experience in SRE, DevOps, or Production Engineering.
  • Strong hands-on experience with AWS services.
  • Proven experience with Terraform for infrastructure automation.
  • Experience building CI/CD pipelines (GitHub Actions, Jenkins, or similar).
  • Strong scripting skills (Python, Bash, or Shell).
  • Experience with Linux system administration.
  • Hands-on experience with monitoring and observability tools.
  • Good understanding of networking and cloud security fundamentals.
  • Experience with Git and branching strategies


Siemens
Bengaluru (Bangalore)
3 - 5 yrs
₹5L - ₹15L / yr
React.js
Python
CI/CD
DevOps
Windows Azure
+4 more

Job opportunity for Developer -Python Full Stack with Siemens at Bangalore.


Interview Process:

 

1st round of interview - F2F (in-Person)-Technical

2nd round of interview – F2F /Virtual Interview - Technical

3rd round of interview – Virtual Interview – Technical + HR


Job Title / Designation: Developer - Python Full Stack

Employment Type: Full Time, Permanent

Location: Bangalore

Experience: 3-5 Years

Job Description: Developer - Python Full Stack

 

We are looking for a Python full-stack expert with 5+ years of proven experience developing automation solutions in Linux-based environments. You should be capable of building Python-based web applications or automation solutions, have excellent knowledge of database handling, and have decent knowledge of Kubernetes-based deployment environments.

 

Required Skills:

 

  • Solid experience in Python back-end technology
  • Sound experience in web application development
  • Decent knowledge and experience in UI development using JavaScript, React/Angular or related tech stack.
  • Strong understanding of software design patterns and testing principles
  • Ability to learn and adapt to working with multiple programming languages.
  • Experience with Docker, ArgoCD, Kubernetes, and Terraform
  • Understanding of ETL processes to extract data from different data sources is a plus.
  • Proven experience in Linux development environments using Python.
  • Excellent knowledge in interacting with database systems (SQL, NoSQL) and webservices (REST)
  • Experienced in establishing an optimized CI/CD environment relevant to the project.
  • Good knowledge of repository management tools like Git, Bitbucket, etc.
  • Excellent debugging skills/strategies.
  • Excellent communication skills
  • Experienced in working in an Agile environment.
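As a minimal illustration of the database-interaction requirement above, here is a sketch using Python's standard-library sqlite3 module. The `users` table and helper functions are hypothetical examples for this posting, not part of any Siemens codebase; a real service would typically expose such helpers behind a REST layer against a production SQL/NoSQL store.

```python
import sqlite3

# Hypothetical schema and helpers for illustration only.
def init_db(conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)"
    )

def add_user(conn, name):
    # Parameterized query: avoids SQL injection, a baseline expectation.
    cur = conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()
    return cur.lastrowid

def get_user(conn, user_id):
    row = conn.execute(
        "SELECT id, name FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    return {"id": row[0], "name": row[1]} if row else None

conn = sqlite3.connect(":memory:")
init_db(conn)
uid = add_user(conn, "asha")
print(get_user(conn, uid))  # {'id': 1, 'name': 'asha'}
```

The same data-access pattern carries over to PostgreSQL or MySQL drivers, which share the DB-API parameterized-query style.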

 

Nice to have

 

  • Good knowledge of the Eclipse IDE; experience developing add-ons/plugins on the Eclipse platform.
  • Knowledge of 93K semiconductor test platforms
  • Good know-how of agile management tools like Jira and Azure DevOps.
  • Good knowledge of RHEL
  • Knowledge of JIRA administration


Read more
LogIQ Labs Pvt.Ltd.

at LogIQ Labs Pvt.Ltd.

2 recruiters
HR eShipz
Posted by HR eShipz
Remote only
7 - 9 yrs
₹8L - ₹16L / yr
skill iconAmazon Web Services (AWS)
Terraform
skill iconJenkins

Key Responsibilities

  • Design, implement, and maintain highly available infrastructure on AWS.
  • Automate infrastructure provisioning using Terraform (Infrastructure as Code).
  • Define and monitor SLIs, SLOs, and error budgets to improve service reliability.
  • Build and manage CI/CD pipelines to enable safe and frequent deployments.
  • Implement robust monitoring, alerting, and logging solutions.
  • Perform incident response, root cause analysis (RCA), and postmortems.
  • Improve system resilience through automation and self-healing mechanisms.
  • Optimize cloud resource utilization and cost (FinOps awareness).
  • Collaborate with development teams to improve application reliability.
  • Manage containerized workloads using Docker and Kubernetes (EKS preferred).
  • Implement security and compliance best practices across infrastructure.
  • Maintain operational runbooks and documentation.
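The SLI/SLO/error-budget item above can be made concrete with a small sketch: for a request-based SLO, the error budget is the number of failures the objective allows, and the unspent fraction falls out directly. This is a minimal illustration of the idea, not a production SRE tool, and the figures are made up.

```python
def error_budget_remaining(slo_target, total_requests, failed_requests):
    """Fraction of the error budget still unspent for a request-based SLO.

    slo_target is the success-ratio objective, e.g. 0.999 for "99.9% of
    requests succeed"; the error budget is the failures that objective allows.
    """
    allowed_failures = (1.0 - slo_target) * total_requests
    if allowed_failures == 0:
        return 0.0  # a 100% SLO leaves no budget at all
    return max(0.0, 1.0 - failed_requests / allowed_failures)

# A 99.9% SLO over 1,000,000 requests allows 1,000 failures; 250 have occurred.
remaining = error_budget_remaining(0.999, 1_000_000, 250)
print(f"{remaining:.0%} of the error budget left")  # 75%
```

When the remaining budget approaches zero, teams typically slow feature releases and prioritize reliability work.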

Required Qualifications

  • Bachelor’s degree in Computer Science, Engineering, or related field.
  • 7–8 years of experience in SRE, DevOps, or Production Engineering.
  • Strong hands-on experience with AWS services.
  • Proven experience with Terraform for infrastructure automation.
  • Experience building CI/CD pipelines (GitHub Actions, Jenkins, or similar).
  • Strong scripting skills (Python, Bash, or Shell).
  • Experience with Linux system administration.
  • Hands-on experience with monitoring and observability tools.
  • Good understanding of networking and cloud security fundamentals.
  • Experience with Git and branching strategies


Read more
WITS Innovation Lab
Prabhnoor Kaur
Posted by Prabhnoor Kaur
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 5 yrs
₹3L - ₹7L / yr
Terraform
skill iconKubernetes
skill iconJenkins
Ansible
skill iconAmazon Web Services (AWS)
+8 more

We are looking for a skilled DevOps Engineer with hands-on experience in cloud platforms, CI/CD pipelines, container orchestration, and infrastructure automation. The ideal candidate is someone who loves solving reliability challenges, automating everything, and ensuring seamless delivery across environments.

Key Responsibilities

  • Design, implement, and maintain CI/CD pipelines using GitHub Actions, Jenkins, and GitHub.
  • Manage and optimize infrastructure on AWS/GCP, ensuring scalability, security, and high availability.
  • Deploy and manage containerized applications using Docker and Kubernetes.
  • Build, automate, and manage infrastructure as code using Terraform.
  • Configure and manage automation tools and workflows using Ansible.
  • Monitor system performance, troubleshoot production issues, and ensure smooth operations.
  • Implement best practices for code management, release processes, and DevOps standards.
  • Collaborate closely with development teams to improve build pipelines and deployment workflows.
  • Write scripts in Python/Bash to automate operational tasks.
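As an illustration of the "scripts in Python/Bash to automate operational tasks" item, here is a hedged sketch of a cleanup helper that selects stale build artifacts by modification time. The paths and age threshold are invented for the example, and the selection logic is kept as a pure function so it can be tested without touching the filesystem.

```python
import time

def stale_files(entries, max_age_days, now=None):
    """Given (path, mtime) pairs, return paths older than max_age_days.

    Pure selection logic: a caller would build `entries` from os.scandir()
    and pass the result to os.remove(), keeping this part unit-testable.
    """
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400  # 86400 seconds per day
    return [path for path, mtime in entries if mtime < cutoff]

# Hypothetical artifacts: one 10 days old, one 1 day old.
now = 1_700_000_000
entries = [
    ("builds/old.tar.gz", now - 10 * 86400),
    ("builds/new.tar.gz", now - 1 * 86400),
]
print(stale_files(entries, max_age_days=7, now=now))  # ['builds/old.tar.gz']
```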

Required Skills & Experience

  • 2+ years of hands-on experience as a DevOps Engineer or in a similar role.
  • Strong expertise in AWS or GCP cloud services.
  • Solid understanding of Kubernetes (deployment, scaling, service mesh, packaging).
  • Proficiency with Terraform for infrastructure automation.
  • Experience with Git, GitHub, and GitHub Actions for source control and CI/CD.
  • Good knowledge of Jenkins pipelines and automation.
  • Hands-on experience with Ansible for configuration management.
  • Strong scripting skills using Python or Bash.
  • Understanding of monitoring, logging, and security best practices.


Read more
SPGConsulting
Anitha K
Posted by Anitha K
Bengaluru (Bangalore)
7 - 12 yrs
₹8L - ₹16L / yr
DevOps
skill iconAmazon Web Services (AWS)
cicd
skill iconKubernetes
skill iconDocker
+5 more

Hiring: AWS DevOps Developer

📍 Location: Bangalore

🧑‍💻 Experience: 4–7 Years

📌 Job Summary

We are looking for a skilled AWS DevOps Developer with strong experience in AWS cloud infrastructure, CI/CD automation, containerization, and Infrastructure as Code. The ideal candidate should have hands-on experience building scalable and secure cloud environments.

🛠 Required Technical Skills

☁️ AWS Services

  • Amazon EC2
  • Amazon S3
  • IAM
  • VPC
  • Amazon EKS
  • RDS
  • Route 53
  • CloudWatch
  • AWS Lambda

🔄 DevOps & CI/CD

  • Jenkins (Pipelines, Shared Libraries)
  • Git / GitHub
  • Maven / Build tools
  • CI/CD pipeline design & implementation

🐳 Containers & Orchestration

  • Docker
  • Kubernetes (EKS preferred)
  • Helm

🏗 Infrastructure as Code

  • Terraform
  • Ansible

📊 Monitoring & Logging

  • CloudWatch
  • Prometheus
  • Grafana

📋 Roles & Responsibilities

  • Design and implement scalable AWS infrastructure
  • Build and maintain CI/CD pipelines
  • Deploy containerized applications using Docker & Kubernetes
  • Automate infrastructure provisioning using Terraform
  • Implement monitoring and alerting solutions
  • Ensure security, compliance, and cost optimization
  • Troubleshoot production issues and improve system reliability

➕ Good to Have

  • AWS Certification (Solutions Architect / DevOps Engineer)
  • Experience with Microservices architecture
  • Knowledge of DevSecOps practices
  • Experience in Agile methodology


Read more
Technology Industry

Technology Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
7 - 10 yrs
₹20L - ₹40L / yr
DevOps
skill iconAmazon Web Services (AWS)
CI/CD
Linux/Unix
skill iconGitHub
+19 more

Description

SRE Engineer


Role Overview 

As a Site Reliability Engineer, you will play a critical role in ensuring the availability and performance of our customer-facing platform. You will work closely with DevOps, DBA, and Development teams to provision and maintain infrastructure, deploy and monitor our applications, and automate workflows. Your contributions will have a direct impact on customer satisfaction and overall experience.


Responsibilities and Deliverables

• Manage, monitor, and maintain highly available systems (Windows and Linux)

• Analyze metrics and trends to ensure rapid scalability.

• Address routine service requests while identifying ways to automate and simplify.

• Create infrastructure as code using Terraform, ARM Templates, CloudFormation.

• Maintain data backups and disaster recovery plans.

• Design and deploy CI/CD pipelines using GitHub Actions, Octopus, Ansible, Jenkins, Azure DevOps.

• Adhere to security best practices through all stages of the software development lifecycle

• Follow and champion ITIL best practices and standards.

• Become a resource for emerging and existing cloud technologies with a focus on AWS.


Organizational Alignment

• Reports to the Senior SRE Manager

• This role involves close collaboration with DevOps, DBA, and security teams.


Technical Proficiencies

• Hands-on experience with AWS is a must-have.

• Proficiency in analyzing application, IIS, system, and security logs, as well as CloudTrail events

• Practical experience with CI/CD tools such as GitHub Actions, Jenkins, Octopus

• Experience with observability tools such as New Relic, Application Insights, AppDynamics, or DataDog.

• Experience maintaining and administering Windows, Linux, and Kubernetes.

• Experience in automation using scripting languages such as Bash, PowerShell, or Python.

• Configuration management experience using Ansible, Terraform, Azure Automation Runbooks, or similar.

• Experience with SQL Server database maintenance and administration is preferred.

• Good Understanding of networking (VNET, subnet, private link, VNET peering).

• Familiarity with cloud concepts including certificates, OAuth, Azure AD, ASE, ASP, AKS, Azure Apps, Load Balancers, Application Gateway, Firewall, API Management, SQL Server, and databases on Azure


Experience

• 7+ years of experience in SRE or System Administration role

• Demonstrated ability building and supporting high availability Windows/Linux servers, with emphasis on the WISA stack (Windows/IIS/SQL Server/ASP.net)

• 3+ years of experience working with cloud technologies including AWS, Azure.

• 1+ years of experience working with container technology including Docker and Kubernetes.

• Comfortable using Scrum, Kanban, or Lean methodologies.


Education

• Bachelor’s Degree or College Diploma in Computer Science, Information Systems, or equivalent experience.


Additional Job Details:

• Working hours: 2:00 PM / 3:00 PM to 11:30 PM IST

• Interview process: 3 technical rounds

• Work model: 3 days’ work from office


Read more
Consumer Internet, Technology & Travel and Tourism Platform

Consumer Internet, Technology & Travel and Tourism Platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
7 - 10 yrs
₹45L - ₹60L / yr
DevOps
Cloud Computing
Infrastructure
skill iconKubernetes
skill iconDocker
+22 more

Job Details

Job Title: Lead DevOps Engineer

Industry: Consumer Internet, Technology & Travel and Tourism Platform

Function - IT

Experience Required: 7-10 years

Employment Type: Full Time

Job Location: Bengaluru

CTC Range: Best in Industry

 

Criteria:

  • Strong Lead DevOps / Infrastructure Engineer Profiles.
  • Must have 7+ years of hands-on experience working as a DevOps / Infrastructure Engineer.
  • Candidate’s current title must be Lead DevOps Engineer (or equivalent Lead role) in the current organization
  • Must have minimum 2+ years of team management / technical leadership experience, including mentoring engineers, driving infrastructure decisions, or leading DevOps initiatives.
  • Must have strong hands-on experience with Kubernetes (container orchestration) including deployment, scaling, and cluster management.
  • Must have experience with Infrastructure as Code (IaC) tools such as Terraform, Ansible, Chef, or Puppet.
  • Must have strong scripting and automation experience using Python, Go, Bash, or similar scripting languages.
  • Must have working experience with distributed databases or data systems such as MongoDB, Redis, Cassandra, Elasticsearch, or Kafka.
  • Must have strong hands-on experience in Observability & Monitoring, CI/CD architecture, and Networking concepts in production environments.
  • (Company) – Must be from B2C Product Companies only.
  • (Education) – B.E/ B.Tech

 

Preferred

  • Experience working in microservices architecture and event-driven systems.
  • Exposure to cloud infrastructure, scalability, reliability, and cost optimization practices.
  • (Skills) – Understanding of programming languages such as Go, Python, or Java.
  • (Environment) – Experience working in high-growth startup or large-scale production environments.

 

Job Description 

As a DevOps Engineer, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.

 

Job Responsibilities:

  • Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
  • Codify our infrastructure
  • Do what it takes to keep the uptime above 99.99%
  • Understand the bigger picture and sail through the ambiguities
  • Scale technology considering cost and observability and manage end-to-end processes
  • Understand DevOps philosophy and evangelize the principles across the organization
  • Strong communication and collaboration skills to break down the silos

 

Read more
Consumer Internet, Technology & Travel and Tourism Platform

Consumer Internet, Technology & Travel and Tourism Platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
4 - 7 yrs
₹38L - ₹50L / yr
DevOps
Cloud Computing
Infrastructure
skill iconKubernetes
skill iconDocker
+23 more

Job Details

Job Title: Senior DevOps Engineer

Industry: Consumer Internet, Technology & Travel and Tourism Platform

Function - IT

Experience Required: 4-7 years

Employment Type: Full Time

Job Location: Bengaluru

CTC Range: Best in Industry

 

Criteria:

  • Strong DevOps / Infrastructure Engineer Profiles.
  • Must have 4+ years of hands-on experience working as a DevOps Engineer / Infrastructure Engineer / SRE / DevOps Consultant.
  • Must have hands-on experience with Kubernetes and Docker, including deployment, scaling, or containerized application management.
  • Must have experience with Infrastructure as Code (IaC) or configuration management tools such as Terraform, Ansible, Chef, or Puppet.
  • Must have strong automation and scripting experience using Python, Go, Bash, Shell, or similar scripting languages.
  • Must have working experience with distributed databases or data systems such as MongoDB, Redis, Cassandra, Elasticsearch, or Kafka.
  • Candidate must demonstrate strong expertise in at least one of the following areas - Databases / Distributed Data Systems, Observability & Monitoring, CI/CD Pipelines, Networking Concepts, Kubernetes / Container Platforms
  • Candidates must be from B2C Product-based companies only.
  • (Education) – BE / B.Tech or equivalent

 

Preferred

  • Experience working with microservices or event-driven architectures.
  • Exposure to cloud infrastructure, monitoring, reliability, and scalability practices.
  • (Skills) – Understanding of programming languages such as Go, Python, or Java.
  • Preferred (Environment) – Experience working in high-scale production or fast-growing product startups.

 

Job Description 

As a DevOps Engineer, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.

 

Job Responsibilities:

  • Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
  • Codify our infrastructure
  • Do what it takes to keep the uptime above 99.99%
  • Understand the bigger picture and sail through the ambiguities
  • Scale technology considering cost and observability and manage end-to-end processes
  • Understand DevOps philosophy and evangelize the principles across the organization
  • Strong communication and collaboration skills to break down the silos


Read more
Flipr
Arsalan Mobin
Posted by Arsalan Mobin
Bengaluru (Bangalore)
3 - 6 yrs
₹10L - ₹13L / yr
VAPT
Web application security
Cyber Security
DevSecOps
CI/CD
+13 more

About the role:

We are looking for a skilled and driven Security Engineer to join our growing security team. This role requires a hands-on professional who can evaluate and strengthen the security posture of our applications and infrastructure across Web, Android, iOS, APIs, and cloud-native environments.


The ideal candidate will also lead technical triage from our bug bounty program, integrate security into the DevOps lifecycle, and contribute to building a security-first engineering culture.


Required Skills & Experience:

● 3 to 6 years of solid hands-on experience in the VAPT domain

● Solid understanding of Web, Android, and iOS application security

● Experience with DevSecOps tools and integrating security into CI/CD

● Strong knowledge of cloud platforms (AWS/GCP/Azure) and their security models

● Familiarity with bug bounty programs and responsible disclosure practices

● Familiarity with tools like Burp Suite, MobSF, OWASP ZAP, Terraform, Checkov, etc.

● Good knowledge of API security

● Scripting experience (Python, Bash, or similar) for automation tasks

Preferred Qualifications:

● OSCP, CEH, AWS Security Specialty, or similar certifications

● Experience working in a regulated environment (e.g., FinTech, InsurTech)


Responsibilities:

● Perform Security reviews, Vulnerability Assessments & Penetration Testing for Web, Android, iOS, and API endpoints

● Perform Threat Modelling & anticipate potential attack vectors and improve security architecture on complex or cross-functional components

● Identify and remediate OWASP Top 10 and mobile-specific vulnerabilities

● Conduct secure code reviews and red team assessments

● Integrate SAST, DAST, SCA, and secret scanning tools into CI/CD pipelines

● Automate security checks using tools like SonarQube, Snyk, Trivy, etc.

● Maintain and manage vulnerability scanning infrastructure

● Perform security assessments of AWS, Azure, and GCP environments, with an emphasis on container security, particularly for Docker and Kubernetes.

● Implement guardrails for IAM, network segmentation, encryption, and cloud monitoring

● Contribute to infrastructure hardening for containers, Kubernetes, and virtual machines

● Triage bug bounty reports and coordinate remediation with engineering teams

● Act as the primary responder for external security disclosures

● Maintain documentation and metrics related to bug bounty and penetration testing activities

● Collaborate with developers and architects to ensure secure design decisions

● Lead security design reviews for new features and products

● Provide actionable risk assessments and mitigation plans to stakeholders
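As a sketch of the "automate security checks" responsibility above: a minimal response-header audit can be expressed as a pure function, which keeps it unit-testable before wiring it to a real HTTP fetch (e.g., via urllib). The header list below is a commonly recommended baseline, not an exhaustive or authoritative set.

```python
# A commonly recommended baseline of HTTP response security headers
# (illustrative, not exhaustive).
RECOMMENDED_HEADERS = {
    "strict-transport-security",
    "content-security-policy",
    "x-content-type-options",
    "x-frame-options",
}

def missing_security_headers(response_headers):
    """Return the recommended security headers absent from a response, sorted."""
    present = {name.lower() for name in response_headers}  # header names are case-insensitive
    return sorted(RECOMMENDED_HEADERS - present)

# In practice this dict would come from an HTTP client's response object.
headers = {"Content-Type": "text/html", "X-Frame-Options": "DENY"}
print(missing_security_headers(headers))
# ['content-security-policy', 'strict-transport-security', 'x-content-type-options']
```

A check like this slots naturally into a CI/CD stage, failing the build when a deployment drops a required header.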

Read more
Global Digital Transformation Solutions Provider

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
5 - 7 yrs
₹15L - ₹21L / yr
skill iconPython
Terraform
PySpark
skill iconAmazon Web Services (AWS)

Job Details

Job Title: Lead I - Data Engineering (Python, AWS Glue, Pyspark, Terraform)

Industry: Global digital transformation solutions provider

Domain - Information technology (IT)

Experience Required: 5-7 years

Employment Type: Full Time

Job Location: Hyderabad

CTC Range: Best in Industry

 

Job Description

Data Engineer with AWS, Python, Glue, Terraform, Step function and Spark

 

Skills: Python, AWS Glue, Pyspark, Terraform - All are mandatory

 

******

Notice period - 0 to 15 days only

Job stability is mandatory

Location: Hyderabad 

Read more
Global Digital Transformation Solutions Provider

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Trivandrum, Thiruvananthapuram, Pune
3 - 5 yrs
₹15L - ₹25L / yr
Terraform
Splunk
DevOps
Windows Azure
SQL Azure
+12 more

JOB DETAILS:

* Job Title: Lead I - Azure, Terraform, GitLab CI 

* Industry: Global Digital Transformation Solutions Provider

* Salary: Best in Industry

* Experience: 3-5 years

* Location: Trivandrum/Pune

 

Job Description

Job Title: DevOps Engineer

Experience: 4–8 Years 

Location: Trivandrum & Pune 

Job Type: Full-Time

Mandatory skills: Azure, Terraform, GitLab CI, Splunk

 

Job Description

We are looking for an experienced and driven DevOps Engineer with 4 to 8 years of experience to join our team in Trivandrum or Pune. The ideal candidate will take ownership of automating cloud infrastructure, maintaining CI/CD pipelines, and implementing monitoring solutions to support scalable and reliable software delivery in a cloud-first environment.

 

Key Responsibilities

  • Design, manage, and automate Azure cloud infrastructure using Terraform.
  • Develop scalable, reusable, and version-controlled Infrastructure as Code (IaC) modules.
  • Implement monitoring and logging solutions using Splunk, Azure Monitor, and Dynatrace.
  • Build and maintain secure and efficient CI/CD pipelines using GitLab CI or Harness.
  • Collaborate with cross-functional teams to enable smooth deployment workflows and infrastructure updates.
  • Analyze system logs and performance metrics to troubleshoot and optimize performance.
  • Ensure infrastructure security, compliance, and scalability best practices are followed.

 

Mandatory Skills

Candidates must have hands-on experience with the following technologies:

  • Azure – Cloud infrastructure management and deployment
  • Terraform – Infrastructure as Code for scalable provisioning
  • GitLab CI – Pipeline development, automation, and integration
  • Splunk – Monitoring, logging, and troubleshooting production systems

 

Preferred Skills

  • Experience with Harness (for CI/CD)
  • Familiarity with Azure Monitor and Dynatrace
  • Scripting proficiency in Python, Bash, or PowerShell
  • Understanding of DevOps best practices, containerization, and microservices architecture
  • Exposure to Agile and collaborative development environments

 

Skills Summary

Azure, Terraform, GitLab CI, Splunk (Mandatory) Additional: Harness, Azure Monitor, Dynatrace, Python, Bash, PowerShell

 

Skills: Azure, Splunk, Terraform, GitLab CI

 

******

Notice period - 0 to 15 days only

Job stability is mandatory

Location: Trivandrum/Pune

Read more
MindInventory

at MindInventory

1 video
Uzer Khan
Posted by Uzer Khan
Ahmedabad
3 - 6 yrs
₹3L - ₹8L / yr
Windows Azure
skill iconKubernetes
skill iconDocker
skill icongrafana
Terraform
+1 more
  • 3+ years hands-on Azure cloud & automation experience.
  • Experience managing high-availability enterprise systems.
  • Microsoft Azure (AKS, VNets, App Gateway, Load Balancers).
  • Kubernetes (AKS) & Docker.
  • Networking (VPN, DNS, routing, firewalls, NSGs).
  • Infra-as-Code (Terraform / Bicep optional).
  • Monitoring tools: Azure Monitor, Grafana, Prometheus.
  • CI/CD: Azure DevOps, GitLab/Jenkins (added advantage).
  • Security: Key Vault, certificates, encryption, RBAC.
  • Understanding of PostgreSQL/PostGIS networking.
  • Design and manage Azure infrastructure (VMs, VNets, NSGs, Load Balancers, AKS, Storage).
  • Deploy and maintain AKS workloads for NiFi, PostGIS, and microservices.
  • Architect secure network topology including VNet peering, VPNs, Private Endpoints, DNS & Zero Trust policies.
  • Implement monitoring and alerting using Azure Monitor, Log Analytics, Grafana & Prometheus.
  • Ensure high uptime, DR planning, backup and failover strategies.
  • Automate deployments with Azure DevOps, Helm, ArgoCD & GitOps principles.
  • Enforce security, RBAC, compliance, and audit standards across environments.
  • Good to have knowledge/experience in Linux administration (Ubuntu/Debian).


Read more
Tradelab Technologies

at Tradelab Technologies

1 candid answer
Aakanksha Yadav
Posted by Aakanksha Yadav
Bengaluru (Bangalore)
3 - 8 yrs
₹7L - ₹25L / yr
CI/CD
skill iconJenkins
skill icongrafana
Terraform

Job Location: Bangalore/Mumbai

Exp: 3-10+ Yrs

Job Title: DevOps Engineer


About TradeLab:

TradeLab is a leading fintech technology provider, delivering cutting-edge solutions to brokers, banks, and fintech platforms. Our portfolio includes high-performance Order & Risk Management Systems (ORMS), seamless MetaTrader integrations, AI-driven customer engagement platforms such as PULSE LLaVA, and compliance-grade risk management solutions.


Key Responsibilities

  • DevOps Strategy & Leadership: Contribute to defining and executing the DevOps strategy for high-frequency trading and fintech platforms; mentor junior engineers and collaborate with cross-functional teams to foster a culture of automation, scalability, and performance; work closely with engineering and product teams to align infrastructure initiatives with business objectives.
  • CI/CD & Infrastructure Automation: Design and optimize CI/CD pipelines for ultra-low-latency trading systems; implement Infrastructure as Code (IaC) practices using Terraform, Helm, Kubernetes, and automation frameworks; establish best practices for release management and deployment in mission-critical environments.
  • Cloud & On-Prem Infrastructure Management: Manage hybrid infrastructure across AWS, GCP, and on-prem data centers, ensuring high availability and fault tolerance; implement networking strategies for low-latency trading, including routing and performance tuning; drive cost optimization and scalability initiatives across multi-cloud environments.
  • Performance Monitoring & Optimization: Set up and maintain system performance monitoring using Prometheus, Grafana, and the ELK stack; implement alerting and automated remediation strategies for zero-downtime operations; conduct root-cause analysis and performance tuning for systems handling millions of transactions per second.
  • Security & Compliance: Apply DevSecOps principles across all environments; ensure compliance with financial regulations (SEBI and global standards) and maintain audit trails; drive security automation, vulnerability management, and IAM policies.


Required Skills & Qualifications

  • 3–8 years of experience in DevOps, with exposure to leadership or team mentoring.
  • Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD). Hands-on experience with cloud platforms (AWS, GCP) and hybrid infrastructure.
  • Proficiency in Kubernetes, Docker, and container orchestration. Solid experience with Terraform, Helm, and IaC principles.
  • Strong Linux administration and networking fundamentals (TCP/IP, DNS, firewalls). Experience with monitoring tools (Prometheus, Grafana, ELK).
  • Proficiency in scripting languages (Python, Bash, Go) for automation. Understanding of security best practices, IAM, and compliance frameworks.


Good to Have

  • Exposure to ultra-low-latency trading infrastructure or high-frequency trading systems.
  • Knowledge of FIX protocol, FPGA acceleration, or network optimization techniques.
  • Familiarity with Redis, Nginx, or other real-time data handling technologies.
  • Experience in advanced performance tuning for microsecond-level execution.


Why Join Us?

Work with a team that expects and delivers excellence. A culture where innovation and speed are rewarded. Limitless opportunities for growth—if you can handle the pace. Build systems that move markets and redefine fintech.

Read more
Talent Pro
Bengaluru (Bangalore)
10 - 14 yrs
₹70L - ₹100L / yr
Terraform
skill iconPython

10–14 years of experience in software engineering, with strong emphasis on backend and data architecture for large-scale systems.


Proven experience designing and deploying distributed, event-driven systems and streaming data pipelines.


Expert proficiency in Go/Python, including experience with microservices, APIs, and concurrency models.


Deep understanding of data flows across multi-sensor and multi-modal sources, including ingestion, transformation, and synchronization.


Experience building real-time APIs for interactive web applications and data-heavy workflows.


Familiarity with frontend ecosystems (React, TypeScript) and rendering frameworks leveraging WebGL/WebGPU.


Hands-on experience with CI/CD, Kubernetes, Docker, and Infrastructure as Code (Terraform, Helm).

Read more
Technology Industry

Technology Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
2 - 5 yrs
₹4L - ₹5L / yr
DevOps
Windows Azure
CI/CD
MySQL
skill iconPython
+12 more

JOB DETAILS:

* Job Title: DevOps Engineer (Azure)

* Industry: Technology

* Salary: Best in Industry

* Experience: 2-5 years

* Location: Bengaluru, Koramangala

Review Criteria

  • Strong Azure DevOps Engineer Profiles.
  • Must have minimum 2+ years of hands-on experience as an Azure DevOps Engineer with strong exposure to Azure DevOps Services (Repos, Pipelines, Boards, Artifacts).
  • Must have strong experience in designing and maintaining YAML-based CI/CD pipelines, including end-to-end automation of build, test, and deployment workflows.
  • Must have hands-on scripting and automation experience using Bash, Python, and/or PowerShell
  • Must have working knowledge of databases such as Microsoft SQL Server, PostgreSQL, or Oracle Database
  • Must have experience with monitoring, alerting, and incident management using tools like Grafana, Prometheus, Datadog, or CloudWatch, including troubleshooting and root cause analysis

 

Preferred

  • Knowledge of containerisation and orchestration tools such as Docker and Kubernetes.
  • Knowledge of Infrastructure as Code and configuration management tools such as Terraform and Ansible.
  • Preferred (Education) – BE/BTech / ME/MTech in Computer Science or related discipline

 

Role & Responsibilities

  • Build and maintain Azure DevOps YAML-based CI/CD pipelines for build, test, and deployments.
  • Manage Azure DevOps Repos, Pipelines, Boards, and Artifacts.
  • Implement Git branching strategies and automate release workflows.
  • Develop scripts using Bash, Python, or PowerShell for DevOps automation.
  • Monitor systems using Grafana, Prometheus, Datadog, or CloudWatch and handle incidents.
  • Collaborate with dev and QA teams in an Agile/Scrum environment.
  • Maintain documentation, runbooks, and participate in root cause analysis.
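As a sketch of the monitoring-and-incident item above: many alerting systems (for example, Prometheus alert rules with a `for:` duration) fire only when a metric breaches its threshold for several consecutive scrapes, which suppresses one-off spikes. The helper below is a simplified, tool-agnostic illustration of that logic with made-up sample data.

```python
def sustained_breach(samples, threshold, min_consecutive):
    """True once a metric exceeds `threshold` for `min_consecutive` samples
    in a row, loosely mimicking a Prometheus alert rule's `for:` duration."""
    run = 0
    for value in samples:
        run = run + 1 if value > threshold else 0  # reset on any recovery
        if run >= min_consecutive:
            return True
    return False

cpu = [62, 71, 95, 96, 97, 80]       # % utilisation per scrape (made-up data)
print(sustained_breach(cpu, 90, 3))  # True: three consecutive samples above 90
```

Tuning `min_consecutive` (or the `for:` duration in a real rule) trades alert latency against false-positive noise.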

 

Ideal Candidate

  • 2–5 years of experience as an Azure DevOps Engineer.
  • Strong hands-on experience with Azure DevOps CI/CD (YAML) and Git.
  • Experience with Microsoft Azure (OCI/AWS exposure is a plus).
  • Working knowledge of SQL Server, PostgreSQL, or Oracle.
  • Good scripting, troubleshooting, and communication skills.
  • Bonus: Docker, Kubernetes, Terraform, Ansible experience.
  • Comfortable with WFO (Koramangala, Bangalore).


Read more
MyOperator - VoiceTree Technologies

at MyOperator - VoiceTree Technologies

1 video
2 recruiters
Vijay Muthu
Posted by Vijay Muthu
Remote only
3.5 - 5 yrs
₹14L - ₹20L / yr
skill iconPython
skill iconDjango
MySQL
skill iconPostgreSQL
FastAPI
+22 more

About Us:

MyOperator is a Business AI Operator and a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform.

Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino’s, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.


Role Overview:

We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.


Key Responsibilities:

  • Develop robust backend services using Python, Django, and FastAPI
  • Design and maintain a scalable microservices architecture
  • Integrate LangChain/LLMs into AI-powered features
  • Write clean, tested, and maintainable code with pytest
  • Manage and optimize databases (MySQL/Postgres)
  • Deploy and monitor services on AWS
  • Collaborate across teams to define APIs, data flows, and system architecture
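The testing item above can be illustrated with a plain-Python sketch: a small service-layer function plus a pytest-style unit test (pytest collects `test_*` functions; here the test also runs standalone). The `normalise_msisdn` helper and its rules are hypothetical, not taken from MyOperator's codebase.

```python
# Hypothetical service-layer helper plus a pytest-style test; the name and
# normalisation rules are illustrative only.
def normalise_msisdn(raw, default_country="91"):
    """Normalise a phone number to "+<country code><national number>" digits."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    if len(digits) == 10:            # bare national number: assume default country
        digits = default_country + digits
    return "+" + digits

def test_normalise_msisdn():         # pytest would collect this test_* function
    assert normalise_msisdn("98765 43210") == "+919876543210"
    assert normalise_msisdn("+91-98765-43210") == "+919876543210"

test_normalise_msisdn()              # also runs standalone with no framework
```

Keeping such logic in framework-free functions lets the same tests run under pytest in CI and guard the behavior behind any Django or FastAPI endpoint.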

Must-Have Skills:

  • Python and Django
  • MySQL or Postgres
  • Microservices architecture
  • AWS (EC2, RDS, Lambda, etc.)
  • Unit testing using pytest
  • LangChain or Large Language Models (LLM)
  • Strong grasp of Data Structures & Algorithms
  • AI coding assistant tools (e.g., ChatGPT & Gemini)

Good to Have:

  • MongoDB or ElasticSearch
  • Go or PHP
  • FastAPI
  • React, Bootstrap (basic frontend support)
  • ETL pipelines, Jenkins, Terraform

Why Join Us?

  • 100% Remote role with a collaborative team
  • Work on AI-first, high-scale SaaS products
  • Drive real impact in a fast-growing tech company
  • Ownership and growth from day one


Read more
Global digital transformation solutions provider

Global digital transformation solutions provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Trivandrum, Thiruvananthapuram
5 - 7 yrs
₹18L - ₹26L / yr
skill iconKotlin
skill iconJava
skill iconAmazon Web Services (AWS)
skill iconSpring Boot
Microservices
+24 more

JOB DETAILS:

* Job Title: Lead I - Software Engineering (Kotlin, Java, Spring Boot, AWS)

* Industry: Global digital transformation solutions provider

* Salary: Best in Industry

* Experience: 5 -7 years

* Location: Trivandrum, Thiruvananthapuram

Role Proficiency:

Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions; account for others' developmental activities.

 

Skill Examples:

  1. Explain and communicate the design / development to the customer
  2. Perform and evaluate test results against product specifications
  3. Break down complex problems into logical components
  4. Develop user interfaces and business software components
  5. Use data models
  6. Estimate time and effort required for developing / debugging features / components
  7. Perform and evaluate test in the customer or target environment
  8. Make quick decisions on technical/project related challenges
  9. Manage a Team mentor and handle people related issues in team
  10.  Maintain high motivation levels and positive dynamics in the team.
  11.  Interface with other teams’ designers and other parallel practices
  12.  Set goals for self and team. Provide feedback to team members
  13.  Create and articulate impactful technical presentations
  14.  Follow high level of business etiquette in emails and other business communication
  15.  Drive conference calls with customers addressing customer questions
  16.   Proactively ask for and offer help
  17.  Ability to work under pressure, determine dependencies and risks, facilitate planning, and handle multiple tasks.
  18.  Build confidence with customers by meeting the deliverables on time with quality.
  19.  Estimate time and effort resources required for developing / debugging features / components
  20.  Make appropriate utilization of software/hardware.
  21.  Strong analytical and problem-solving abilities

 

Knowledge Examples:

  •     Appropriate software programs / modules
  1. Functional and technical designing
  2. Programming languages – proficient in multiple skill clusters
  3. DBMS
  4. Operating Systems and software platforms
  5. Software Development Life Cycle
  6. Agile – Scrum or Kanban Methods
  7. Integrated development environment (IDE)
  8. Rapid application development (RAD)
  9. Modelling technology and languages
  10. Interface definition languages (IDL)
  11. Knowledge of customer domain and deep understanding of sub domain where problem is solved

 

Additional Comments:

We are seeking an experienced Senior Backend Engineer with strong expertise in Kotlin and Java to join our dynamic engineering team.

The ideal candidate will have a deep understanding of backend frameworks, cloud technologies, and scalable microservices architectures, with a passion for clean code, resilience, and system observability.

You will play a critical role in designing, developing, and maintaining core backend services that power our high-availability e-commerce and promotion platforms.

 

Key Responsibilities

Design, develop, and maintain backend services using Kotlin (JVM, Coroutines, Serialization) and Java.

Build robust microservices with Spring Boot and related Spring ecosystem components (Spring Cloud, Spring Security, Spring Kafka, Spring Data).

Implement efficient serialization/deserialization using Jackson and Kotlin Serialization.

Develop, maintain, and execute automated tests using JUnit 5, Mockk, and ArchUnit to ensure code quality.

Work with Kafka Streams (Avro), Oracle SQL (JDBC, JPA), DynamoDB, and Redis for data storage and caching needs.

Deploy and manage services in an AWS environment leveraging DynamoDB, Lambdas, and IAM.

Implement CI/CD pipelines with GitLab CI to automate build, test, and deployment processes.

Containerize applications using Docker and integrate monitoring using Datadog for tracing, metrics, and dashboards.

Define and maintain infrastructure as code using Terraform for services including GitLab, Datadog, Kafka, and Optimizely.

Develop and maintain RESTful APIs with OpenAPI (Swagger) and JSON API standards.

Apply resilience patterns using Resilience4j to build fault-tolerant systems.

Adhere to architectural and design principles such as Domain-Driven Design (DDD), Object-Oriented Programming (OOP), and Contract Testing (Pact).

Collaborate with cross-functional teams in an Agile Scrum environment to deliver high-quality features.

Utilize feature flagging tools like Optimizely to enable controlled rollouts.
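The resilience patterns mentioned above (Resilience4j on the JVM) center on ideas like retry with exponential backoff. The posting's stack is Kotlin/Java; the sketch below shows the same retry idea in Python only as a language-agnostic illustration, with a made-up `flaky` call standing in for a real downstream dependency:

```python
import random
import time

def with_retries(fn, attempts=3, base_delay=0.1, retry_on=(ConnectionError,)):
    """Call fn(), retrying transient failures with exponential backoff and jitter.

    This mirrors the retry pattern Resilience4j provides for JVM services;
    the Python version here is only an illustration of the idea.
    """
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except retry_on:
            if attempt == attempts:
                raise  # retry budget exhausted: let the failure propagate
            # Exponential backoff (base, 2x base, ...) plus jitter to avoid thundering herds.
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, base_delay))

# Hypothetical flaky dependency that succeeds on the third attempt.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"
```

Calling `with_retries(flaky)` absorbs the two transient failures and returns the third attempt's result; a non-retryable exception type would propagate immediately.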

 

Mandatory Skills & Technologies Languages:

Kotlin (JVM, Coroutines, Serialization),

Java Frameworks: Spring Boot (Spring Cloud, Spring Security, Spring Kafka, Spring Data)

Serialization: Jackson, Kotlin Serialization

Testing: JUnit 5, Mockk, ArchUnit

Data: Kafka Streams (Avro), Oracle SQL (JDBC, JPA), DynamoDB (NoSQL), Redis (caching)

Cloud: AWS (DynamoDB, Lambda, IAM)

CI/CD: GitLab CI

Containers: Docker

Monitoring & Observability: Datadog (Tracing, Metrics, Dashboards, Monitors)

Infrastructure as Code: Terraform (GitLab, Datadog, Kafka, Optimizely)

API: OpenAPI (Swagger), REST API, JSON API

Resilience: Resilience4j

Architecture & Practices: Domain-Driven Design (DDD), Object-Oriented Programming (OOP), Contract Testing (Pact), Feature Flags (Optimizely)

Platforms: E-Commerce Platform (CommerceTools), Promotion Engine (Talon.One)

Methodologies: Scrum, Agile

 

Skills: Kotlin, Java, Spring Boot, AWS

 

Must-Haves

Kotlin (JVM, Coroutines, Serialization), Java, Spring Boot (Spring Cloud, Spring Security, Spring Kafka, Spring Data), AWS (DynamoDB, Lambda, IAM), Microservices Architecture

 

 

******

Notice period - 0 to 15 days only

Job stability is mandatory

Location: Trivandrum

Virtual Weekend Interview on 7th Feb 2026 - Saturday

Read more
MNC

MNC

Agency job
via rekha by Rekja Gorle
Mumbai
5 - 10 yrs
₹10L - ₹25L / yr
Windows Azure
DevOps
Microsoft Windows Azure
skill iconKubernetes
Google Cloud Platform (GCP)
+6 more

We are hiring a Senior DevOps Engineer (5–10 years experience) with strong hands-on expertise in AWS, CI/CD, Docker, Kubernetes, and Linux. The role involves designing, automating, and managing scalable cloud infrastructure and deployment pipelines. Experience with Terraform/Ansible, monitoring tools, and security best practices is required. Immediate joiners preferred.

Read more
Deqode

at Deqode

1 recruiter
Samiksha Agrawal
Posted by Samiksha Agrawal
Pune
7 - 10 yrs
₹7L - ₹18L / yr
SRE
DevOps
Terraform
skill iconKubernetes
skill iconDocker

Role: DevOps Engineer

Experience: 7+ Years

Location: Pune / Trivandrum

Work Mode: Hybrid

Key Responsibilities:

  • Drive CI/CD pipelines for microservices and cloud architectures
  • Design and operate cloud-native platforms (AWS/Azure)
  • Manage Kubernetes/OpenShift clusters and containerized applications
  • Develop automated pipelines and infrastructure scripts
  • Collaborate with cross-functional teams on DevOps best practices
  • Mentor development teams on continuous delivery and reliability
  • Handle incident management, troubleshooting, and root cause analysis

Mandatory Skills:

  • 7+ years in DevOps/SRE roles
  • Strong experience with AWS or Azure
  • Hands-on with Docker, Kubernetes, and/or OpenShift
  • Proficiency in Jenkins, Git, Maven, JIRA
  • Strong scripting skills (Shell, Python, Perl, Ruby, JavaScript)
  • Solid networking knowledge and troubleshooting skills
  • Excellent communication and collaboration abilities

Preferred Skills:

  • Experience with Helm, monitoring tools (Splunk, Grafana, New Relic, Datadog)
  • Knowledge of Microservices and SOA architectures
  • Familiarity with database technologies


Read more
US based large Biotech company with WW operations.

US based large Biotech company with WW operations.

Agency job
Remote only
5 - 10 yrs
₹20L - ₹25L / yr
skill iconAmazon Web Services (AWS)
CI/CD
DevSecOps
Terraform
Ansible
+9 more

Senior Cloud Engineer Job Description

Position Title: Senior Cloud Engineer -- AWS [LONG TERM-CONTRACT POSITION]

Location: Remote [REQUIRES WORKING IN CST TIME ZONE]


Position Overview

The Senior Cloud Engineer will play a critical role in designing, deploying, and managing scalable, secure, and highly available cloud infrastructure across multiple platforms (AWS, Azure, Google Cloud). This role requires deep technical expertise, leadership in cloud strategy, and hands-on experience with automation, DevOps practices, and cloud-native technologies. The ideal candidate will work collaboratively with cross-functional teams to deliver robust cloud solutions, drive best practices, and support business objectives through innovative cloud engineering.


Key Responsibilities

Design, implement, and maintain cloud infrastructure and services, ensuring high availability, performance, and security across multi-cloud environments (AWS, Azure, GCP)


Develop and manage Infrastructure as Code (IaC) using tools such as Terraform, CloudFormation, and Ansible for automated provisioning and configuration


Lead the adoption and optimization of DevOps methodologies, including CI/CD pipelines, automated testing, and deployment processes


Collaborate with software engineers, architects, and stakeholders to architect cloud-native solutions that meet business and technical requirements


Monitor, troubleshoot, and optimize cloud systems for cost, performance, and reliability, using cloud monitoring and logging tools


Ensure cloud environments adhere to security best practices, compliance standards, and governance policies, including identity and access management, encryption, and vulnerability management

Mentor and guide junior engineers, sharing knowledge and fostering a culture of continuous improvement and innovation


Participate in on-call rotation and provide escalation support for critical cloud infrastructure issues

Document cloud architectures, processes, and procedures to ensure knowledge transfer and operational excellence


Stay current with emerging cloud technologies, trends, and best practices.

Required Qualifications

  • Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field, or equivalent work experience
  • 6–10 years of experience in cloud engineering or related roles, with a proven track record in large-scale cloud environments
  • Deep expertise in at least one major cloud platform (AWS, Azure, Google Cloud) and experience in multi-cloud environments
  • Strong programming and scripting skills (Python, Bash, PowerShell, etc.) for automation and cloud service integration
  • Proficiency with DevOps tools and practices, including CI/CD (Jenkins, GitLab CI), containerization (Docker, Kubernetes), and configuration management (Ansible, Chef)
  • Solid understanding of networking concepts (VPC, VPN, DNS, firewalls, load balancers), system administration (Linux/Windows), and cloud storage solutions
  • Experience with cloud security, governance, and compliance frameworks
  • Excellent analytical, troubleshooting, and root cause analysis skills
  • Strong communication and collaboration abilities, with experience working in agile, interdisciplinary teams
  • Ability to work independently, manage multiple priorities, and lead complex projects to completion


Preferred Qualifications

  • Relevant cloud certifications (e.g., AWS Certified Solutions Architect, AWS DevOps Engineer, Microsoft AZ-300/400/500, Google Professional Cloud Architect)
  • Experience with cloud cost optimization and FinOps practices
  • Familiarity with monitoring/logging tools (CloudWatch, Kibana, Logstash, Datadog, etc.)
  • Exposure to cloud database technologies (SQL, NoSQL, managed database services)
  • Knowledge of cloud migration strategies and hybrid cloud architectures


Read more
IT Services & Staffing Solutions Industry

IT Services & Staffing Solutions Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
12 - 14 yrs
₹29L - ₹38L / yr
skill iconAmazon Web Services (AWS)
DevOps
Terraform
Troubleshooting
Amazon VPC
+16 more

REVIEW CRITERIA:

MANDATORY:

  • Strong Hands-On AWS Cloud Engineering / DevOps Profile
  • Mandatory (Experience 1): Must have 12+ years of experience in AWS Cloud Engineering / Cloud Operations / Application Support
  • Mandatory (Experience 2): Must have strong hands-on experience supporting AWS production environments (EC2, VPC, IAM, S3, ALB, CloudWatch)
  • Mandatory (Infrastructure as Code): Must have hands-on Infrastructure as Code experience using Terraform in production environments
  • Mandatory (AWS Networking): Strong understanding of AWS networking and connectivity (VPC design, routing, NAT, load balancers, hybrid connectivity basics)
  • Mandatory (Cost Optimization): Exposure to cost optimization and usage tracking in AWS environments
  • Mandatory (Core Skills): Experience handling monitoring, alerts, incident management, and root cause analysis
  • Mandatory (Soft Skills): Strong communication skills and stakeholder coordination skills


ROLE & RESPONSIBILITIES:

We are looking for a hands-on AWS Cloud Engineer to support day-to-day cloud operations, automation, and reliability of AWS environments. This role works closely with the Cloud Operations Lead, DevOps, Security, and Application teams to ensure stable, secure, and cost-effective cloud platforms.


KEY RESPONSIBILITIES:

  • Operate and support AWS production environments across multiple accounts
  • Manage infrastructure using Terraform and support CI/CD pipelines
  • Support Amazon EKS clusters, upgrades, scaling, and troubleshooting
  • Build and manage Docker images and push to Amazon ECR
  • Monitor systems using CloudWatch and third-party tools; respond to incidents
  • Support AWS networking (VPCs, NAT, Transit Gateway, VPN/DX)
  • Assist with cost optimization, tagging, and governance standards
  • Automate operational tasks using Python, Lambda, and Systems Manager
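The tagging and governance responsibility above often comes down to a small automation script that audits resources against a required-tag policy. The sketch below is stdlib-only; the tag set and the inventory shape are hypothetical assumptions (a real version would feed it output from the AWS APIs, e.g. via boto3):

```python
REQUIRED_TAGS = {"owner", "cost-center", "environment"}  # hypothetical tag policy

def untagged_resources(resources):
    """Return (id, missing-tags) pairs for resources violating the tag policy.

    `resources` is a list of dicts shaped like a cloud inventory export,
    e.g. {"id": "i-0abc", "tags": {"owner": "data-eng"}}.
    """
    violations = []
    for res in resources:
        missing = REQUIRED_TAGS - set(res.get("tags", {}))
        if missing:
            violations.append((res["id"], sorted(missing)))
    return violations

# Made-up inventory: one compliant instance, one missing two required tags.
inventory = [
    {"id": "i-0abc", "tags": {"owner": "data-eng", "cost-center": "42", "environment": "prod"}},
    {"id": "i-0def", "tags": {"owner": "web"}},
]
```

A report like this is what typically drives cost-allocation dashboards and automated remediation (e.g. a Lambda that flags or stops untagged instances).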


IDEAL CANDIDATE:

  • Strong hands-on AWS experience (EC2, VPC, IAM, S3, ALB, CloudWatch)
  • Experience with Terraform and Git-based workflows
  • Hands-on experience with Kubernetes / EKS
  • Experience with CI/CD tools (GitHub Actions, Jenkins, etc.)
  • Scripting experience in Python or Bash
  • Understanding of monitoring, incident management, and cloud security basics


NICE TO HAVE:

  • AWS Associate-level certifications
  • Experience with Karpenter, Prometheus, New Relic
  • Exposure to FinOps and cost optimization practices
Read more
Watsoo Express
Gurugram
8 - 11 yrs
₹18L - ₹25L / yr
skill iconDocker
skill iconKubernetes
helm
cicd
skill iconGitHub
+12 more

Profile: DevOps Lead

Location: Gurugram

Experience: 08+ Years

Notice Period: Immediate to 1 week

Company: Watsoo

Required Skills & Qualifications

  • Bachelor’s degree in Computer Science, Engineering, or related field.
  • 5+ years of proven hands-on DevOps experience.
  • Strong experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
  • Expertise in containerization & orchestration (Docker, Kubernetes, Helm).
  • Hands-on experience with cloud platforms (AWS, Azure, or GCP).
  • Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
  • Experience with monitoring and logging solutions (Prometheus, Grafana, ELK, CloudWatch, etc.).
  • Proficiency in scripting languages (Python, Bash, or Shell).
  • Knowledge of networking, security, and system administration.
  • Strong problem-solving skills and ability to work in fast-paced environments.
  • Troubleshoot production issues, perform root cause analysis, and implement preventive measures.

Advocate DevOps best practices, automation, and continuous improvement

Read more
Ride-hailing Industry

Ride-hailing Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 7 yrs
₹42L - ₹45L / yr
DevOps
skill iconPython
Shell Scripting
Infrastructure
Terraform
+16 more

JOB DETAILS:

- Job Title: Senior DevOps Engineer 2

- Industry: Ride-hailing

- Experience: 5-7 years

- Working Days: 5 days/week

- Work Mode: ONSITE

- Job Location: Bangalore

- CTC Range: Best in Industry


Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)

 

Criteria:

1.   Candidate must be from a product-based or scalable app-based start-up company with experience handling large-scale production traffic.

2.   Minimum 5 yrs of experience working as a DevOps/Infrastructure Consultant

3.   Own end-to-end infrastructure right from non-prod to prod environments, including self-managed DBs

4.   Candidate must have experience in database migration from scratch 

5.   Must have a firm hold on the container orchestration tool Kubernetes

6.   Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

7.   Understanding of programming languages like Go, Python, and Java

8.   Working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

9.   Working experience on Cloud platform - AWS

10. Candidate should have a minimum of 1.5 years' stability per organization, and a clear reason for relocation.

 

Description 

Job Summary:

As a DevOps Engineer at the company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.

 

Job Responsibilities:

● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs

● Codify our infrastructure

● Do what it takes to keep the uptime above 99.99%

● Understand the bigger picture and sail through the ambiguities

● Scale technology considering cost and observability and manage end-to-end processes

● Understand DevOps philosophy and evangelize the principles across the organization

● Strong communication and collaboration skills to break down the silos

 

Job Requirements:

● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience

● Minimum 5 yrs of experience working as a DevOps/Infrastructure Consultant

● Must have a firm hold on the container orchestration tool Kubernetes

● Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

● Strong problem-solving skills, and ability to write scripts using any scripting language

● Understanding of programming languages like Go, Python, and Java

● Comfortable working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.
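The "ability to write scripts using any scripting language" requirement above usually means quick operational tooling, such as summarizing an access log during an incident. The sketch below is stdlib-only; the log lines and field positions assume the common log format with a timezone token, which is an assumption a real parser would need to verify:

```python
from collections import Counter

def status_counts(log_lines):
    """Tally HTTP status classes (2xx/4xx/5xx...) from access-log lines.

    Assumes common-log-format ordering, where the status code is the 9th
    whitespace-separated field; real logs may need a stricter parser.
    """
    classes = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) > 8 and fields[8].isdigit():
            classes[fields[8][0] + "xx"] += 1
    return classes

# Hypothetical access-log lines for illustration.
sample = [
    '10.0.0.1 - - [01/Jan/2026:00:00:01 +0530] "GET /rides HTTP/1.1" 200 512',
    '10.0.0.2 - - [01/Jan/2026:00:00:02 +0530] "POST /book HTTP/1.1" 503 87',
]
```

During an outage, a count like this (one 2xx vs. one 5xx here) is often the fastest way to confirm whether error rates are actually elevated before digging into traces.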

 

What’s there for you?

Company’s team handles everything – infra, tooling, and self-manages a bunch of databases, such as

● 150+ microservices with event-driven architecture across different tech stacks (Golang/Java/Node)

● More than 100,000 requests per second on our edge gateways

● ~20,000 events per second on self-managed Kafka

● 100s of TB of data on self-managed databases

● 100s of real-time continuous deployments to production

● Self-managed infra supporting

● 100% OSS

Read more
Ride-hailing Industry

Ride-hailing Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 9 yrs
₹47L - ₹50L / yr
DevOps
skill iconPython
Shell Scripting
skill iconKubernetes
Terraform
+15 more

JOB DETAILS:

- Job Title: Lead DevOps Engineer

- Industry: Ride-hailing

- Experience: 6-9 years

- Working Days: 5 days/week

- Work Mode: ONSITE

- Job Location: Bangalore

- CTC Range: Best in Industry


Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)

 

Criteria:

1.   Candidate must be from a product-based or scalable app-based start-ups company with experience handling large-scale production traffic.

2.   Minimum 6 yrs of experience working as a DevOps/Infrastructure Consultant

3.   Candidate must have 2 years of experience as a lead (handling a team of at least 3 to 4 members)

4.   Own end-to-end infrastructure right from non-prod to prod environments, including self-managed DBs

5.   Candidate must have first-hand experience in database migration from scratch

6.   Must have a firm hold on the container orchestration tool Kubernetes

7.   Should have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

8.   Understanding of programming languages like Go, Python, and Java

9.   Working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

10.   Working experience on Cloud platform -AWS

11. Candidate should have Minimum 1.5 years stability per organization, and a clear reason for relocation.

 

Description

Job Summary:

As a DevOps Engineer at the company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.

 

Job Responsibilities:

● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs

● Codify our infrastructure

● Do what it takes to keep the uptime above 99.99%

● Understand the bigger picture and sail through the ambiguities

● Scale technology considering cost and observability and manage end-to-end processes

● Understand DevOps philosophy and evangelize the principles across the organization

● Strong communication and collaboration skills to break down the silos

 

Job Requirements:

● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience

● Minimum 6 yrs of experience working as a DevOps/Infrastructure Consultant

● Must have a firm hold on the container orchestration tool Kubernetes

● Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

● Strong problem-solving skills, and ability to write scripts using any scripting language

● Understanding of programming languages like Go, Python, and Java

● Comfortable working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

 

What’s there for you?

Company’s team handles everything – infra, tooling, and self-manages a bunch of databases, such as

● 150+ microservices with event-driven architecture across different tech stacks (Golang/Java/Node)

● More than 100,000 requests per second on our edge gateways

● ~20,000 events per second on self-managed Kafka

● 100s of TB of data on self-managed databases

● 100s of real-time continuous deployments to production

● Self-managed infra supporting

● 100% OSS

Read more
Ride-hailing Industry

Ride-hailing Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
4 - 6 yrs
₹34L - ₹37L / yr
DevOps
skill iconPython
Shell Scripting
skill iconKubernetes
Monitoring
+18 more

JOB DETAILS:

- Job Title: Senior DevOps Engineer 1

- Industry: Ride-hailing

- Experience: 4-6 years

- Working Days: 5 days/week

- Work Mode: ONSITE

- Job Location: Bangalore

- CTC Range: Best in Industry


Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)

 

Criteria:

1. Candidate must be from a product-based or scalable app-based start-up company with experience handling large-scale production traffic.

2. Candidate must have strong Linux expertise with hands-on production troubleshooting and working knowledge of databases and middleware (Mongo, Redis, Cassandra, Elasticsearch, Kafka).

3. Candidate must have solid experience with Kubernetes.

4. Candidate should have strong knowledge of configuration management tools like Ansible, Terraform, and Chef / Puppet. Add-ons: Prometheus, Grafana, etc.

5. Candidate must be an individual contributor with strong ownership.

6. Candidate must have hands-on experience with DATABASE MIGRATIONS and observability tools such as Prometheus and Grafana.

7. Candidate must have working knowledge of Go/Python and Java.

8. Candidate should have working experience on Cloud platform - AWS

9. Candidate should have Minimum 1.5 years stability per organization, and a clear reason for relocation.

 

Description 

Job Summary:

As a DevOps Engineer at the company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.

 

Job Responsibilities:

- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs.

- Understanding the needs of stakeholders and conveying this to developers.

- Working on ways to automate and improve development and release processes.

- Identifying technical problems and developing software updates and ‘fixes’.

- Working with software developers to ensure that development follows established processes and works as intended.

- Do what it takes to keep the uptime above 99.99%.

- Understand DevOps philosophy and evangelize the principles across the organization.

- Strong communication and collaboration skills to break down the silos

 

Job Requirements:

- B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience.

- Minimum 4 yrs of experience working as a DevOps/Infrastructure Consultant.

- Strong background in operating systems like Linux.

- Understands the container orchestration tool Kubernetes.

- Proficient knowledge of configuration management tools like Ansible, Terraform, and Chef / Puppet. Add-ons: Prometheus, Grafana, etc.

- Problem-solving attitude, and ability to write scripts using any scripting language.

- Understanding of programming languages like Go, Python, and Java.

- Basic understanding of databases and middlewares like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

- Should be able to take ownership of tasks, and must be responsible.

- Good communication skills

 

Read more
MyOperator - VoiceTree Technologies

at MyOperator - VoiceTree Technologies

1 video
2 recruiters
Vijay Muthu
Posted by Vijay Muthu
Remote only
3 - 5 yrs
₹8L - ₹12L / yr
skill iconKubernetes
skill iconAmazon Web Services (AWS)
Amazon EC2
AWS RDS
AWS opensearch
+22 more

About MyOperator

MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.


Job Summary

We are looking for a skilled and motivated DevOps Engineer with 3+ years of hands-on experience in AWS cloud infrastructure, CI/CD automation, and Kubernetes-based deployments. The ideal candidate will have strong expertise in Infrastructure as Code, containerization, monitoring, and automation, and will play a key role in ensuring high availability, scalability, and security of production systems.


Key Responsibilities

  • Design, deploy, manage, and maintain AWS cloud infrastructure, including EC2, RDS, OpenSearch, VPC, S3, ALB, API Gateway, Lambda, SNS, and SQS.
  • Build, manage, and operate Kubernetes (EKS) clusters and containerized workloads.
  • Containerize applications using Docker and manage deployments with Helm charts
  • Develop and maintain CI/CD pipelines using Jenkins for automated build and deployment processes
  • Provision and manage infrastructure using Terraform (Infrastructure as Code)
  • Implement and manage monitoring, logging, and alerting solutions using Prometheus and Grafana
  • Write and maintain Python scripts for automation, monitoring, and operational tasks
  • Ensure high availability, scalability, performance, and cost optimization of cloud resources
  • Implement and follow security best practices across AWS and Kubernetes environments
  • Troubleshoot production issues, perform root cause analysis, and support incident resolution
  • Collaborate closely with development and QA teams to streamline deployment and release processes
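The monitoring and Python-scripting responsibilities above often meet in small alert-evaluation helpers. The sketch below shows the idea behind requiring several breaching samples before firing (the same intent as duration-based Prometheus alerting rules); the thresholds and CPU numbers are made up for illustration:

```python
def evaluate_alert(samples, threshold, min_breaches):
    """Fire only if `threshold` is breached in at least `min_breaches` samples.

    Requiring repeated breaches before alerting filters out one-off spikes,
    similar in spirit to the duration ("for:") clause in Prometheus rules.
    """
    breaches = sum(1 for value in samples if value > threshold)
    return breaches >= min_breaches

# CPU utilisation samples scraped once a minute (hypothetical values).
cpu = [62.0, 91.5, 93.2, 95.0, 88.7]
```

With these numbers, `evaluate_alert(cpu, threshold=90.0, min_breaches=3)` fires (three samples exceed 90), while `min_breaches=4` does not, so a single noisy scrape never pages anyone.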

Required Skills & Qualifications

  • 3+ years of hands-on experience as a DevOps Engineer or Cloud Engineer.
  • Strong experience with AWS services, including:
  • EC2, RDS, OpenSearch, VPC, S3
  • Application Load Balancer (ALB), API Gateway, Lambda
  • SNS and SQS.
  • Hands-on experience with AWS EKS (Kubernetes)
  • Strong knowledge of Docker and Helm charts
  • Experience with Terraform for infrastructure provisioning and management
  • Solid experience building and managing CI/CD pipelines using Jenkins
  • Practical experience with Prometheus and Grafana for monitoring and alerting
  • Proficiency in Python scripting for automation and operational tasks
  • Good understanding of Linux systems, networking concepts, and cloud security
  • Strong problem-solving and troubleshooting skills

Good to Have (Preferred Skills)

  • Exposure to GitOps practices
  • Experience managing multi-environment setups (Dev, QA, UAT, Production)
  • Knowledge of cloud cost optimization techniques
  • Understanding of Kubernetes security best practices
  • Experience with log aggregation tools (e.g., ELK/OpenSearch stack)

Language Preference

  • Fluency in English is mandatory.
  • Fluency in Hindi is preferred.
Read more
Deqode

at Deqode

1 recruiter
Samiksha Agrawal
Posted by Samiksha Agrawal
Mumbai
3 - 6 yrs
₹5L - ₹15L / yr
DevOps
Google Cloud Platform (GCP)
Terraform
skill iconJenkins
CI/CD
+2 more

Role: Senior Platform Engineer (GCP Cloud)

Experience Level: 3 to 6 Years

Work location: Mumbai

Mode: Hybrid


Role & Responsibilities:

  • Build automation software for cloud platforms and applications
  • Drive Infrastructure as Code (IaC) adoption
  • Design self-service, self-healing monitoring and alerting tools
  • Automate CI/CD pipelines (Git, Jenkins, SonarQube, Docker)
  • Build Kubernetes container platforms
  • Introduce new cloud technologies for business innovation
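To make the "self-service, self-healing monitoring" responsibility concrete, here is a minimal, hypothetical sketch of the decision rule such a tool might encode. The thresholds, field names, and action labels are assumptions; a real controller would feed these decisions into an orchestrator (for example, restarting a Kubernetes pod) rather than just returning a string:

```python
from dataclasses import dataclass

@dataclass
class HealthSample:
    service: str
    error_rate: float              # fraction of failed requests, 0.0-1.0
    consecutive_failed_probes: int

def decide_action(sample: HealthSample,
                  error_threshold: float = 0.05,
                  probe_threshold: int = 3) -> str:
    """Return a remediation action for one health sample."""
    if sample.consecutive_failed_probes >= probe_threshold:
        return "restart"   # hard failure: liveness probes keep failing
    if sample.error_rate >= error_threshold:
        return "alert"     # degraded: page a human, don't restart yet
    return "ok"

print(decide_action(HealthSample("api", 0.01, 4)))  # → restart
```

Keeping the rule pure (inputs in, decision out) makes it trivial to unit-test, which is usually the design choice that separates a maintainable self-healing tool from a pile of cron scripts.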

Requirements:

  • Hands-on experience with GCP Cloud
  • Knowledge of cloud services (compute, storage, network, messaging)
  • IaC tools experience (Terraform/CloudFormation)
  • SQL & NoSQL databases (Postgres, Cassandra)
  • Automation tools (Puppet/Chef/Ansible)
  • Strong Linux administration skills
  • Programming: Bash/Python/Java/Scala
  • CI/CD pipeline expertise (Jenkins, Git, Maven)
  • Multi-region deployment experience
  • Agile/Scrum/DevOps methodology


E-Commerce Industry

E-Commerce Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹30L - ₹50L / yr
Security Information and Event Management (SIEM)
Information security governance
ISO/IEC 27001:2005
Systems Development Life Cycle (SDLC)
Software Development
+67 more

SENIOR INFORMATION SECURITY ENGINEER (DEVSECOPS)

Key Skills: Software Development Life Cycle (SDLC), CI/CD

About Company: Consumer Internet / E-Commerce

Company Size: Mid-Sized

Experience Required: 6 - 10 years

Working Days: 5 days/week

Office Location: Bengaluru [Karnataka]


Review Criteria:

Mandatory:

  • Strong DevSecOps profile
  • Must have 5+ years of hands-on experience in Information Security, with a primary focus on cloud security across AWS, Azure, and GCP environments.
  • Must have strong practical experience working with Cloud Security Posture Management (CSPM) tools such as Prisma Cloud, Wiz, or Orca along with SIEM / IDS / IPS platforms
  • Must have proven experience in securing Kubernetes and containerized environments, including image security, runtime protection, RBAC, and network policies.
  • Must have hands-on experience integrating security within CI/CD pipelines using tools such as Snyk, GitHub Advanced Security, or equivalent security scanning solutions.
  • Must have a solid understanding of core security domains, including network security, encryption, identity and access management, key management, and security governance, as well as cloud-native security services such as GuardDuty and Azure Security Center.
  • Must have practical experience with Application Security Testing tools including SAST, DAST, and SCA in real production environments
  • Must have hands-on experience with security monitoring, incident response, alert investigation, root-cause analysis (RCA), and managing VAPT / penetration testing activities
  • Must have experience securing infrastructure-as-code and cloud deployments using Terraform, CloudFormation, ARM, Docker, and Kubernetes
  • Must have experience working in B2B SaaS product companies
  • Must have working knowledge of globally recognized security frameworks and standards such as ISO 27001, NIST, and CIS with exposure to SOC2, GDPR, or HIPAA compliance environments


Preferred:

  • Experience with DevSecOps automation, security-as-code, and policy-as-code implementations
  • Exposure to threat intelligence platforms, cloud security monitoring, and proactive threat detection methodologies, including EDR / DLP or vulnerability management tools
  • Must demonstrate strong ownership mindset, proactive security-first thinking, and ability to communicate risks in clear business language
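The policy-as-code item above can be sketched as a small check over parsed infrastructure data. The structure below only loosely mimics entries from a Terraform plan; the keys (`address`, `values`, `cidr_blocks`) are illustrative, not the exact plan JSON schema, so treat this as a sketch of the technique rather than a drop-in scanner:

```python
def find_open_ingress(resources: list[dict]) -> list[str]:
    """Flag security-group rules that allow inbound traffic from 0.0.0.0/0."""
    findings = []
    for res in resources:
        if res.get("type") != "aws_security_group_rule":
            continue
        values = res.get("values", {})
        if values.get("type") == "ingress" and "0.0.0.0/0" in values.get("cidr_blocks", []):
            findings.append(res["address"])
    return findings

# Hypothetical parsed-plan fragment:
plan = [
    {"address": "aws_security_group_rule.ssh",
     "type": "aws_security_group_rule",
     "values": {"type": "ingress", "from_port": 22, "cidr_blocks": ["0.0.0.0/0"]}},
    {"address": "aws_security_group_rule.internal",
     "type": "aws_security_group_rule",
     "values": {"type": "ingress", "from_port": 443, "cidr_blocks": ["10.0.0.0/8"]}},
]
print(find_open_ingress(plan))  # → ['aws_security_group_rule.ssh']
```

Running checks like this in the pipeline, before `terraform apply`, is what "shift-left" means in practice: the misconfiguration is rejected before it ever exists in the cloud.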


Roles & Responsibilities:

We are looking for a Senior Information Security Engineer who can help protect our cloud infrastructure, applications, and data while enabling teams to move fast and build securely.


This role sits deep within our engineering ecosystem. You’ll embed security into how we design, build, deploy, and operate systems—working closely with Cloud, Platform, and Application Engineering teams. You’ll balance proactive security design with hands-on incident response, and help shape a strong, security-first culture across the organization.


If you enjoy solving real-world security problems, working close to systems and code, and influencing how teams build securely at scale, this role is for you.


What You’ll Do-

Cloud & Infrastructure Security:

  • Design, implement, and operate cloud-native security controls across AWS, Azure, GCP, and Oracle.
  • Strengthen IAM, network security, and cloud posture using services like GuardDuty, Azure Security Center and others.
  • Partner with platform teams to secure VPCs, security groups, and cloud access patterns.


Application & DevSecOps Security:

  • Embed security into the SDLC through threat modeling, secure code reviews, and security-by-design practices.
  • Integrate SAST, DAST, and SCA tools into CI/CD pipelines.
  • Secure infrastructure-as-code and containerized workloads using Terraform, CloudFormation, ARM, Docker, and Kubernetes.
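As a sketch of what "integrating SAST, DAST, and SCA into CI/CD" often boils down to, the function below gates a build on the severity of scanner findings. The report format is a simplified stand-in, since every real tool emits its own schema; the idea, not the field names, is the point:

```python
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def should_fail_build(findings: list[dict], fail_at: str = "high") -> bool:
    """Fail the CI stage if any finding is at or above the threshold severity."""
    threshold = SEVERITY_RANK[fail_at]
    return any(SEVERITY_RANK.get(f.get("severity", "low"), 1) >= threshold
               for f in findings)

# Hypothetical parsed scanner report:
report = [
    {"id": "CVE-2021-0001", "severity": "medium"},
    {"id": "CVE-2021-0002", "severity": "critical"},
]
print(should_fail_build(report))  # → True
```

The exit status of such a gate is what the pipeline (Jenkins, GitHub Actions, or similar) reacts to; tuning `fail_at` per environment lets teams block criticals everywhere while only warning on mediums in dev.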


Security Monitoring & Incident Response:

  • Monitor security alerts and investigate potential threats across cloud and application layers.
  • Lead or support incident response efforts, root-cause analysis, and corrective actions.
  • Plan and execute VAPT and penetration testing engagements (internal and external), track remediation, and validate fixes.
  • Conduct red teaming activities and tabletop exercises to test detection, response readiness, and cross-team coordination.
  • Continuously improve detection, response, and testing maturity.


Security Tools & Platforms:

  • Manage and optimize security tooling including firewalls, SIEM, EDR, DLP, IDS/IPS, CSPM, and vulnerability management platforms.
  • Ensure tools are well-integrated, actionable, and aligned with operational needs.


Compliance, Governance & Awareness:

  • Support compliance with industry standards and frameworks such as SOC2, HIPAA, ISO 27001, NIST, CIS, and GDPR.
  • Promote secure engineering practices through training, documentation, and ongoing awareness programs.
  • Act as a trusted security advisor to engineering and product teams.


Continuous Improvement:

  • Stay ahead of emerging threats, cloud vulnerabilities, and evolving security best practices.
  • Continuously raise the bar on the company's security posture through automation and process improvement.


Endpoint Security (Secondary Scope):

  • Provide guidance on endpoint security tooling such as SentinelOne and Microsoft Defender when required.


Ideal Candidate:

  • Strong hands-on experience in cloud security across AWS and Azure.
  • Practical exposure to CSPM tools (e.g., Prisma Cloud, Wiz, Orca) and SIEM / IDS / IPS platforms.
  • Experience securing containerized and Kubernetes-based environments.
  • Familiarity with CI/CD security integrations (e.g., Snyk, GitHub Advanced Security, or similar).
  • Solid understanding of network security, encryption, identity, and access management.
  • Experience with application security testing tools (SAST, DAST, SCA).
  • Working knowledge of security frameworks and standards such as ISO 27001, NIST, and CIS.
  • Strong analytical, troubleshooting, and problem-solving skills.


Nice to Have:

  • Experience with DevSecOps automation and security-as-code practices.
  • Exposure to threat intelligence and cloud security monitoring solutions.
  • Familiarity with incident response frameworks and forensic analysis.
  • Security certifications such as CISSP, CISM, CCSP, or CompTIA Security+.


Perks, Benefits and Work Culture:

A wholesome opportunity in a fast-paced environment that will let you juggle multiple concepts while maintaining quality, interact and share your ideas, and learn a great deal on the job. Work with a team of highly talented young professionals and enjoy the comprehensive benefits the company offers.

Procedure

at Procedure

4 candid answers
3 recruiters
Adithya K
Posted by Adithya K
Remote only
5 - 10 yrs
₹40L - ₹60L / yr
Software Development
Amazon Web Services (AWS)
Python
TypeScript
PostgreSQL
+3 more

Procedure is hiring for Drover.


This is not a DevOps/SRE/cloud-migration role — this is a hands-on backend engineering and architecture role where you build the platform powering our hardware at scale.


About Drover

Ranching is getting harder. Rising labor costs and a volatile climate are putting mounting pressure on ranchers to provide for a growing population. Drover is empowering ranchers to efficiently and sustainably feed the world by making it cheaper and easier to manage livestock, unlock productivity gains, and reduce carbon footprint with rotational grazing. Not only is this a $46B opportunity; you'll also be working on a climate solution with the potential for real, meaningful impact.


We use patent-pending low-voltage electrical muscle stimulation (EMS) to steer and contain cows, replacing the need for physical fences or electric shock. We are building something that has never been done before, and we have hundreds of ranches on our waitlist.


Drover is founded by Callum Taylor (ex-Harvard), who comes from 5 generations of ranching, and Samuel Aubin, both of whom grew up in Australian ranching towns and have an intricate understanding of the problem space. We are well-funded and supported by Workshop Ventures, a VC firm with experience in building unicorn IoT companies.


We're looking to assemble a team of exceptional talent with a high eagerness to dive headfirst into understanding the challenges and opportunities within ranching.


About The Role

As our founding cloud engineer, you will be responsible for building and scaling the infrastructure that powers our IoT platform, connecting thousands of devices across ranches nationwide.


Because we are an early-stage startup, you will have high levels of ownership in what you build. You will play a pivotal part in architecting our cloud infrastructure, building robust APIs, and ensuring our systems can scale reliably. We are looking for someone who is excited about solving complex technical challenges at the intersection of IoT, agriculture, and cloud computing.


What You'll Do

  • Develop Drover’s IoT cloud architecture from the ground up (it’s a greenfield project)
  • Design and implement services to support wearable devices, the mobile app, and the backend API
  • Implement data processing and storage pipelines
  • Create and maintain Infrastructure-as-Code
  • Support the engineering team across all aspects of early-stage development -- after all, this is a startup


Requirements

  • 5+ years of experience developing cloud architecture on AWS
  • In-depth understanding of various AWS services, especially those related to IoT
  • Expertise in cloud-hosted, event-driven, serverless architectures
  • Expertise in programming languages suitable for AWS microservices (e.g., TypeScript, Python)
  • Experience with networking and socket programming
  • Experience with Kubernetes or similar orchestration platforms
  • Experience with Infrastructure-as-Code tools (e.g., Terraform, AWS CDK)
  • Familiarity with relational databases (PostgreSQL)
  • Familiarity with Continuous Integration and Continuous Deployment (CI/CD)
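A cloud-hosted, event-driven, serverless backend of the kind described above often starts with handlers like the following. This is a hypothetical sketch: the event follows the standard SQS-to-Lambda batch shape (`Records` entries with a JSON `body`), but the telemetry fields (`device_id`, `battery_pct`) are invented for illustration:

```python
import json

def handler(event: dict, context=None) -> dict:
    """Lambda-style handler for an SQS batch of device telemetry.

    A real handler would persist readings and publish alerts; here we
    only collect devices reporting low battery.
    """
    low_battery = []
    for record in event.get("Records", []):
        reading = json.loads(record["body"])
        if reading.get("battery_pct", 100) < 20:
            low_battery.append(reading["device_id"])
    return {"processed": len(event.get("Records", [])),
            "low_battery_devices": low_battery}

# Hypothetical batch as SQS would hand it to Lambda:
event = {"Records": [
    {"body": json.dumps({"device_id": "collar-7", "battery_pct": 12})},
    {"body": json.dumps({"device_id": "collar-9", "battery_pct": 88})},
]}
print(handler(event))
```

Handlers that are plain functions of an event dict stay trivially testable outside AWS, which matters most at an early-stage startup where iteration speed is the whole game.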


Nice To Have

  • Bachelor’s or Master’s degree in Computer Science, Software Engineering, Electrical Engineering, or a related field


AI-First Company

AI-First Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 17 yrs
₹30L - ₹45L / yr
Data engineering
Data architecture
SQL
Data modeling
GCS
+47 more

ROLES AND RESPONSIBILITIES:

You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.


  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
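Optimizing Dremio reflections, as in the responsibilities above, usually means defining aggregate reflections over the hot dimensions and measures of a dataset. The helper below composes such a DDL statement as a string; the grammar sketched here (`ALTER DATASET ... CREATE AGGREGATE REFLECTION ... USING DIMENSIONS/MEASURES`) follows Dremio's documented SQL as best understood, so verify the exact syntax against your Dremio version before running it:

```python
def aggregate_reflection_ddl(dataset: str, name: str,
                             dimensions: list[str],
                             measures: dict[str, list[str]]) -> str:
    """Compose a Dremio-style aggregate reflection DDL statement.

    `measures` maps a column name to the aggregate functions to
    precompute for it (e.g. SUM, COUNT).
    """
    measure_sql = ", ".join(f"{col} ({', '.join(fns)})"
                            for col, fns in measures.items())
    return (f"ALTER DATASET {dataset} "
            f"CREATE AGGREGATE REFLECTION {name} "
            f"USING DIMENSIONS ({', '.join(dimensions)}) "
            f"MEASURES ({measure_sql})")

# Hypothetical dataset and columns:
ddl = aggregate_reflection_ddl(
    "sales.orders", "agg_orders_by_region",
    dimensions=["region", "order_date"],
    measures={"amount": ["SUM", "COUNT"]},
)
print(ddl)
```

Generating reflection DDL from code rather than hand-writing it is one way to keep reflection definitions versioned and reviewable alongside the rest of the lakehouse configuration.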


IDEAL CANDIDATE:

  • Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.


PREFERRED:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
  • Exposure to Snowflake, Databricks, or BigQuery environments.
  • Experience in high-tech, manufacturing, or enterprise data modernization programs.