Terraform Jobs in Bangalore (Bengaluru)

50+ Terraform Jobs in Bangalore (Bengaluru) | Terraform Job openings in Bangalore (Bengaluru)

Apply to 50+ Terraform Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Terraform Job opportunities across top companies like Google, Amazon & Adobe.

DeepVidya AI Private Limited (OpenCV University)
Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹10L / yr
Python
MySQL
Amazon Web Services (AWS)
Amazon EC2
Amazon S3
+6 more

About the job


Location: Bangalore, India

Job Type: Full-Time | On-Site


Job Description

We are looking for a highly skilled and motivated Python Backend Developer to join our growing team in Bangalore. The ideal candidate will have a strong background in backend development with Python, deep expertise in relational databases like MySQL, and hands-on experience with AWS cloud infrastructure.


Key Responsibilities

  • Design, develop, and maintain scalable backend systems using Python.
  • Architect and optimize relational databases (MySQL), including complex queries and indexing.
  • Manage and deploy applications on AWS cloud services (EC2, S3, RDS, DynamoDB, API Gateway, Lambda); a small illustrative sketch follows this list.
  • Automate cloud infrastructure using CloudFormation or Terraform.
  • Collaborate with cross-functional teams to define, design, and ship new features.
  • Mentor junior developers and contribute to a culture of technical excellence.
  • Proactively identify issues and provide solutions to challenging backend problems.
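
To give the Python-on-AWS work described above some shape, here is a minimal, hypothetical boto3 sketch. The bucket name, key, and region are placeholders, and it assumes boto3 is installed and AWS credentials are configured; it is an illustration, not part of the role description.

```python
# Hypothetical sketch: upload a report to S3 and return a short-lived
# download link from a Python backend. Bucket, key and region are placeholders.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", region_name="ap-south-1")

def publish_report(local_path, bucket="example-reports-bucket", key="reports/daily.csv"):
    """Upload a file and return a presigned URL valid for one hour."""
    try:
        s3.upload_file(local_path, bucket, key)
        return s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": bucket, "Key": key},
            ExpiresIn=3600,
        )
    except ClientError as err:
        print(f"S3 operation failed: {err}")
        return None

if __name__ == "__main__":
    print(publish_report("daily.csv"))
```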


Mandatory Requirements

  • Minimum 3 years of professional experience in Python backend development.
  • Expert-level knowledge in MySQL database creation, optimization, and query writing.
  • Strong experience with AWS services, particularly EC2, S3, RDS, DynamoDB, API Gateway, and Lambda.
  • Hands-on experience with infrastructure as code using CloudFormation or Terraform.
  • Proven problem-solving skills and the ability to work independently.
  • Demonstrated leadership abilities and team collaboration skills.
  • Excellent verbal and written communication.
Read more
Thinqor
Ravikanth Dangeti
Posted by Ravikanth Dangeti
Bengaluru (Bangalore)
5 - 20 yrs
₹20L - ₹22L / yr
Amazon Web Services (AWS)
EKS
Terraform
Splunk

General Description: Owns all technical aspects of software development for assigned applications.

Participates in the design and development of systems & application programs

Functions as a senior member of an agile team and helps drive consistent development practices – tools, common components, and documentation



Required Skills:

In-depth experience configuring and administering EKS clusters in AWS.

In-depth experience configuring Splunk SaaS in AWS environments, especially in EKS.

In-depth understanding of OpenTelemetry and configuration of OpenTelemetry Collectors.

In-depth knowledge of observability concepts and strong troubleshooting experience.

Experience implementing comprehensive monitoring and logging solutions in AWS using CloudWatch.

Experience with Terraform and Infrastructure as Code.

Experience with Helm.

Strong scripting skills in Shell and/or Python.

Experience with large-scale distributed systems and architecture knowledge (Linux/UNIX and Windows operating systems, networking, storage) in a cloud computing or traditional IT infrastructure environment.

Must have a good understanding of cloud concepts (storage/compute/network).

Experience collaborating with several cross-functional teams to architect observability pipelines for various GCP services like GKE, Cloud Run, BigQuery, etc.

Experience with Git and GitHub. Experience with code build and deployment using GitHub Actions and Artifact Registry.

Proficient in developing and maintaining technical documentation, ADRs, and runbooks.
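
To give the OpenTelemetry requirement above some concrete shape, here is a minimal, hypothetical Python sketch that exports traces to an OpenTelemetry Collector over OTLP/gRPC. The service name and endpoint are placeholders, and it assumes the opentelemetry-sdk and opentelemetry-exporter-otlp-proto-grpc packages are installed; it is only an illustrative sketch.

```python
# Hypothetical sketch: send traces from a Python service to an OpenTelemetry
# Collector listening on the standard OTLP/gRPC port (4317).
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Identify the service; the name is a placeholder.
resource = Resource.create({"service.name": "demo-service"})

provider = TracerProvider(resource=resource)
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True)
    )
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

# Emit one example span; in a real service this would wrap request handling.
with tracer.start_as_current_span("checkout"):
    print("doing work inside a traced span")
```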


Read more
Thinqor
Ravikanth Dangeti
Posted by Ravikanth Dangeti
Bengaluru (Bangalore)
5 - 20 yrs
₹20L - ₹22L / yr
Amazon Web Services (AWS)
EKS
Terraform
DataDog
+3 more

General Description:


Owns all technical aspects of software development for assigned applications.

Participates in the design and development of systems & application programs.

Functions as a senior member of an agile team and helps drive consistent development practices – tools, common components, and documentation.


Required Skills:


In-depth experience configuring and administering EKS clusters in AWS.

In-depth experience configuring DataDog in AWS environments, especially in EKS.

In-depth understanding of OpenTelemetry and configuration of OpenTelemetry Collectors.

In-depth knowledge of observability concepts and strong troubleshooting experience.

Experience implementing comprehensive monitoring and logging solutions in AWS using CloudWatch.

Experience with Terraform and Infrastructure as Code.

Experience with Helm.

Strong scripting skills in Shell and/or Python.

Experience with large-scale distributed systems and architecture knowledge (Linux/UNIX and Windows operating systems, networking, storage) in a cloud computing or traditional IT infrastructure environment.

Must have a good understanding of cloud concepts (storage/compute/network).

Experience collaborating with several cross-functional teams to architect observability pipelines for various GCP services like GKE, Cloud Run, BigQuery, etc.

Experience with Git and GitHub.

Proficient in developing and maintaining technical documentation, ADRs, and runbooks.


Read more
Gruve
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore), Pune
8yrs+
Upto ₹50L / yr (Varies)
DevOps
CI/CD
Git
Kubernetes
Ansible
+7 more

About the Company:

Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies by utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

 

Why Gruve:

At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.

Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.

 

Position summary:

We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance. 

Key Roles & Responsibilities:

  • Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
  • Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
  • Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation.
  • Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
  • Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
  • Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
  • Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
  • Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags (a rollout-check sketch follows this list).
  • Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability. 
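
As a small illustration of the zero-downtime deployment work above, here is a hypothetical Python sketch using the official kubernetes client to verify that a Deployment has fully rolled out before promoting a release. The deployment name and namespace are placeholders, and a valid kubeconfig is assumed; this is a sketch, not the team's actual tooling.

```python
# Hypothetical sketch: gate a blue-green or canary promotion on a completed
# Deployment rollout. Assumes the `kubernetes` package and a valid kubeconfig.
from kubernetes import client, config

def rollout_complete(name, namespace="default"):
    config.load_kube_config()  # use config.load_incluster_config() when running in a pod
    apps = client.AppsV1Api()
    dep = apps.read_namespaced_deployment(name, namespace)
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    updated = dep.status.updated_replicas or 0
    # Rollout is done when every desired replica is both updated and ready.
    return desired == ready == updated

if __name__ == "__main__":
    print("rollout complete:", rollout_complete("web-frontend"))
```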


Basic Qualifications:

  • A bachelor’s or master’s degree in computer science, electronics engineering or a related field
  • 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
  • Strong expertise in CI/CD pipelines, version control (Git), and release automation.
  •  Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
  • Proficiency in Terraform and Ansible for infrastructure automation.
  • Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
  • Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
  • Strong scripting and automation skills in Python, Bash, or Go.


Preferred Qualifications  

  • Experience in FinOps (Cloud Cost Optimization) and Kubernetes cluster scaling.
  • Exposure to serverless architectures and event-driven workflows.
  • Contributions to open-source DevOps projects. 
Read more
IT Solutions

Agency job
via HR Central by Melrose Savia Pinto
Bengaluru (Bangalore)
4 - 8 yrs
₹15L - ₹28L / yr
DevOps
Amazon Web Services (AWS)
Python
Deployment management
Jenkins
+4 more

Location: Malleshwaram/MG Road

Work: Initially Onsite and later Hybrid



We are committed to becoming a true DevOps house and want your help. The role will require close liaison with development and test teams to increase the effectiveness of current dev processes. Participation in an out-of-hours emergency support rotation will be required. You will be shaping the way that we use our DevOps tools and innovating to deliver business value and improve the cadence of the entire dev team.


Required Skills:

• Good knowledge of the Amazon Web Services suite (EC2, ECS, Load Balancing, VPC, S3, RDS, Lambda, CloudWatch, IAM, etc.)
• Hands-on knowledge of container orchestration tools – Must have: AWS ECS; Good to have: AWS EKS
• Good knowledge of creating and maintaining infrastructure as code using Terraform
• Solid experience with CI/CD tools like Jenkins, Git and Ansible
• Working experience supporting microservices (deploying, maintaining and monitoring Java web-based production applications using Docker containers)
• Strong knowledge of debugging production issues across the services and technology stack, and application monitoring (we use Splunk & CloudWatch)
• Experience with software build tools (Maven and Node)
• Experience with scripting and automation languages (Bash, Groovy, JavaScript, Python)
• Experience with Linux administration and CVE scans – Amazon Linux, Ubuntu
• 4+ years as an AWS DevOps Engineer


Optional skills:

• Oracle/SQL database maintenance experience
• Elasticsearch
• Serverless/container-based approach
• Automated testing of infrastructure deployments
• Experience of performance testing & JVM tuning
• Experience of a high-volume distributed eCommerce environment
• Experience working closely with Agile development teams
• Familiarity with load testing tools & process
• Experience with nginx, Tomcat and Apache
• Experience with Cloudflare

Personal attributes

The successful candidate will be comfortable working autonomously and independently. They will be keen to bring an entire team to the next level of delivering business value. A proactive approach to problem solving.

Read more
Gruve
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
5yrs+
Best in industry
Amazon Web Services (AWS)
Microsoft Windows Azure
Terraform
Ansible
AWS CloudFormation
+6 more

About the Company:

Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies by utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

 

Why Gruve:

At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.

Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.

 

Position summary:

Our Professional Services team seeks a Cloud Engineer with a focus on Public Clouds for professional services engagements. In this role, the candidate will ensure the success of our engagements by providing deployment, configuration, and operationalization of cloud infrastructure, as well as various other environments such as on-prem, OpenShift, and hybrid.

A successful candidate for this position requires a good understanding of the public cloud platforms (AWS, Azure) as well as a working knowledge of systems technologies, common enterprise software (Linux, Windows, Active Directory), cloud technologies (Kubernetes, VMware ESXi), and a good understanding of cloud automation (Ansible, CDK, Terraform, CloudFormation). The ideal candidate has industry experience and is confident working in a cross-functional team environment that is global in reach.

Key Roles & Responsibilities:

  • Public Cloud: Lead Public Cloud deployments for our Cloud Engineering customers, including setup, automation, configuration, documentation and troubleshooting. Red Hat OpenShift on AWS/Azure experience is preferred.
  • Automation: Develop and maintain automated testing systems to ensure uniform and reproducible deployments for common infrastructure elements using tools like Ansible, Terraform, and CDK.
  • Support: In this role the candidate may need to support the environment as part of the engagement through hand-off. Requisite knowledge of operations will be required.
  • Documentation: The role can require significant documentation of the deployment and the steps to maintain the system. The Cloud Engineer will also be responsible for all documentation required for customer hand-off.
  • Customer Skills: This position is customer facing and effective communication and customer service is essential. 

 

Basic Qualifications:

  • Bachelor's or Master's degree in computer programming or quality assurance.
  • 5-8 years as an IT Engineer or DevOps Engineer with automation skills and AWS or Azure experience, preferably in Professional Services.
  • Proficiency in enterprise tools (Grafana, Splunk etc.), software (Windows, Linux, Databases, Active Directory, VMware ESXi, Kubernetes) and techniques (Knowledge of Best Practices).
  • Demonstrable proficiency with automation packages (Ansible, Git, CDK, Terraform, CloudFormation, Python)


Preferred Qualifications  

  • Exceptional communication and interpersonal skills.
  • Strong ownership abilities, attention to detail.
Read more
OnActive
Mansi Gupta
Posted by Mansi Gupta
Gurugram, Pune, Bengaluru (Bangalore), Chennai, Bhopal, Hyderabad, Jaipur
5 - 8 yrs
₹6L - ₹12L / yr
Python
Spark
SQL
AWS CloudFormation
Machine Learning (ML)
+3 more

Level of skills and experience:


5 years of hands-on experience using Python, Spark, and SQL.

Experienced in AWS Cloud usage and management.

Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).

Experience using various ML models and frameworks such as XGBoost, Lightgbm, Torch.

Experience with orchestrators such as Airflow and Kubeflow.

Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).

Fundamental understanding of Parquet, Delta Lake and other data file formats.

Proficiency on an IaC tool such as Terraform, CDK or CloudFormation.

Strong written and verbal English communication skills and proficiency in communicating with non-technical stakeholders.

Read more
Gipfel & Schnell Consultings Pvt Ltd
Bengaluru (Bangalore)
5 - 12 yrs
Best in industry
DevOps
Azure
Terraform
Powershell
Apache Kafka
+1 more

Mandatory Skills:


  • AZ-104 (Azure Administrator) experience
  • CI/CD migration expertise
  • Proficiency in Windows deployment and support
  • Infrastructure as Code (IaC) in Terraform
  • Automation using PowerShell
  • Understanding of SDLC for C# applications (build/ship/run strategy)
  • Apache Kafka experience
  • Azure web app


Good to Have Skills:


  • AZ-400 (Azure DevOps Engineer Expert)
  • AZ-700 Designing and Implementing Microsoft Azure Networking Solutions
  • Apache Pulsar
  • Windows containers
  • Active Directory and DNS
  • SAST and DAST tool understanding
  • MSSQL database
  • Postgres database
  • Azure security
Read more
VyTCDC
Gobinath Sundaram
Posted by Gobinath Sundaram
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Faridabad, Hyderabad, Pune, Chennai
3 - 12 yrs
₹3L - ₹26L / yr
Infrastructure
Azure infrastructure
IaaS
Platform as a Service (PaaS)
Terraform

Key Responsibilities:

  • Azure Cloud Infrastructure Management:
  • Design, deploy, and manage IaaS, PaaS, and SaaS components on Microsoft Azure.
  • Implement and maintain virtual networks, storage accounts, virtual machines (VMs), and other cloud resources.
  • Automate and optimize provisioning of Azure resources using Azure CLI, PowerShell, and ARM templates.
  • Terraform and Infrastructure-as-Code (IaC):
  • Design and implement Infrastructure as Code (IaC) solutions using Terraform to automate resource provisioning.
  • Write reusable and maintainable Terraform modules to manage Azure infrastructure.
  • Use Terraform for continuous integration and deployment (CI/CD) pipelines for infrastructure management (see the sketch after this list).
  • Cloud Security and Governance:
  • Implement best practices for cloud security in Azure, including role-based access control (RBAC), encryption, and network security.
  • Ensure that infrastructure adheres to company policies and industry standards for compliance.
  • Automation and Scripting:
  • Automate routine tasks to enhance operational efficiency.
  • Write and maintain scripts for deployment, monitoring, and management of Azure resources.
  • Collaboration and Documentation:
  • Work with cross-functional teams to identify and implement cloud solutions.
  • Provide technical guidance and training on Azure best practices.
  • Document cloud infrastructure, processes, and troubleshooting guides.
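
As an illustration of wiring Terraform into a CI/CD step (referenced in the list above), here is a minimal, hypothetical Python sketch that shells out to the Terraform CLI. The working directory name is a placeholder and the terraform binary is assumed to be on PATH; it is a sketch of the general workflow, not this role's actual pipeline.

```python
# Hypothetical sketch: run a Terraform init/validate/plan/apply sequence from a
# CI/CD job. The `infra/azure` directory is a placeholder for the .tf configuration.
import subprocess

def run(args, cwd):
    print("+", " ".join(args))
    subprocess.run(args, cwd=cwd, check=True)  # fail the pipeline step on error

def plan_and_apply(workdir="infra/azure", auto_approve=False):
    run(["terraform", "init", "-input=false"], workdir)
    run(["terraform", "fmt", "-check"], workdir)
    run(["terraform", "validate"], workdir)
    run(["terraform", "plan", "-input=false", "-out=tfplan"], workdir)
    if auto_approve:
        # Applying a saved plan file needs no interactive approval.
        run(["terraform", "apply", "-input=false", "tfplan"], workdir)

if __name__ == "__main__":
    plan_and_apply(auto_approve=False)
```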

Required Skills and Qualifications:

  • Proven experience with Microsoft Azure cloud services and infrastructure management.
  • Strong hands-on experience with IaaS (Virtual Machines, Networking, Storage) and PaaS (App Services, SQL Databases, Azure Functions).
  • Expertise in Terraform for building and maintaining cloud infrastructure.
  • Proficiency in Azure Resource Manager (ARM) templates, Azure CLI, and PowerShell.
  • Understanding of cloud security principles and tools in Azure, including encryption, firewalls, and RBAC.
  • Experience with CI/CD pipeline tools (e.g., Jenkins, Azure DevOps) for automated deployments.
  • Familiarity with Azure Active Directory and role-based access control (RBAC).
  • Strong knowledge of networking, including VNETs, VPNs, subnets, and load balancers.
  • Excellent problem-solving skills, with the ability to troubleshoot and resolve complex issues.
  • Good understanding of monitoring and logging tools in Azure (Azure Monitor, Log Analytics).
  • Strong written and verbal communication skills for documentation and teamwork.

Preferred Qualifications:

  • Azure certifications such as Azure Solutions Architect Expert or Azure Administrator Associate.
  • Familiarity with other IaC tools like Ansible or Chef.
  • Experience with containerization technologies (e.g., Docker, Kubernetes).
  • Knowledge of serverless architectures and Azure Functions.
  • Experience working in agile environments.

Why [Company Name]?

  • Competitive salary and benefits package.
  • Opportunities for professional growth and career advancement.
  • Dynamic, collaborative work environment.
  • Flexible work hours and remote work options.
  • Exposure to cutting-edge cloud technologies.


Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Bengaluru (Bangalore)
3 - 8 yrs
₹15L - ₹30L / yr
Product engineering
Ruby on Rails (ROR)
PostgreSQL
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
+2 more

Position Name : Product Engineer (Backend Heavy)

Experience : 3 to 5 Years

Location : Bengaluru (Work From Office, 5 Days a Week)

Positions : 2

Notice Period : Immediate joiners or candidates serving notice (within 30 days)


Role Overview :

We’re looking for Product Engineers who are passionate about building scalable backend systems in the FinTech & payments domain. If you enjoy working on complex challenges, contributing to open-source projects, and driving impactful innovations, this role is for you!


What You’ll Do :

  • Develop scalable APIs and backend services.
  • Design and implement core payment systems.
  • Take end-to-end ownership of product development from zero to one.
  • Work on database design, query optimization, and system performance.
  • Experiment with new technologies and automation tools.
  • Collaborate with product managers, designers, and engineers to drive innovation.

What We’re Looking For :

  • 3+ Years of professional backend development experience.
  • Proficiency in any backend programming language (Ruby on Rails experience is a plus but not mandatory).
  • Experience in building APIs and working with relational databases.
  • Strong communication skills and ability to work in a team.
  • Open-source contributions (minimum 50 stars on GitHub preferred).
  • Experience in building and delivering 0 to 1 products.
  • Passion for FinTech & payment systems.
  • Familiarity with CI/CD, DevOps practices, and infrastructure management.
  • Knowledge of payment protocols and financial regulations (preferred but not mandatory)

Main Technical Skills :

  • Backend : Ruby on Rails, PostgreSQL
  • Infrastructure : GCP, AWS, Terraform (fully automated infrastructure)
  • Security : Zero-trust security protocol managed via Teleport
Read more
Client based at Pune location.

Agency job
Bengaluru (Bangalore), Mumbai, Pune, Hyderabad, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
9 - 12 yrs
₹26L - ₹27L / yr
Bash
Shell Scripting
CircleCI
Python
Docker
+6 more

Senior DevOps Engineer 

Shift timing: Rotational shift (15 days in same Shift)

Relevant Experience: 7 Years relevant experience in DevOps

Education Required: Bachelor’s / Masters / PhD – Any Graduate

Must have skills:

Bash shell scripting, CircleCI pipeline, Python, Docker, Kubernetes, Terraform, GitHub, PostgreSQL Server, DataDog, Jira

Good to have skills:

AWS, serverless architecture, static analysis tools like flake8, black, mypy, isort, Argo CD

Candidate Roles and Responsibilities

Experience: 8+ years in DevOps, with a strong focus on automation, cloud infrastructure, and CI/CD practices.

Terraform: Advanced knowledge of Terraform, with experience in writing, testing, and deploying modules.

AWS: Extensive experience with AWS services (EC2, S3, RDS, Lambda, VPC, etc.) and best practices in cloud architecture.

Docker & Kubernetes: Proven experience in containerization with Docker and orchestration with Kubernetes in production environments.

CI/CD: Strong understanding of CI/CD processes, with hands-on experience in CircleCI or similar tools.

Scripting: Proficient in Python and Linux Shell scripting for automation and process improvement.

Monitoring & Logging: Experience with Datadog or similar tools for monitoring and alerting in large-scale environments.

Version Control: Proficient with Git, including branching, merging, and collaborative workflows.

Configuration Management: Experience with Kustomize or similar tools for managing Kubernetes configurations

Read more
Client based at Bangalore location.

Agency job
Bengaluru (Bangalore)
4 - 7 yrs
₹25L - ₹30L / yr
DevOps
Amazon Web Services (AWS)
Docker
Kubernetes
Bitbucket
+14 more

 

DevOps Engineer

Bangalore / Full-Time

 

Job Description

We build Enterprise-Scale AI/ML-powered products for Manufacturing, Sustainability and Supply Chain. We are looking for a DevOps Engineer to help us deploy product updates, identify production issues and implement integrations that meet customer needs. By joining our team, you will take part in various projects that involve working with clients to successfully implement and integrate products, software, or systems into their existing infrastructure or cloud.

 

What You'll Do

•      Collaborate with stakeholders to gather and analyze customer needs, ensuring that DevOps strategies align with business objectives.

•      Deploy and manage various development, testing, and automation tools, alongside robust IT infrastructure to support our software lifecycle.

•      Configure and maintain the necessary tools and infrastructure to support continuous integration, continuous deployment (CI/CD), and other DevOps processes.

•      Establish and document processes for development, testing, release, updates, and support to streamline DevOps operations.

•      Manage the deployment of software updates and bug fixes, ensuring minimal downtime and seamless integration.

•      Develop and implement tools aimed at minimizing errors and enhancing the overall customer experience.

•      Promote and develop automated solutions wherever possible to increase efficiency and reduce manual intervention.

•      Evaluate, select, and deploy appropriate CI/CD tools that best fit the project requirements and organizational goals.

•      Drive ongoing enhancements by building and maintaining robust CI/CD pipelines, ensuring seamless integration, development, and deployment cycles.

•      Integrate requested features and services as per customer requirements to enhance product functionality.

•      Conduct thorough root cause analyses for production issues, implementing effective solutions to prevent recurrence.

•      Investigate and resolve technical problems promptly to maintain system stability and performance.

•      Offer expert technical support, including GitOps for automated Kubernetes deployments, Jenkins pipeline automation, VPS setup, and more, ensuring smooth and reliable operations.

 

 

Requirements & Skills

•      Bachelor’s degree in computer science, MCA or equivalent practical experience

•      4 to 6 years of hands-on experience as a DevOps Engineer 

 

•      Proven experience with cloud platforms such as AWS or Azure, including services like EC2, S3, RDS and Kubernetes Service (EKS).

•      Strong understanding of networking concepts, including VPNs, firewalls, and load balancers.

•      Proficiency in setting up and managing CI/CD pipelines using tools like Jenkins, Bitbucket Pipeline, or similar

•      Experience with configuration management tools such as Ansible, Chef, or Puppet.

•      Skilled in using IaC tools like Terraform,  AWS CloudFormation, or similar.

•      Strong knowledge of Docker and Kubernetes for container management and orchestration.

•      Expertise in using Git and managing repositories on platforms like GitHub, GitLab, or Bitbucket.

•      Ability to build and maintain automated scripts and tools to enhance DevOps processes.

•      Experience with monitoring tools (e.g., Prometheus, Grafana, ELK Stack) to ensure system reliability and performance.

•      Experience with GitOps practices for managing Kubernetes deployments using Flux2 or similar.

•      Proficiency in scripting languages such as Python, Yaml, Bash, or PowerShell.

•      Strong analytical skills to perform root cause analysis and troubleshoot complex technical issues.

•      Excellent teamwork and communication skills to work effectively with cross-functional teams and stakeholders.

•      Ability to thrive in a fast-paced environment and adapt to changing priorities and technologies.

•      Eagerness to stay updated with the latest DevOps trends, tools, and best practices.

 

Nice to have:

•      AWS Certified DevOps Engineer

•      Azure DevOps Engineer Expert

•      Certified Kubernetes Administrator (CKA)

•      Understanding of security compliance standards (e.g., GDPR, HIPAA) and best practices in DevOps.

•      Experience with cost management and optimization strategies in cloud environments.

•      Knowledge of incident management and response tools and processes.

  


 

Read more
Bengaluru (Bangalore)
5 - 12 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+10 more

Company Overview 

Adia Health revolutionizes clinical decision support by enhancing diagnostic accuracy and personalizing care. It modernizes the diagnostic process by automating optimal lab test selection and interpretation, utilizing a combination of expert medical insights, real-world data, and artificial intelligence. This approach not only streamlines the diagnostic journey but also ensures precise, individualized patient care by integrating comprehensive medical histories and collective platform knowledge. 

  

Position Overview 

We are seeking a talented and experienced Site Reliability Engineer/DevOps Engineer to join our dynamic team. The ideal candidate will be responsible for ensuring the reliability, scalability, and performance of our infrastructure and applications. You will collaborate closely with development, operations, and product teams to automate processes, implement best practices, and improve system reliability. 

  

Key Responsibilities 

  • Design, implement, and maintain highly available and scalable infrastructure solutions using modern DevOps practices. 
  • Automate deployment, monitoring, and maintenance processes to streamline operations and increase efficiency. 
  • Monitor system performance and troubleshoot issues, ensuring timely resolution to minimize downtime and impact on users. 
  • Implement and manage CI/CD pipelines to automate software delivery and ensure code quality. 
  • Manage and configure cloud-based infrastructure services to optimize performance and cost. 
  • Collaborate with development teams to design and implement scalable, reliable, and secure applications. 
  • Implement and maintain monitoring, logging, and alerting solutions to proactively identify and address potential issues. 
  • Conduct periodic security assessments and implement appropriate measures to ensure the integrity and security of systems and data. 
  • Continuously evaluate and implement new tools and technologies to improve efficiency, reliability, and scalability. 
  • Participate in on-call rotation and respond to incidents promptly to ensure system uptime and availability. 

  

Qualifications 

  • Bachelor's degree in Computer Science, Engineering, or related field 
  • Proven experience (5+ years) as a Site Reliability Engineer, DevOps Engineer, or similar role 
  • Strong understanding of cloud computing principles and experience with AWS 
  • Experience of building and supporting complex CI/CD pipelines using GitHub 
  • Experience of building and supporting infrastructure as code using Terraform 
  • Proficiency in scripting and automation tools 
  • Solid understanding of networking concepts and protocols 
  • Understanding of security best practices and experience implementing security controls in cloud environments 
  • Knowledge of modern security requirements such as SOC 2, HIPAA, and HITRUST is a solid advantage. 
Read more
VyTCDC
Gobinath Sundaram
Posted by Gobinath Sundaram
Chennai, Bengaluru (Bangalore), Hyderabad, Kolkata, Pune, Mumbai, Delhi, Noida, Kochi (Cochin), Bhubaneswar
8 - 12 yrs
₹8L - ₹26L / yr
Docker
Kubernetes
DevOps
CI/CD
Jenkins
+5 more

6+ years of experience with deployment and management of Kubernetes clusters in production environments as a DevOps engineer.

• Expertise in Kubernetes fundamentals like nodes, pods, services, deployments etc., and their interactions with the underlying infrastructure.

• Hands-on experience with containerization technologies such as Docker or RKT to package applications for use in a distributed system managed by Kubernetes.

• Knowledge of software development cycle including coding best practices such as CI/CD pipelines and version control systems for managing code changes within a team environment.

• Must have a deep understanding of the different aspects of cloud computing and the operations processes needed when setting up workloads on top of these platforms

• Experience with Agile software development and knowledge of best practices for agile Scrum team.

• Proficient with GIT version control

• Experience working with Linux and cloud compute platforms.

• Excellent problem-solving skills and ability to troubleshoot complex issues in distributed systems.

• Excellent communication and interpersonal skills, effective problem-solving skills and logical thinking ability, and a strong commitment to professional and client service excellence.

Read more
Rigel Networks Pvt Ltd
Minakshi Soni
Posted by Minakshi Soni
Bengaluru (Bangalore), Pune, Mumbai, Chennai
8 - 12 yrs
₹8L - ₹10L / yr
Amazon Web Services (AWS)
Terraform
Amazon Redshift
Redshift
Snowflake
+16 more

Dear Candidate,


We are urgently Hiring AWS Cloud Engineer for Bangalore Location.

Position: AWS Cloud Engineer

Location: Bangalore

Experience: 8-11 yrs

Skills: Aws Cloud

Salary: Best in Industry (20-25% Hike on the current ctc)

Note:

Only immediate to 15-day joiners will be preferred.

Only candidates from Tier 1 companies will be shortlisted and selected.

Candidates with a notice period of more than 30 days will be rejected during screening.

Offer shoppers will be rejected.


Job description:

Title: AWS Cloud Engineer

Prefer BLR / HYD – else any location is fine

Work Mode: Hybrid – based on HR rule (currently 1 day per month)


Shift Timings 24 x 7 (Work in shifts on rotational basis)

Total Experience in Years- 8+ yrs, 5 yrs of relevant exp is required.

Must have- AWS platform, Terraform, Redshift / Snowflake, Python / Shell Scripting



Experience and Skills Requirements:


Experience:

8 years of experience in a technical role working with AWS


Mandatory

Technical troubleshooting and problem solving

AWS management of large-scale IaaS PaaS solutions

Cloud networking and security fundamentals

Experience using containerization in AWS

Working data warehouse knowledge (Redshift and Snowflake preferred)

Working with IaC – Terraform and CloudFormation

Working understanding of scripting languages including Python and Shell

Collaboration and communication skills

Highly adaptable to changes in a technical environment

 

Optional

Experience using monitoring and observability toolsets, incl. Splunk and Datadog

Experience using GitHub Actions

Experience using AWS RDS/SQL-based solutions

Experience working with streaming technologies, incl. Kafka and Apache Flink

Experience working with ETL environments

Experience working with the Confluent Cloud platform


Certifications:


Minimum

AWS Certified SysOps Administrator – Associate

AWS Certified DevOps Engineer - Professional



Preferred


AWS Certified Solutions Architect – Associate


Responsibilities:


Responsible for technical delivery of managed services across NTT Data customer account base. Working as part of a team providing a Shared Managed Service.


The following is a list of expected responsibilities:


To manage and support a customer’s AWS platform

To be technical hands on

Provide Incident and Problem management on the AWS IaaS and PaaS Platform

Involvement in the resolution of high-priority incidents and problems in an efficient and timely manner

Actively monitor an AWS platform for technical issues

To be involved in the resolution of technical incident tickets

Assist in the root cause analysis of incidents

Assist with improving efficiency and processes within the team

Examining traces and logs

Working with third party suppliers and AWS to jointly resolve incidents


Good to have:


Confluent Cloud

Snowflake




Best Regards,

Minakshi Soni

Executive - Talent Acquisition (L2)

Rigel Networks

Worldwide Locations: USA | HK | IN 

Read more
Bengaluru (Bangalore)
5 - 7 yrs
₹11L - ₹20L / yr
Python
Amazon Web Services (AWS)
DynamoDB
OpenSearch
Terraform
+2 more

🚀 We're Hiring: Python AWS Fullstack Developer at InfoGrowth! 🚀

Join InfoGrowth as a Python AWS Fullstack Developer and be a part of our dynamic team driving innovative cloud-based solutions!

Job Role: Python AWS Fullstack Developer

Location: Bangalore & Pune

Mandatory Skills:

  • Proficiency in Python programming.
  • Hands-on experience with AWS services and migration.
  • Experience in developing cloud-based applications and pipelines.
  • Familiarity with DynamoDB, OpenSearch, and Terraform (preferred).
  • Solid understanding of front-end technologies: ReactJS, JavaScript, TypeScript, HTML, and CSS.
  • Experience with Agile methodologies, Git, CI/CD, and Docker.
  • Knowledge of Linux (preferred).

Preferred Skills:

  • Understanding of ADAS (Advanced Driver Assistance Systems) and automotive technologies.
  • AWS Certification is a plus.

Why Join InfoGrowth?

  • Work on cutting-edge technology in a fast-paced environment.
  • Collaborate with talented professionals passionate about driving change in the automotive and tech industries.
  • Opportunities for professional growth and development through exciting projects.

🔗 Apply Now to elevate your career with InfoGrowth and make a difference in the automotive sector!



Read more
Scoutflo
Mumbai, Bengaluru (Bangalore)
2 - 6 yrs
₹8L - ₹15L / yr
AngularJS (1.x)
Angular (2+)
React.js
NodeJS (Node.js)
MongoDB
+7 more

Scoutflo is a platform that automates complex infrastructure requirements for Kubernetes Infrastructure.


Job Description:


  1. In-depth knowledge of full-stack development principles and best practices.
  2. Expertise in building web applications with strong proficiency in languages like Node.js, React, and Go.
  3. Experience developing and consuming RESTful & gRPC API protocols.
  4. Familiarity with CI/CD workflows and DevOps processes.
  5. Solid understanding of cloud platforms and container orchestration technologies.
  6. Experience with Kubernetes pipelines and workflows using tools like Argo CD.
  7. Experience with designing and building user-friendly interfaces.
  8. Excellent understanding of distributed systems, databases, and APIs.
  9. A passion for writing clean, maintainable, and well-documented code.
  10. Strong problem-solving skills and the ability to work independently as well as collaboratively.
  11. Excellent communication and interpersonal skills.
  12. Experience with building self-serve platforms or user onboarding experiences.
  13. Familiarity with Infrastructure as Code (IaC) tools like Terraform.
  14. A strong understanding of security best practices for Kubernetes deployments.
  15. Grasp on setting up network architecture for distributed systems.

Must have:

1) Experience with managing Infrastructure on AWS/GCP or Azure

2) Managed Infrastructure on Kubernetes

Read more
Cargill Business Services
Paramjit Kaur
Posted by Paramjit Kaur
Bengaluru (Bangalore)
2 - 6 yrs
Best in industry
Apache Kafka
Kerberos
Zookeeper
Terraform
Linux administration

As a Kafka Administrator at Cargill, you will work across the full set of data platform technologies spanning on-prem and SaaS solutions, empowering highly performant, modern, data-centric solutions. Your work will play a critical role in enabling analytical insights and process efficiencies for Cargill’s diverse and complex business environments. You will work in a small team that shares your passion for building, configuring, and supporting platforms while sharing, learning and growing together.  


  • Develop and recommend improvements to standard and moderately complex application support processes and procedures. 
  • Review, analyze and prioritize incoming incident tickets and user requests. 
  • Perform programming, configuration, testing and deployment of fixes or updates for application version releases. 
  • Implement security processes to protect data integrity and ensure regulatory compliance. 
  • Keep an open channel of communication with users and respond to standard and moderately complex application support requests and needs. 


MINIMUM QUALIFICATIONS

  • Minimum of 2-4 years of experience
  • Knowledge of Kafka cluster management, alerting/monitoring, and performance tuning (see the sketch after this list)
  • Full-ecosystem Kafka administration (Kafka, ZooKeeper, kafka-rest, Connect)
  • Experience implementing Kerberos security
  • Preferred:
  • Experience in Linux system administration
  • Authentication plugin experience such as basic, SSL, and Kerberos
  • Production incident support including root cause analysis
  • AWS EC2
  • Terraform
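
For context on the Kafka administration skills above, here is a minimal, hypothetical Python sketch using the confluent-kafka AdminClient. The bootstrap server and topic name are placeholders, and a Kerberos/SSL-secured cluster would need additional security settings in the configuration; this is an illustrative sketch only.

```python
# Hypothetical sketch: basic Kafka administration from Python with confluent-kafka.
from confluent_kafka.admin import AdminClient, NewTopic

# Broker address is a placeholder; a secured cluster would also need SASL/SSL settings.
admin = AdminClient({"bootstrap.servers": "broker-1:9092"})

# List existing topics (doubles as a quick connectivity check against the cluster).
metadata = admin.list_topics(timeout=10)
print("topics:", sorted(metadata.topics))

# Create a topic and wait for the result of the request.
futures = admin.create_topics([NewTopic("orders", num_partitions=6, replication_factor=3)])
for topic, future in futures.items():
    try:
        future.result()  # raises if the broker rejected the request
        print(f"created {topic}")
    except Exception as err:
        print(f"failed to create {topic}: {err}")
```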
Read more
Cargill Business Services
Paramjit Kaur
Posted by Paramjit Kaur
Bengaluru (Bangalore)
6 - 13 yrs
Best in industry
Azure
Terraform
Go Programming (Golang)

The Cloud Engineer will design and develop the capabilities of the company cloud platform and the automation of application deployment pipelines to the cloud platform. In this role, you will be an essential partner and technical specialist for cloud platform development and Operations. 


Key Accountabilities 


Participate in a dynamic development environment to engineer evolving customer solutions on Azure. Support SAP Application teams for their requirements related to Application and release management. Develop automation capabilities in the cloud platform to enable provisioning and upgrades of cloud services. 

Design continuous integration delivery pipelines with infrastructure as code, automation, and testing capabilities to facilitate automated deployment of applications. 

Develop testable code to automate Cloud platform capabilities and Cloud platform observability tools. Engineering support to implementation/ POC of new tools and techniques. 

Independently handle support of critical SAP Applications Infrastructure deployed on Azure. Other duties as assigned. 

Qualifications 


Minimum Qualifications 

Bachelor’s degree in a related field or equivalent experience 

Minimum of 5 years of related work experience 


PREFERRED QUALIFICATIONS 

Supporting complex application development activities in DevOps environment. 

Building and supporting fully automated cloud platform solutions as Infrastructure as Code. Working with cloud services platform primarily on Azure and automating the Cloud infrastructure life cycle with tools such as Terraform, GitHub Actions. 

With scripting and programming languages such as Python, Go, PowerShell. 

Good knowledge of applying Azure Cloud adoption framework and Implementing Microsoft well architected framework. 

Experience with Observability Tools, Cloud Infrastructure security services on Azure, Azure networking topologies and Azure Virtual WAN. 

Experience automating Windows and Linux operating system deployments and management in automatically scaling deployments. 


Good to have:


1) Managing cloud infra using IaC methods – Terraform and Go (Golang)

2) Knowledge about complex enterprise networking – LAN, WAN, VNET, VLAN

3) Good understanding of application architecture – databases, tiered architecture

Read more
codersbrain

Tanuj Uppal
Posted by Tanuj Uppal
Bengaluru (Bangalore), Mumbai, Pune, Chennai, Noida, Gurugram, Ahmedabad
8 - 15 yrs
₹10L - ₹15L / yr
Jenkins
Prometheus
Terraform
Kubernetes
Splunk
+1 more

Devops Engineer(Permanent)


Experience: 8 to 12 yrs

Location: Remote for 2-3 months (Any Mastek Location- Chennai/Mumbai/Pune/Noida/Gurgaon/Ahmedabad/Bangalore)

Max Salary = 28 LPA (including 10% variable)

Notice Period: Immediate/ max 10days

Mandatory Skills: Either Splunk/Datadog, Gitlab, Retail Domain




· Bachelor’s degree in Computer Science/Information Technology, or in a related technical field or equivalent technology experience.

· 10+ years’ experience in software development

· 8+ years of experience in DevOps

· Mandatory Skills: Either Splunk/Datadog, GitLab, EKS, Retail domain experience

· Experience with the following Cloud Native tools: Git, Jenkins, Grafana, Prometheus, Ansible, Artifactory, Vault, Splunk, Consul, Terraform, Kubernetes

· Working knowledge of Containers, i.e., Docker Kubernetes, ideally with experience transitioning an organization through its adoption

· Demonstrable experience with configuration, orchestration, and automation tools such as Jenkins, Puppet, Ansible, Maven, and Ant to provide full stack integration

· Strong working knowledge of enterprise platforms, tools and principles including Web Services, Load Balancers, Shell Scripting, Authentication, IT Security, and Performance Tuning

· Demonstrated understanding of system resiliency, redundancy, failovers, and disaster recovery

· Experience working with a variety of vendor APIs including cloud, physical and logical infrastructure devices

· Strong working knowledge of Cloud offerings & Cloud DevOps Services (EC2, ECS, IAM, Lambda, Cloud services, AWS CodeBuild, CodeDeploy, Code Pipeline etc or Azure DevOps, API management, PaaS)

· Experience managing and deploying Infrastructure as Code, using tools like Terraform, Helm charts, etc.

· Manage and maintain standards for Devops tools used by the team

Read more
Bengaluru (Bangalore)
3 - 10 yrs
Best in industry
Python
Amazon Web Services (AWS)
Terraform

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. 

As a Software Engineer III at JPMorgan Chase within the Asset & Wealth Management, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Job responsibilities

 

  • Executes software solutions, design, development, and technical troubleshooting with ability to think beyond routine or conventional approaches to build solutions or break down technical problems
  • Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
  • Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
  • Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
  • Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
  • Contributes to software engineering communities of practice and events that explore new and emerging technologies
  • Adds to team culture of diversity, equity, inclusion, and respect

 

 

Required qualifications, capabilities, and skills

 

  • Formal training or certification on software engineering concepts and 3+ years applied experience
  • Expert-level programming in Python. Experience designing and building APIs using popular frameworks such as Flask and FastAPI
  • Familiar with site reliability concepts, principles, and practices
  • Experience maintaining cloud-based infrastructure
  • Familiar with observability such as white and black box monitoring, service level objective alerting, and telemetry collection using tools such as Grafana, Dynatrace, Prometheus, Datadog, Splunk, and others
  • Emerging knowledge of software, applications and technical processes within a given technical discipline (e.g., Cloud, artificial intelligence, Android, etc.)
  • Emerging knowledge of continuous integration and continuous delivery tools (e.g., Jenkins, Jules, Spinnaker, BitBucket, GitLab, Terraform, etc.)
  • Emerging knowledge of common networking technologies

 

Preferred qualifications, capabilities, and skills

 

  • General knowledge of financial services industry
  • Experience working on public cloud environment using wrappers and practices that are in use at JPMC
  • Knowledge on Terraform, containers and container orchestration, especially Kubernetes preferred
Read more
CodeCraft Technologies Private Limited
Chandana B
Posted by Chandana B
Bengaluru (Bangalore)
5 - 8 yrs
Best in industry
NodeJS (Node.js)
TypeScript
MongoDB
NOSQL Databases
MySQL
+9 more

Position: Senior Backend Developer (NodeJS)

Experience: 5+ Years

Location: Bengaluru


CodeCraft Technologies is a multi-award-winning creative engineering company offering design and technology solutions on mobile, web and cloud platforms.

We are looking for an enthusiastic and self-driven Backend Engineer to join our team.


Roles and Responsibilities:


● Develop high-quality software design and architecture

● Identify, prioritize and execute tasks in the software development life cycle.

● Develop tools and applications by producing clean, efficient code

● Automate tasks through appropriate tools and scripting

● Review and debug code

● Perform validation and verification testing

● Collaborate with cross-functional teams to fix and improve products

● Document development phases and monitor systems

● Ensure software is up-to-date with the latest technologies


Desired Profile:


● NodeJS [Typescript]

● MongoDB [NoSQL DB]

● MySQL, PostgreSQL

● AWS - S3, Lambda, API Gateway, Cloud Watch, ECR, ECS, Fargate, SQS / SNS

● Terraform, Kubernetes, Docker

● Good Understanding of Serverless Architecture

● Proven experience as a Senior Software Engineer

● Extensive experience in software development, scripting and project management

● Experience using system monitoring tools (e.g. New Relic) and automated testing frameworks

● Familiarity with various operating systems (Linux, Mac OS, Windows)

● Analytical mind with problem-solving aptitude

● Ability to work independently


Good to Have:


● Actively contribute to relevant open-source projects, demonstrating a commitment to community collaboration and continuous learning.

● Share knowledge and insights gained from open-source contributions with the development team

● AWS Solutions Architect Professional Certification

● AWS DevOps Professional Certification

● Multi-Cloud/ hybrid cloud experience

● Experience in building CI/CD pipelines using AWS services

Read more
Toast

Sandeep Dhara
Posted by Sandeep Dhara
Remote, Bengaluru (Bangalore)
7 - 10 yrs
Best in industry
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+4 more

Now, more than ever, the Toast team is committed to our customers. We’re taking steps to help restaurants navigate these unprecedented times with technology, resources, and community. Our focus is on building a restaurant platform that helps restaurants adapt, take control, and get back to what they do best: building the businesses they love. And because our technology is purpose-built for restaurants by restaurant people, restaurants can trust that we’ll deliver on their needs for today while investing in experiences that will power their restaurant of the future.


At Toast, our Site Reliability Engineers (SREs) are responsible for keeping all customer-facing services and other Toast production systems running smoothly. SREs are a blend of pragmatic operators and software craftspeople who apply sound software engineering principles, operational discipline, and mature automation to our environments and our codebase. Our decisions are based on instrumentation and continuous observability, as well as predictions and capacity planning.


About this roll* (Responsibilities) 

  • Gather and analyze metrics from both operating systems and applications to assist in performance tuning and fault finding
  • Partner with development teams to improve services through rigorous testing and release procedures
  • Participate in system design consulting, platform management, and capacity planning
  • Create sustainable systems and services through automation and uplift
  • Balance feature development speed and reliability with well-defined service level objectives


Troubleshooting and Supporting Escalations:

  • Gather and analyze metrics from both operating systems and applications to assist in performance tuning and fault finding
  • Diagnose performance bottlenecks and implement optimizations across infrastructure, databases, web, and mobile applications
  • Implement strategies to increase system reliability and performance through on-call rotation and process optimization
  • Perform and run blameless RCAs on incidents and outages aggressively, looking for answers that will prevent the incident from ever happening again


Do you have the right ingredients? (Requirements)


  • Extensive industry experience with at least 7+ years in SRE and/or DevOps roles
  • Polyglot technologist/generalist with a thirst for learning
  • Deep understanding of cloud and microservice architecture and the JVM
  • Experience with tools such as APM, Terraform, Ansible, GitHub, Jenkins, and Docker
  • Experience developing software or software projects in at least four languages, ideally including two of Go, Python, and Java
  • Experience with cloud computing technologies ( AWS cloud provider preferred)



Bread puns are encouraged but not required

Read more
porter
Agency job
via UPhill HR by Ingit Pandey
Bengaluru (Bangalore)
4 - 6 yrs
₹24L - ₹34L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
CI/CD
+4 more

Job Title: DevOps SDE llI

Job Summary

Porter seeks an experienced cloud and DevOps engineer to join our infrastructure platform team. This team is responsible for the organization's cloud platform, CI/CD, and observability infrastructure. As part of this team, you will be responsible for providing a scalable, developer-friendly cloud environment by participating in the design, creation, and implementation of automated processes and architectures to achieve our vision of an ideal cloud platform.

Responsibilities and Duties

In this role, you will

  • Own and operate our application stack and AWS infrastructure to orchestrate and manage our applications.
  • Support our application teams using AWS by provisioning new infrastructure and contributing to the maintenance and enhancement of existing infrastructure.
  • Build out and improve our observability infrastructure.
  • Set up automated auditing processes and improve our applications' security posture (see the sketch after this list).
  • Participate in troubleshooting infrastructure issues and preparing root cause analysis reports.
  • Develop and maintain our internal tooling and automation to manage the lifecycle of our applications, from provisioning to deployment, zero-downtime and canary updates, service discovery, container orchestration, and general operational health.
  • Continuously improve our build pipelines, automated deployments, and automated testing.
  • Propose, participate in, and document proof of concept projects to improve our infrastructure, security, and observability.
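
As a flavor of the automated security auditing mentioned above, here is a minimal, hypothetical boto3 sketch that flags S3 buckets without a full public-access block. It assumes AWS credentials are available in the environment and boto3 is installed; it is an illustration of the idea, not the team's actual tooling.

```python
# Hypothetical sketch: flag S3 buckets whose public-access block is missing or incomplete.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def buckets_missing_public_access_block():
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            # All four settings (BlockPublicAcls, IgnorePublicAcls, BlockPublicPolicy,
            # RestrictPublicBuckets) should be enabled.
            if not all(cfg.values()):
                flagged.append(name)
        except ClientError as err:
            # No configuration at all also counts as a finding.
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                flagged.append(name)
            else:
                raise
    return flagged

if __name__ == "__main__":
    print("buckets to review:", buckets_missing_public_access_block())
```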

Qualifications and Skills

Hard requirements for this role:

  • 5+ years of experience as a DevOps / Infrastructure engineer on AWS.
  • Experience with git, CI / CD, and Docker. (We use GitHub, GitHub actions, Jenkins, ECS and Kubernetes).
  • Experience in working with infrastructure as code (Terraform/CloudFormation).
  • Linux and networking administration experience.
  • Strong Linux Shell scripting experience.
  • Experience with one programming language and cloud provider SDKs. (Python + boto3 is preferred)
  • Experience with configuration management tools like Ansible and Packer.
  • Experience with container orchestration tools. (Kubernetes/ECS).
  • Database administration experience and the ability to write intermediate-level SQL queries. (We use Postgres)
  • AWS SysOps Administrator + Developer certification or equivalent knowledge.
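
As a rough illustration of the boto3 scripting mentioned above (a sketch only, not Porter's actual tooling), the following lists running EC2 instances that are missing a "team" tag; the region and tag key are assumptions.

    import boto3

    def untagged_running_instances(region="ap-south-1", required_tag="team"):
        """Return IDs of running instances that lack the required ownership tag."""
        ec2 = boto3.client("ec2", region_name=region)
        paginator = ec2.get_paginator("describe_instances")
        flagged = []
        for page in paginator.paginate(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
        ):
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                    if required_tag not in tags:
                        flagged.append(instance["InstanceId"])
        return flagged

    if __name__ == "__main__":
        print("Instances missing a 'team' tag:", untagged_running_instances())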

Good to have:

  • Experience working with the ELK stack.
  • Experience supporting JVM applications.
  • Experience working with APM tools. (We use Datadog)
  • Experience working in an XaaC environment. (Packer, Ansible/Chef, Terraform/CloudFormation, Helm/Kustomize, Open Policy Agent/Sentinel)
  • Experience working with security tools. (AWS Security Hub/Inspector/GuardDuty)
  • Experience with Jira / Jira help desk.

 

Read more
Limechat

at Limechat

1 recruiter
Sundhar Murali
Posted by Sundhar Murali
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹25L / yr
DevOps
skill iconKubernetes
skill iconDocker
Windows Azure
Microsoft Windows Azure
+7 more

Responsibilities:


- Design, implement, and maintain cloud infrastructure solutions on Microsoft Azure, with a focus on scalability, security, and cost optimization.

- Collaborate with development teams to streamline the deployment process, ensuring smooth and efficient delivery of software applications.

- Develop and maintain CI/CD pipelines using tools like Azure DevOps, Jenkins, or GitLab CI to automate build, test, and deployment processes.

- Utilize infrastructure-as-code (IaC) principles to create and manage infrastructure deployments using Terraform, ARM templates, or similar tools.

- Manage and monitor containerized applications using Azure Kubernetes Service (AKS) or other container orchestration platforms.

- Implement and maintain monitoring, logging, and alerting solutions for cloud-based infrastructure and applications.

- Troubleshoot and resolve infrastructure and deployment issues, working closely with development and operations teams.

- Ensure high availability, performance, and security of cloud infrastructure and applications.

- Stay up-to-date with the latest industry trends and best practices in cloud infrastructure, DevOps, and automation.


Requirements:


- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent work experience).

- Minimum of four years of proven experience working as a DevOps Engineer or similar role, with a focus on cloud infrastructure and deployment automation.

- Strong expertise in Microsoft Azure services, including but not limited to Azure Virtual Machines, Azure App Service, Azure Storage, Azure Networking, Azure Security, and Azure Monitor.

- Proficiency in infrastructure-as-code (IaC) tools such as Terraform or ARM templates.

- Hands-on experience with containerization and orchestration platforms, preferably Azure Kubernetes Service (AKS) or Docker Swarm.

- Solid understanding of CI/CD principles and experience with relevant tools such as Azure DevOps, Jenkins, or GitLab CI.

- Experience with scripting languages like PowerShell, Bash, or Python for automation tasks (a minimal sketch follows this list).

- Strong problem-solving and troubleshooting skills with a proactive and analytical mindset.

- Excellent communication and collaboration skills, with the ability to work effectively in a team environment.

- Azure certifications (e.g., Azure Administrator, Azure DevOps Engineer, Azure Solutions Architect) are a plus.
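
To illustrate the kind of scripting referred to above (a minimal sketch only, not this team's actual automation), the snippet below uses the Azure SDK for Python to list the VMs in a subscription; the AZURE_SUBSCRIPTION_ID environment variable is an assumption.

    import os
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    def list_vms():
        # DefaultAzureCredential picks up CLI, managed identity or env credentials.
        credential = DefaultAzureCredential()
        subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
        compute = ComputeManagementClient(credential, subscription_id)
        for vm in compute.virtual_machines.list_all():
            print(vm.name, vm.location)

    if __name__ == "__main__":
        list_vms()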

Read more
Upswing Financial Technologies Private Limited

at Upswing Financial Technologies Private Limited

2 candid answers
4 recruiters
Simran Bindra
Posted by Simran Bindra
Bengaluru (Bangalore)
3 - 6 yrs
Best in industry
Linux/Unix
Linux administration
Information security
Network Security
skill iconDocker
+4 more

At Upswing, we are committed to building a robust, scalable & secure API platform to power the world of Open Finance.

We are a passionate and self-driven team of thinkers who aspire to build the rails to connect the legacy financial sector with financial innovators through a simple and powerful banking-as-a-service (BaaS) platform.

We are looking for motivated engineers who will be working in a highly creative and cutting-edge technology environment to build a world-class financial services suite.

 

About the role

As part of the DevSecOps team at Upswing, you will get to work on building state-of-the-art infrastructure for the future. You will also be –

  • Managing security aspects of the Cloud Infrastructure 
  • Designing and Implementing Security measures, Incident Response guidelines 
  • Conducting Security Awareness Training
  • Developing SIEM tooling and pipelines end to end for vulnerability/security/incident reporting 
  • Developing automation and performing routine VAPT for Network and Applications
  • Integrating with 3rd party vendors for the services required to improve security posture 
  • Mentoring people across the teams to enable best practices 

What will you do if you join us?

  • Engage in a lot of cross-team collaboration to independently drive forward DevSecOps practices across the org 
  • Take Ownership of existing, ongoing, and future DevSecOps initiatives 
  • Plan and Engage in Architecture discussions to bring in different angles (especially security angles) to the table
  • Build Automation stack and tools for security pipeline 
  • Integrate different security measures and pipelines with the SIEM tool
  • Conducting routine VAPT using manual and automated workflows, generating and maintaining the report for the same
  • Introduce and Implement best practices across teams for a great security posture in the org

 

You should have

  • Curiosity for on-the-job learning and experimenting with new technologies and ideas
  • A strong background in Linux environment
  • Proven experience in architecting networks with a security-first implementation
  • Experience with VAPT tooling for networks and applications is required
  • Strong experience in cloud technologies, multi-cloud environments, and cloud best practices
  • Experience with at least one scripting language (Ruby/Python/Groovy); a minimal check is sketched after this list
  • Experience in Terraform is highly desirable but not mandatory
  • Some experience with Kubernetes and Docker is required
  • Understanding Java web applications and monitoring them for security vulnerabilities would be a plus 
  • Any other DevSecOps-related experience will be considered
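
By way of illustration only (real VAPT relies on dedicated tooling), the minimal Python sketch below checks whether a few sensitive ports are unexpectedly reachable on a host; the target host and port list are placeholders.

    import socket

    def open_ports(host, ports, timeout=1.0):
        """Return the subset of ports that accept a TCP connection."""
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(timeout)
                if sock.connect_ex((host, port)) == 0:
                    found.append(port)
        return found

    if __name__ == "__main__":
        target = "10.0.0.10"  # placeholder internal host
        print(open_ports(target, [22, 3306, 5432, 6379]))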


Read more
Upswing Financial Technologies Private Limited

at Upswing Financial Technologies Private Limited

2 candid answers
4 recruiters
Simran Bindra
Posted by Simran Bindra
Bengaluru (Bangalore)
3 - 6 yrs
Best in industry
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Terraform
+5 more

As part of the Cloud Platform / Devops team at Upswing, you will get to work on building state-of-the-art infrastructure for the future. You will also be –

  • Building Infrastructure on AWS driven through terraform and building automation tools for deployment, infrastructure management, and observability stack 
  • Building and Scaling on Kubernetes
  • Ensuring the Security of Upswing Cloud Infra
  • Building Security Checks and automation to improve overall security posture 
  • Building automation stack for components like JVM-based applications, Apache Pulsar, MongoDB, PostgreSQL, Reporting Infra, etc.
  • Mentoring people across the teams to enable best practices 
  • Mentoring and guiding team members to upskill and helping them develop world-class fintech infrastructure

 

What will you do if you join us?

  • Write a lot of code
  • Engage in a lot of cross-team collaboration to independently drive forward infrastructure initiatives and DevOps practices across the org
  • Take ownership of existing, ongoing, and future initiatives
  • Plan architecture for upcoming infrastructure
  • Build for scale, resiliency & security
  • Introduce best practices for DevOps & cloud in the team
  • Mentor new/junior team members and eventually build your own team

 

You should have

  • Curiosity for on-the-job learning and experimenting with new technologies and ideas
  • A strong background in Linux environment
  • Must have Programming skills and Experience
  • Strong experience in Cloud technologies, Security and Networking concepts, Multi-cloud environments, etc.
  • Experience with at least one scripting language (GoLang/Python/Ruby/Groovy)
  • Experience in Terraform is highly desirable but not mandatory
  • Experience with Kubernetes and Docker is required (a small sketch follows this list)
  • Understanding of the Java technologies and stack
  • Any other DevOps-related experience will be considered
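
As a small illustration of the Kubernetes experience expected here (a sketch under assumed defaults, not Upswing's actual setup), the snippet below uses the official Python client and a local kubeconfig to report Deployments whose ready replicas lag behind the desired count.

    from kubernetes import client, config

    def lagging_deployments(namespace="default"):
        config.load_kube_config()  # use config.load_incluster_config() inside a pod
        apps = client.AppsV1Api()
        lagging = []
        for dep in apps.list_namespaced_deployment(namespace).items:
            desired = dep.spec.replicas or 0
            ready = dep.status.ready_replicas or 0
            if ready < desired:
                lagging.append((dep.metadata.name, ready, desired))
        return lagging

    if __name__ == "__main__":
        for name, ready, desired in lagging_deployments():
            print(f"{name}: {ready}/{desired} replicas ready")
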
Read more
HappyFox

at HappyFox

1 video
6 products
Lindsey A
Posted by Lindsey A
Chennai, Bengaluru (Bangalore)
5 - 10 yrs
₹10L - ₹15L / yr
DevOps
skill iconKubernetes
skill iconDocker
skill iconAmazon Web Services (AWS)
Windows Azure
+12 more

About us:

HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.

 

We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.

 

To know more, Visit! - https://www.happyfox.com/

 

Responsibilities:

  • Build and scale production infrastructure in AWS for the HappyFox platform and its products.
  • Research, build, and implement systems, services and tooling to improve uptime, reliability and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
  • Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
  • Proficient in writing automation scripts or building infrastructure tools using Python/Ruby/Bash/Golang
  • Implement consistent observability, deployment and IaC setups
  • Patch production systems to fix security/performance issues
  • Actively respond to escalations/incidents in the production environment from customers or the support team
  • Mentor other Infrastructure engineers, review their work and continuously ship improvements to production infrastructure.
  • Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
  • Participate in infrastructure security audits

 

Requirements:

  • At least 5 years of experience in handling/building Production environments in AWS.
  • At least 2 years of programming experience in building API/backend services for customer-facing applications in production.
  • Demonstrable knowledge of TCP/IP, HTTP and DNS fundamentals.
  • Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
  • Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
  • Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
  • Proficient in writing automation scripts using any scripting language such as Python, Ruby, Bash, etc.
  • Experience in setting up and managing test/staging environments, and CI/CD pipelines.
  • Experience in IaC tools such as Terraform or AWS CDK (a CDK sketch follows this list)
  • Passion for making systems reliable, maintainable, scalable and secure.
  • Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
  • Bonus points – if you have experience with Nginx, Postgres, Redis, and Mongo systems in production.
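
As an illustration of the IaC requirement above (a minimal CDK v2 sketch; the bucket and its properties are assumptions, not HappyFox infrastructure), the following defines a versioned, encrypted S3 bucket in Python.

    from aws_cdk import App, RemovalPolicy, Stack
    from aws_cdk import aws_s3 as s3
    from constructs import Construct

    class LogsBucketStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            # Versioned, server-side encrypted bucket that survives stack deletion.
            s3.Bucket(
                self,
                "AppLogsBucket",
                versioned=True,
                encryption=s3.BucketEncryption.S3_MANAGED,
                removal_policy=RemovalPolicy.RETAIN,
            )

    app = App()
    LogsBucketStack(app, "logs-bucket-stack")
    app.synth()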

 

Read more
Accion Labs

at Accion Labs

14 recruiters
Pooja Singh
Posted by Pooja Singh
Bengaluru (Bangalore)
5 - 20 yrs
₹3L - ₹27L / yr
skill iconJava
skill iconJavascript
skill iconReact.js
skill iconAngular (2+)
skill iconAngularJS (1.x)
+13 more

Company - Accionlabs Technologies [www.accionlabs.com]

Location - Bengaluru

Work Type - Permanent

Salary - Open


This is a work-from-office job.


Key Aspects of Role

  • Leverage deep knowledge of the full technology stack to help achieve business objectives and customer outcomes
  • Collaborate with Product Management to validate the technical feasibility of and establish non-functional requirements
  • Collaborate with Architecture to evolve the architecture to solve technical challenges, support future requirements, scale effectively, continually meet/exceed SLAs and resolve tech debt
  • Technical advisor to internal or external stakeholders on complex technical components
  • Technical leader working with the team to help remove blockers and act as a tie breaker
  • Adjust the team processes, listening to feedback and guiding the team through change and driving continuous improvement
  • Guide, teach, and mentor the team, providing feedback and moderating discussions
  • Represent the interests of the team in cross-functional meetings
  • Maintain and proactively share knowledge of current technology and industry trends
  • Work closely with peers to ensure the team is aligning with cloud native, lean/Agile/DevOps & 12 Factor Application best practices, ensuring rapid value delivery with quality
  • Collaborate with other Principal Engineers to drive engineering best practices around testing, CI/CD, GitOps, TDD, architectural alignment, and relentless automation
  • Excellent understanding and familiarity with Cloud Native and 12 Factor Principles, Microservices, Lean Principles, DevOps, Test Driven Development (TDD), Extreme Programming (XP), Observability / Monitoring



Required Skills

  • Coding experience in Java
  • Extensive hands-on experience working with AWS cloud products and services
  • Experience with popular open-source software such as Postgres, RabbitMQ, Elasticsearch, Redis and Couchbase
  • Experience working with NodeJS, React/Redux, Docker Swarm, Kubernetes
  • Experience with development frameworks such as the Spring/Spring Boot framework, Hibernate, and knowledge of advanced SQL
  • Proficiency with modern object-oriented languages/frameworks, Terraform, Kubernetes, AWS, Data Streaming
  • Knowledge of containers and container orchestration platforms, preferably Kubernetes
  • Experience delivering services using distributed architectures: Microservices, SOA, RESTful APIs and data integration architectures
  • Advanced architecture and system design skills and principles
  • Excellent organizational skills and the ability to drive a cross-team strategic project or initiative
  • Solid coaching, mentorship and technical leadership to help others grow
  • Able to drive consensus/commitment within and across teams and departments
  • Advanced critical thinking and problem solving on complex issues and customer concerns
  • Strategic thinker beyond immediate needs, considering the longer term
  • Excellent communication skills, with the ability to communicate highly complex technical concepts
  • Demonstrate a high level of empathy with internal colleagues, stakeholders and customers

Read more
Digitalshakha
Saurabh Deshmukh
Posted by Saurabh Deshmukh
Bengaluru (Bangalore), Mumbai, Hyderabad
1 - 5 yrs
₹2L - ₹10L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+9 more

Main tasks

  • Supervision of the CI/CD process for automated builds and deployments of web services, web applications and desktop tools in the cloud and container environment
  • Responsibility for the operations part of a DevOps organization, especially for development with container technology and orchestration, e.g. with Kubernetes
  • Installation, operation and monitoring of web applications in cloud data centers, both for development and test and for the operation of our own production cloud
  • Implementation of installations of the solution, especially in the container context
  • Introduction, maintenance and improvement of installation solutions for development in the desktop and server environment as well as in the cloud and with on-premise Kubernetes
  • Maintenance of the system installation documentation and delivery of trainings
  • Execution of internal software tests and support of involved teams and stakeholders
  • Hands-on experience with Azure DevOps.

Qualification profile

  • Bachelor's or master's degree in communications engineering, electrical engineering, physics or a comparable qualification
  • Experience in software
  • Installation and administration of Linux and Windows systems, including network and firewalling aspects
  • Experience with build and deployment automation using tools like Jenkins, Gradle, Argo, ArangoDB or similar, as well as system scripting (Bash, PowerShell, etc.)
  • Interest in the operation and monitoring of applications in virtualized and containerized environments, in the cloud and on-premise
  • Server environments, especially application, web and database servers
  • Knowledge of VMware/K3d/Rancher is an advantage
  • Good spoken and written knowledge of English


Read more
Gipfel & Schnell Consultings Pvt Ltd
Aravind Kumar
Posted by Aravind Kumar
Bengaluru (Bangalore)
6 - 12 yrs
₹20L - ₹40L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+13 more

Job Description:


• Drive end-to-end automation from GitHub/GitLab/Bitbucket to deployment, observability and enabling the SRE activities

• Guide operations support (setup, configuration, management, troubleshooting) of digital platforms and applications

• Solid understanding of DevSecOps workflows that support CI, CS, CD, CM, CT.

• Deploy, configure, and manage SaaS and PaaS cloud platforms and applications

• Provide Level 1 (OS, patching) and Level 2 (app server instance troubleshooting) support

• DevOps programming: writing scripts; building operations/server instance/app/DB monitoring tools; setting up and managing the continuous build and dev project management environment (Jenkins X/GitHub Actions/Tekton, Git, Jira); designing secure networks, systems, and application architectures

• Collaborating with cross-functional teams to ensure secure product development

• Disaster recovery, network forensics analysis, and pen-testing solutions

• Planning, researching, and developing security policies, standards, and procedures

• Awareness training of the workforce on information security standards, policies, and best practices

• Installation and use of firewalls, data encryption and other security products and procedures

• Maturity in understanding compliance, policy and cloud governance, and the ability to identify and execute automation.

• At Wesco, we discuss more about solutions than problems. We celebrate innovation and creativity.

Read more
CodeCraft Technologies Private Limited
Agency job
via Bullhorn Consultants by Sai Kiran R
Bengaluru (Bangalore)
7 - 12 yrs
₹1L - ₹15L / yr
Shell Scripting
skill iconPython
Ansible
Terraform
DevOps

Roles and Responsibilities:

• Gather and analyse cloud infrastructure requirements

• Automate system tasks and infrastructure using a scripting language (Shell/Python/Ruby preferred), with configuration management tools (Ansible/Puppet/Chef), service registry and discovery tools (Consul, Vault, etc.), infrastructure orchestration tools (Terraform, CloudFormation), and automated imaging tools (Packer)

• Support existing infrastructure, analyse problem areas and come up with solutions

• An eye for monitoring – the candidate should be able to look at complex infrastructure and figure out what to monitor and how

• Work along with the Engineering team to help out with infrastructure / network automation needs

• Deploy infrastructure as code and automate as much as possible

• Manage a team of DevOps engineers


Desired Profile:

• Understanding of provisioning of bare metal and virtual machines

• Working knowledge of configuration management tools like Ansible/Chef/Puppet, Redfish

• Experience in scripting languages like Ruby/Python/shell scripting (a small Terraform-wrapper sketch follows this list)

• Working knowledge of IP networking, VPNs, DNS, load balancing, firewalling & IPS concepts

• Strong Linux/Unix administration skills

• Self-starter who can implement with minimal guidance

• Hands-on experience setting up CI/CD from scratch in Jenkins

• Experience with managing K8s infrastructure
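
As a rough sketch of the scripting plus IaC combination above (illustrative only; the working directory is an assumption), the snippet below wraps `terraform plan` so it can run non-interactively inside a Jenkins stage.

    import subprocess
    import sys

    def terraform_plan(workdir="infra/"):
        """Run terraform init + plan; return the plan's exit code."""
        subprocess.run(["terraform", "init", "-input=false"], cwd=workdir, check=True)
        result = subprocess.run(
            ["terraform", "plan", "-input=false", "-detailed-exitcode"],
            cwd=workdir,
        )
        # -detailed-exitcode: 0 = no changes, 1 = error, 2 = changes present
        return result.returncode

    if __name__ == "__main__":
        sys.exit(0 if terraform_plan() in (0, 2) else 1)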

Read more
Devkraft Technologies Private Limited
Gurugram, Bengaluru (Bangalore)
3 - 7 yrs
₹5L - ₹10L / yr
DevOps
skill iconKubernetes
skill iconDocker
Terraform
Scripting

Responsibilities

• Provisioning and de-provisioning AWS accounts for internal customers

• Work alongside systems and development teams to support the transition and operation of client websites/applications in and out of AWS

• Deploying, managing, and operating AWS environments

• Identifying appropriate use of AWS operational best practices

• Estimating AWS costs and identifying operational cost control mechanisms

• Keep technical documentation up to date

• Proactively keep up to date on AWS services and developments

• Create (where appropriate) automation in order to streamline provisioning and de-provisioning processes

• Lead certain data/service migration projects


Job Requirements

• Experience provisioning, operating, and maintaining systems running on AWS

• Experience with Azure/AWS

• Ability to provide AWS operations and deployment guidance and best practices throughout the lifecycle of a project

• Experience with application/data migration to/from AWS

• Experience with NGINX and the HTTP protocol

• Experience with configuration and management software such as Git

• Strong analytical and problem-solving skills

• Deployment experience using common AWS technologies like VPC, regionally distributed EC2 instances, Docker, and more

• Ability to work in a collaborative environment

• Detail-oriented, strong work ethic and a high standard of excellence

• A fast learner and achiever who sets high personal goals

• Must be able to work on multiple projects and consistently meet project deadlines

Read more
Basik Marketing PVT LTD

at Basik Marketing PVT LTD

2 candid answers
Naveen G
Posted by Naveen G
Bengaluru (Bangalore)
6 - 10 yrs
₹15L - ₹22L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Automation
+4 more

As a DevOps Engineer with experience in Kubernetes, you will be responsible for leading and managing a team of DevOps engineers in the design, implementation, and maintenance of the organization's infrastructure. You will work closely with software developers, system administrators, and other IT professionals to ensure that the organization's systems are efficient, reliable, and scalable. 

Specific responsibilities will include: 

  • Leading the team in the development and implementation of automation and continuous delivery pipelines using tools such as Jenkins, Terraform, and Ansible. 
  • Managing the organization's infrastructure using Kubernetes, including deployment, scaling, and monitoring of applications. 
  • Ensuring that the organization's systems are secure and compliant with industry standards. 
  • Collaborating with software developers to design and implement infrastructure as code. 
  • Providing mentorship and technical guidance to team members. 
  • Troubleshooting and resolving technical issues in collaboration with other IT professionals. 
  • Participating in the development and maintenance of the organization's disaster recovery and incident response plans. 

To be successful in this role, you should have strong leadership skills and experience with a variety of DevOps and infrastructure tools and technologies. You should also have excellent communication and problem-solving skills, and be able to work effectively in a fast-paced, dynamic environment. 

Read more
Kwalee

at Kwalee

3 candid answers
1 video
Zoheb Ahmed
Posted by Zoheb Ahmed
Bengaluru (Bangalore)
1 - 7 yrs
Best in industry
DevOps
Nginx
skill iconPython
Perl
Chef
+3 more
  • Job Title - DevOps Engineer

  • Reports Into - Lead DevOps Engineer

  • Location - India


A Little Bit about Kwalee….

Kwalee is one of the world’s leading multiplatform game developers and publishers, with well over 900 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Airport Security and Makeover Studio 3D. We also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope, Die by the Blade and Scathe.


What’s In It For You?

  • Hybrid working - 3 days in the office, 2 days remote/ WFH is the norm

  • Flexible working hours - we trust you to choose how and when you work best

  • Profit sharing scheme - we win, you win 

  • Private medical cover - delivered through BUPA

  • Life Assurance - for long term peace of mind

  • On site gym - take care of yourself

  • Relocation support - available

  • Quarterly Team Building days - we’ve done Paintballing, Go Karting & even Robot Wars

  • Pitch and make your own games on Creative Wednesdays! (https://www.kwalee.com/blog/inside-kwalee/what-are-creative-wednesdays/)


Are You Up To The Challenge?

As a DevOps Engineer you have a passion for automation, security and building reliable expandable systems. You develop scripts and tools to automate deployment tasks and monitor critical aspects of the operation, resolve engineering problems and incidents. Collaborate with architects and developers to help create platforms for the future.


Your Team Mates

The DevOps team works closely with game developers, front-end and back-end server developers, making, updating and monitoring application stacks in the cloud. Each team member has specific responsibilities, with their own projects to manage, and brings their own ideas to how the projects should work. Everyone strives for the most efficient, secure and automated delivery of application code and supporting infrastructure.


What Does The Job Actually Involve?

  • Find ways to automate tasks and monitoring systems to continuously  improve our systems.

  • Develop scripts and tools to make our infrastructure resilient and efficient.

  • Understand our applications and services and keep them running smoothly.


Your Hard Skills

  • Minimum 1 year of experience in a DevOps engineering role

  • Deep experience with Linux and Unix systems

  • Networking basics knowledge (named, nginx, etc.)

  • Some coding experience (Python, Ruby, Perl, etc.)

  • Experience with common automation tools (e.g. Chef, Terraform, etc.)

  • AWS experience is a plus

  • A creative mindset motivated by challenges and constantly striving for the best 


Your Soft Skills

Kwalee has grown fast in recent years but we’re very much a family of colleagues. We welcome people of all ages, races, colours, beliefs, sexual orientations, genders and circumstances, and all we ask is that you collaborate, work hard, ask questions and have fun with your team and colleagues. 

We don’t like egos or arrogance and we love playing games and celebrating success together. If that sounds like you, then please apply.


A Little More About Kwalee

Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts.

Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle.

We have an amazing team of experts collaborating daily between our studios in Leamington Spa, Lisbon, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, Cyprus, the Philippines and many more places around the world. We’ve recently acquired our first external studio, TicTales, which is based in France. 

We have a truly global team making games for a global audience, and it’s paying off: - Kwalee has been voted the Best Large Studio and Best Leadership Team at the TIGA Awards (Independent Game Developers’ Association) and our games have been downloaded in every country on earth - including Antarctica!

Read more
Bengaluru (Bangalore)
2 - 6 yrs
₹8L - ₹20L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+3 more
Job Description:

About BootLabs

https://www.bootlabs.in/

-We are a Boutique Tech Consulting partner, specializing in Cloud Native Solutions. 
-We are obsessed with anything “CLOUD”. Our goal is to seamlessly automate the development lifecycle, and modernize infrastructure and its associated applications.
-With a product mindset, we enable start-ups and enterprises on the cloud
transformation, cloud migration, end-to-end automation and managed cloud services. 
-We are eager to research, discover, automate, adapt, empower and deliver quality solutions on time.
-We are passionate about customer success. With the right blend of experience and exuberant youth in our in-house team, we have significantly impacted customers.




Technical Skills:


Expertise in any one hyperscaler (AWS/Azure/GCP), including basic services like networking, data and workload management.

  • AWS
              Networking: VPC, VPC Peering, Transit Gateway, Route Tables, Security Groups, etc.
              Data: RDS, DynamoDB, Elasticsearch
              Workload: EC2, EKS, Lambda, etc.
  • Azure
              Networking: VNET, VNET Peering
              Data: Azure MySQL, Azure MSSQL, etc.
              Workload: AKS, Virtual Machines, Azure Functions
  • GCP
              Networking: VPC, VPC Peering, Firewall, Flow Logs, Routes, Static and External IP Addresses
              Data: Cloud Storage, Dataflow, Cloud SQL, Firestore, Bigtable, BigQuery
              Workload: GKE, Instances, App Engine, Batch, etc.

  • Experience in any one of the CI/CD tools (GitLab/GitHub/Jenkins), including runner setup, templating and configuration.
  • Kubernetes or Ansible experience (EKS/AKS/GKE): basics like pods, deployments, networking, service mesh; has used a package manager like Helm.
  • Scripting experience (Bash/Python), automation in pipelines when required, system services.
  • Infrastructure automation (Terraform/Pulumi/CloudFormation): write modules, set up pipelines and version the code (a Pulumi sketch appears at the end of this section).

Optional:

Experience in any programming language is not required but is appreciated.
Good experience in GIT, SVN or any other code management tool is required.
DevSecops tools like (Qualys/SonarQube/BlackDuck) for security scanning of artifacts, infrastructure and code.
Observability tools (Opensource: Prometheus, Elasticsearch, Open Telemetry; Paid: Datadog,
24/7, etc)
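
For illustration of the infrastructure-automation point above (a minimal Pulumi-in-Python sketch; the bucket name and tags are assumptions, not BootLabs infrastructure), the following would be the __main__.py of a small Pulumi project.

    import pulumi
    import pulumi_aws as aws

    # A tagged S3 bucket for build artifacts, managed as code.
    artifacts = aws.s3.Bucket(
        "build-artifacts",
        tags={"managed-by": "pulumi", "env": "dev"},
    )

    # Export the generated bucket name so pipelines can consume it.
    pulumi.export("artifacts_bucket", artifacts.id)
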
Read more
EnterpriseMinds

at EnterpriseMinds

2 recruiters
phani kalyan
Posted by phani kalyan
Bengaluru (Bangalore)
7 - 9 yrs
₹10L - ₹35L / yr
DevOps
skill iconKubernetes
skill iconDocker
skill iconAmazon Web Services (AWS)
Windows Azure
+5 more
  • Candidate should have good platform experience on Azure with Terraform.
  • The DevOps engineer needs to help developers create the pipelines and K8s deployment manifests.
  • Good to have experience migrating data from AWS to Azure.
  • Manage/automate infrastructure using Terraform. Jenkins is the key CI/CD tool we use, and it will be used to run these Terraform workflows.
  • VMs are to be provisioned on Azure Cloud and managed.
  • Good hands-on experience with networking on the cloud is required.
  • Ability to set up databases on VMs as well as managed DBs, with proper setup of cloud-hosted microservices to communicate with the DB services.
  • Kubernetes, Storage, Key Vault, Networking (load balancing and routing) and VMs are the key areas of infrastructure expertise, and these are essential.
  • The requirement is to administer the Kubernetes cluster end to end (application deployment, managing namespaces, load balancing, policy setup, using blue-green/canary deployment models, etc.).
  • Experience in AWS is desirable.
  • Python experience is optional; however, PowerShell is mandatory.
  • Know-how on the use of GitHub.
  • Administration of Azure Kubernetes Service.
Read more
Bengaluru (Bangalore), Hyderabad, Pune, Chennai, Jaipur
10 - 14 yrs
₹1L - ₹15L / yr
Ant
Maven
CI/CD
skill iconJenkins
skill iconGitHub
+16 more

DevOps Architect 

Experience: 10-12+ years of relevant experience in DevOps
Locations : Bangalore, Chennai, Pune, Hyderabad, Jaipur.

Qualification:
• Bachelors or advanced degree in Computer science, Software engineering or equivalent is required.
• Certifications in specific areas are desired

Technical Skillset: Skills Proficiency level

  • Build tools (Ant or Maven) - Expert
  • CI/CD tool (Jenkins or Github CI/CD) - Expert
  • Cloud DevOps (AWS CodeBuild, CodeDeploy, Code Pipeline etc) or Azure DevOps. - Expert
  • Infrastructure As Code (Terraform, Helm charts etc.) - Expert
  • Containerization (Docker, Docker Registry) - Expert
  • Scripting (linux) - Expert
  • Cluster deployment (Kubernetes) & maintenance - Expert
  • Programming (Java) - Intermediate
  • Application Types for DevOps (Streaming like Spark, Kafka, Big data like Hadoop etc) - Expert
  • Artifactory (JFrog) - Expert
  • Monitoring & Reporting (Prometheus, Grafana, PagerDuty etc.) - Expert
  • Ansible, MySQL, PostgreSQL - Intermediate
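
To illustrate the monitoring item above (a hedged sketch only; the queue-depth source is a placeholder, not a requirement of this role), the snippet below exposes one custom metric for Prometheus to scrape.

    import random
    import time

    from prometheus_client import Gauge, start_http_server

    QUEUE_DEPTH = Gauge("app_queue_depth", "Jobs waiting in the work queue")

    def read_queue_depth() -> int:
        return random.randint(0, 100)  # placeholder for a real backend call

    if __name__ == "__main__":
        start_http_server(9100)  # scrape target: http://host:9100/metrics
        while True:
            QUEUE_DEPTH.set(read_queue_depth())
            time.sleep(15)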


• Source Control (like Git, Bitbucket, Svn, VSTS etc)
• Continuous Integration (like Jenkins, Bamboo, VSTS )
• Infrastructure Automation (like Puppet, Chef, Ansible)
• Deployment Automation & Orchestration (like Jenkins, VSTS, Octopus Deploy)
• Container Concepts (Docker)
• Orchestration (Kubernetes, Mesos, Swarm)
• Cloud (like AWS, Azure, GoogleCloud, Openstack)

Roles and Responsibilities

• DevOps architect should automate the process with proper tools.
• Developing appropriate DevOps channels throughout the organization.
• Evaluating, implementing and streamlining DevOps practices.
• Establishing a continuous build environment to accelerate software deployment and development processes.
• Engineering general and effective processes.
• Helping operation and developers teams to solve their problems.
• Supervising, Examining and Handling technical operations.
• Providing a DevOps Process and Operations.
• Capacity to handle teams with leadership attitude.
• Must possess excellent automation skills and the ability to drive initiatives to automate processes.
• Building strong cross-functional leadership skills and working together with the operations and engineering teams to make sure that systems are scalable and secure.
• Excellent knowledge of software development and software testing methodologies along with configuration management practices in Unix and Linux-based environment.
• Possess sound knowledge of cloud-based environments.
• Experience in handling automated deployment CI/CD tools.
• Must possess excellent knowledge of infrastructure automation tools (Ansible, Chef, and Puppet).
• Hand on experience in working with Amazon Web Services (AWS).
• Must have strong expertise in operating Linux/Unix environments and scripting languages like Python, Perl, and Shell.
• Ability to review deployment and delivery pipelines i.e., implement initiatives to minimize chances of failure, identify bottlenecks and troubleshoot issues.
• Previous experience in implementing continuous delivery and DevOps solutions.
• Experience in designing and building solutions to move data and process it.
• Must possess expertise in any of the coding languages depending on the nature of the job.
• Experience with containers and container orchestration tools (AKS, EKS, OpenShift, Kubernetes, etc)
• Experience with version control systems a must (GIT an advantage)
• Belief in "Infrastructure as Code" (IaC), including experience with open-source tools such as Terraform
• Treats best practices for security as a requirement, not an afterthought
• Extensive experience with version control systems like GitLab and their use in release management, branching, merging, and integration strategies
• Experience working with Agile software development methodologies
• Proven ability to work on cross-functional Agile teams
• Mentor other engineers in best practices to improve their skills
• Creating suitable DevOps channels across the organization.
• Designing efficient practices.
• Delivering comprehensive best practices.
• Managing and reviewing technical operations.
• Ability to work independently and as part of a team.
• Exceptional communication skills, be knowledgeable about the latest industry trends, and highly innovative
Read more
Hiringhut

Hiringhut

Agency job
Bengaluru (Bangalore)
3 - 7 yrs
₹12L - ₹18L / yr
skill iconJava
J2EE
skill iconSpring Boot
Hibernate (Java)
skill iconAmazon Web Services (AWS)
+3 more
·         Degree in Computer Science/Engineering.
·         3+ years of Java-backed application development and implementation experience.
·         Minimum of one year of experience in Cloud, AWS engineering & development.
·         Demonstrated knowledge of distributed and scalable systems.
·         Experience with AWS services is a big plus, such as EC2, ECS, ECR, Lambda, ElastiCache, API Gateway.
·         Knowledge of API design standards, patterns and best practices, especially Swagger and OpenAPI 3.0, REST, SOAP, MQ, JSON, Microservices, etc.
·         Java Microservices, RESTful Web Services.
·         Spring Boot, Spring Cloud, Hibernate.
·         JMS, queues, JBoss/WildFly.
·         Tools - JUnit, EasyMock, Mockito, Docker, Kubernetes, Terraform.
Read more
Bengaluru (Bangalore)
4 - 6 yrs
₹6L - ₹10L / yr
RESTful APIs
skill iconPython
TypeScript
skill iconNodeJS (Node.js)
skill iconDocker
+7 more
Role: Cloud Automation Engineer
Job Description:
• Contribute to customer discussions in collecting the requirement
• Engage in internal and customer POCs to realize the potential solutions envisaged for the customers.
• Design, develop, and migrate vRA blueprints and vRO workflows; strong hands-on knowledge of vROps and integrations with applications and VMware solutions.
• Develop automation scripts to support the design and implementation of VMware projects.
Qualification:
• Maintain current, high-level technical knowledge of the entire VMware product portfolio and future product direction, along with in-depth knowledge in selected areas
• Maintain deep technical and business knowledge of cloud computing and networking applications, industry directions, and trends.
• Experience with REST APIs and/or Python programming; TypeScript/NodeJS backend experience (a minimal REST sketch follows this list)
• Experience with Kubernetes
• Familiarity with DevOps tools like Ansible, Puppet, Terraform
• End to end experience in Architecture, Design and Development of VMware Cloud Automation suite with good exposure to VMware products and/or Solutions.
• Hands-on experience in automation, coding, debugging and release.
• Sound process knowledge from requirement gathering, implementation, deployment and Support.
• Experience in working with global teams, customers and partners with solid communication skills.
• VMware CMA certification would be a plus
• Academic background in MS/BE/B-Tech/ IT/CS/ECE/EE would be preferred.
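
As an illustration of the REST API and Python experience asked for above, the sketch below triggers a hypothetical automation endpoint; the URL, route, token variable and payload are placeholders, not a documented VMware API.

    import os

    import requests

    def trigger_workflow(base_url, workflow_id, payload):
        resp = requests.post(
            f"{base_url}/api/workflows/{workflow_id}/run",  # hypothetical route
            json=payload,
            headers={"Authorization": f"Bearer {os.environ['API_TOKEN']}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        print(trigger_workflow("https://automation.example.com", "provision-vm",
                               {"cpu": 2, "memory_gb": 8}))
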
Read more
B2B2C tech Web3 startup

B2B2C tech Web3 startup

Agency job
via Merito by Gaurav Bhosle
Bangalore
5 - 10 yrs
₹30L - ₹50L / yr
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)
Amazon EC2
Amazon S3
Amazon RDS
+2 more
Hi

 
 
Our client is a social commerce - web3 startup founded by IITB graduates who are experienced in retail, e-commerce and fintech.

We are looking for a Senior Platform Engineer responsible for handling our GCP/AWS clouds. The
candidate will be responsible for automating the deployment of cloud infrastructure and services to
support application development and hosting (architecting, engineering, deploying, and operationally
managing the underlying logical and physical cloud computing infrastructure).

Location: Bangalore

Reporting Manager: VP, Engineering
Job Description:
● Collaborate with teams to build and deliver solutions implementing serverless, microservice-based, IaaS, PaaS, and containerized architectures in GCP/AWS environments.
● Responsible for deploying highly complex, distributed transaction processing systems.
● Work on continuous improvement of the products through innovation and learning. Someone with a knack for benchmarking and optimization.
● Hiring, developing, and cultivating a reliable, high-performing cloud support team.
● Building and operating complex CI/CD pipelines at scale.
● Work with GCP services: Private Service Connect, Cloud Run, Cloud Functions, Pub/Sub, Cloud Storage, and networking in general.
● Collaborate with Product Management and Product Engineering teams to drive excellence in Google Cloud products and features.
● Ensure efficient data storage and processing functions in accordance with company security policies and best practices in cloud security.
● Ensure a scaled database setup/monitoring with near-zero downtime.

Key Skills:
● Hands-on software development experience in Python, NodeJS, or Java
● 5+ years of Linux/Unix administration: monitoring, reliability, and security of Linux-based, online, high-traffic services and web/eCommerce properties
● 5+ years of production experience in large-scale cloud-based infrastructure (GCP preferred; a small sketch follows this list)
● Strong experience with log analysis and monitoring tools such as CloudWatch, Splunk, Dynatrace, Nagios, etc.
● Hands-on experience with AWS Cloud – EC2, S3 buckets, RDS
● Hands-on experience with Infrastructure as Code (e.g., CloudFormation, ARM, Terraform, Ansible, Chef, Puppet) and version control tools
● Hands-on experience with configuration management (Chef/Ansible)
● Experience in designing high-availability infrastructure and planning for disaster recovery solutions
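
As a small illustration of the GCP experience mentioned above (a sketch only, assuming application-default credentials), the snippet below uses the Google Cloud Python client to list buckets and their storage classes.

    from google.cloud import storage

    def list_buckets():
        client = storage.Client()
        for bucket in client.list_buckets():
            print(bucket.name, bucket.storage_class)

    if __name__ == "__main__":
        list_buckets()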

Regards
Team Merito
Read more
upraisal
Bengaluru (Bangalore)
5 - 15 yrs
₹5L - ₹50L / yr
DevOps
skill iconKubernetes
skill iconDocker
Shell Scripting
Perl
+5 more
  • Work towards improving the following 4 verticals - scalability, availability, security, and cost, for company's workflows and products.
  • Help in provisioning, managing, and optimizing cloud infrastructure in AWS (IAM, EC2, RDS, CloudFront, S3, ECS, Lambda, ELK, etc.); a small cost-check sketch follows this list
  • Work with the development teams to design scalable, robust systems using cloud architecture for both 0-to-1 and 1-to-100 products.
  • Drive technical initiatives and architectural service improvements.
  • Be able to predict problems and implement solutions that detect and prevent outages.
  • Mentor/manage a team of engineers.
  • Design solutions with failure scenarios in mind to ensure reliability.
  • Document rigorously to keep track of all changes/upgrades to the infrastructure, and share knowledge with the rest of the team
  • Identify vulnerabilities during development with actionable information to empower developers to remediate vulnerabilities
  • Automate the build and testing processes to consistently integrate code
  • Manage changes to documents, software, images, large web sites, and other collections of code, configuration, and metadata among disparate teams
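
As an illustrative cost-hygiene check for the cost vertical above (a sketch only; the region is an assumption), the snippet below lists unattached EBS volumes that may be candidates for snapshot-and-delete.

    import boto3

    def unattached_volumes(region="ap-south-1"):
        """Return (volume_id, size_gib) for EBS volumes not attached to any instance."""
        ec2 = boto3.client("ec2", region_name=region)
        paginator = ec2.get_paginator("describe_volumes")
        volumes = []
        for page in paginator.paginate(
            Filters=[{"Name": "status", "Values": ["available"]}]
        ):
            volumes.extend((v["VolumeId"], v["Size"]) for v in page["Volumes"])
        return volumes

    if __name__ == "__main__":
        for volume_id, size_gib in unattached_volumes():
            print(f"{volume_id}: {size_gib} GiB, not attached to any instance")
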
Read more
Rapidly growing fintech SaaS firm that propels business grow

Rapidly growing fintech SaaS firm that propels business grow

Agency job
via Jobdost by Mamatha A
Bengaluru (Bangalore)
5 - 10 yrs
₹25L - ₹35L / yr
DevOps
skill iconKubernetes
skill iconDocker
skill iconAmazon Web Services (AWS)
Windows Azure
+8 more

What is the role?

As a DevOps Engineer, you are responsible for setting up and maintaining the GIT repository, DevOps tools like Jenkins, UCD, Docker, Kubernetes, Jfrog Artifactory, Cloud monitoring tools, and Cloud security.

Key Responsibilities

  • Set up, configure, and maintain GIT repos, Jenkins, UCD, etc. for multi-hosting cloud environments.
  • Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
  • Work on Docker images and maintain Kubernetes clusters.
  • Develop and maintain the automation scripts using Ansible or other available tools.
  • Maintain and monitor cloud Kubernetes Clusters and patching when necessary.
  • Work on Cloud security tools to keep applications secured.
  • Participate in software development lifecycle, specifically infra design, execution, and debugging required to achieve a successful implementation of integrated solutions within the portfolio.
  • Have the necessary technical and professional expertise.

What are we looking for?

  • Minimum 5-12 years of experience in the IT industry.
  • Expertise in implementing and managing DevOps CI/CD pipelines (a Jenkins status-check sketch follows this list).
  • Experience in DevOps automation tools; well versed with DevOps frameworks and Agile.
  • Working knowledge of scripting using Shell, Python, Terraform, Ansible, Puppet, or Chef.
  • Experience and good understanding of any Cloud like AWS, Azure, or Google cloud.
  • Knowledge of Docker and Kubernetes is required.
  • Proficient in troubleshooting skills with proven abilities in resolving complex technical issues.
  • Experience working with ticketing tools.
  • Middleware technologies knowledge or database knowledge is desirable.
  • Experience with Jira is a plus.
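
To illustrate the CI/CD point above (a hedged sketch; the Jenkins URL, job name and credentials are placeholders), the snippet below polls the Jenkins JSON API for the result of a job's last build.

    import requests

    def last_build_result(jenkins_url, job, user, api_token):
        resp = requests.get(
            f"{jenkins_url}/job/{job}/lastBuild/api/json",
            auth=(user, api_token),
            timeout=15,
        )
        resp.raise_for_status()
        # "SUCCESS", "FAILURE", "UNSTABLE", or None while the build is still running.
        return resp.json().get("result")

    if __name__ == "__main__":
        print(last_build_result("https://jenkins.example.com", "deploy-prod",
                                "ci-bot", "api-token-here"))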

What can you look for?

A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts, yet maintain the quality of content, interact, and share your ideas and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits of being here.

We are

It is a rapidly growing fintech SaaS firm that propels business growth while focusing on human motivation. Backed by Giift and Apis Partners Growth Fund II, it offers a suite of three products - Plum, Empuls, and Compass. It works with more than 2,000 clients across 10+ countries and over 2.5 million users. Headquartered in Bengaluru, it is a 300+ strong team with four global offices in San Francisco, Dublin, Singapore, and New Delhi.

Way forward

We look forward to connecting with you. As you may take time to review this opportunity, we will wait for a reasonable time of around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. We however assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. The candidates will be kept informed and updated on the feedback and application status.

Read more
Pixuate

at Pixuate

1 recruiter
Ramya Kukkaje
Posted by Ramya Kukkaje
Bengaluru (Bangalore)
5 - 9 yrs
₹10L - ₹20L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Oracle Cloud
+5 more
  • Pixuate is a deep-tech AI start-up enabling businesses to make smarter decisions with our edge-based video analytics platform, offering innovative solutions across traffic management, industrial digital transformation, and smart surveillance. We aim to serve enterprises globally as a preferred partner for the digitization of visual information.

    Job Description

    We at Pixuate are looking for highly motivated and talented Senior DevOps Engineers to support building the next generation of innovative, deep-tech AI based products. If you are someone who has a passion for building great software, has an analytical mindset, enjoys solving complex problems, thrives in a challenging environment, is self-driven, constantly explores and learns new technologies, and has the ability to succeed on your own merits and fast-track your career growth, we would love to talk!

    What do we expect from this role?

    • This role’s key area of focus is to co-ordinate and manage the product from development through deployment, working with rest of the engineering team to ensure smooth functioning.
    • Work closely with the Head of Engineering in building out the infrastructure required to deploy, monitor and scale the services and systems.
    • Act as the technical expert, innovator, and strategic thought leader within the Cloud Native Development, DevOps and CI/CD pipeline technology engineering discipline.
    • Should be able to understand how technology works and how various structures fall in place, with a high-level understanding of working with various operating systems and their implications.
    • Troubleshoots basic software or DevOps stack issues

    You would be great at this job, if you have below mentioned competencies

    • Tech /M.Tech/MCA/ BSc / MSc/ BCA preferably in Computer Science
    • 5+ years of relevant work experience
    • Knowledge of various DevOps tools and technologies
    • Should have worked on tools like Docker, Kubernetes, Ansible in a production environment for data intensive systems.
    • Experience in developing Continuous Integration / Continuous Delivery (CI/CD) pipelines, preferably using Jenkins, scripting (Shell / Python), and Git and Git workflows
    • Experience implementing role based security, including AD integration, security policies, and auditing in a Linux/Hadoop/AWS environment.
    • Experience with the design and implementation of big data backup/recovery solutions.
    • Strong Linux fundamentals and scripting; experience as Linux Admin is good to have.
    • Working knowledge in Python is a plus
    • Working knowledge of TCP/IP networking, SMTP, HTTP, load-balancers (ELB, HAProxy) and high availability architecture is a plus
    • Strong interpersonal and communication skills
    • Proven ability to complete projects according to outlined scope and timeline
    • Willingness to travel within India and internationally whenever required
    • Demonstrated leadership qualities in past roles

    More about Pixuate:

    Pixuate, owned by Cocoslabs Innovative Solutions Pvt. Ltd., is a leading AI startup building the most advanced Edge-based video analytics products. We are recognized for our cutting-edge R&D in deep learning, computer vision and AI and we are solving some of the most challenging problems faced by enterprises. Pixuate’s plug-and-play platform revolutionizes monitoring, compliance to safety, and efficiency improvement for Industries, Banks & Enterprises by providing actionable real-time insights leveraging CCTV cameras.

    We have enabled our customers such as Hindustan Unilever, Godrej, Secuira, L&T, Bigbasket, Microlabs, Karnataka Bank etc and rapidly expanding our business to cater to the needs of Manufacturing & Logistics, Oil and Gas sector.

    Rewards & Recognitions:

    Why join us?

    You will get an opportunity to work with the founders and be part of 0 to 1 journey& get coached and guided. You will also get an opportunity to excel your skills by being innovative and contributing to the area of your personal interest. Our culture encourages innovation, freedom and rewards high performers with faster growth and recognition.   

    Where to find us?

    Website: http://pixuate.com/

    Linked in: https://www.linkedin.com/company/pixuate-ai     

     

    Place of Work:

    Work from Office – Bengaluru
Read more
Saas product startup all-in-one data stack for organizations

Saas product startup all-in-one data stack for organizations

Agency job
via Qrata by Rayal Rajan
Bengaluru (Bangalore)
1 - 3 yrs
₹15L - ₹20L / yr
skill iconDocker
skill iconKubernetes
DevOps
Terraform
Responsibilities:
● Explore
○ As a DevOps engineer, you will have multiple ways, tools & technologies to solve a particular problem. We want you to take things into your own hands and figure out the best way to solve it.
● PDCT
○ Plan, design, code & write test cases for the problems you are solving.
● Tuning
○ Help to tune performance and ensure high availability of infrastructure, including reviewing system and application logs.
● Security
○ Work on code-level application security.
● Deploy
○ Deploy, manage and operate scalable, highly available, and fault-tolerant systems in client environments.

Technologies (4 out of 5 are required):
● Terraform*
● Docker*
● Kubernetes*
● Bash Scripting
● SQL
(* marked are a must)

The challenges are great (as are the rewards). If you are looking to take these DevOps challenges head on, wish to learn a great deal from them, and want to contribute to the company along the way, this is the role for you.

Ready?

If developing an impactful product for an early-stage startup sounds appealing to you, let's have a conversation. (Confidential, of course.)
Read more
Anarock Technology

at Anarock Technology

1 video
3 recruiters
Arpita Saha
Posted by Arpita Saha
Bengaluru (Bangalore)
4 - 7 yrs
₹5L - ₹12L / yr
skill iconDocker
Terraform
skill iconAmazon Web Services (AWS)
skill iconKubernetes
DevOps
+2 more

ApnaComplex is one of India’s largest and fastest-growing PropTech disruptors within the Society & Apartment Management business.  The SaaS based B2C platform is headquartered out of India’s tech start-up hub, Bangalore, with branches in 6 other cities. It currently empowers 3,600 Societies, managing over 6 Lakh Households in over 80 Indian cities to effortlessly manage all aspects of running large complexes seamlessly.

ApnaComplex is part of ANAROCK Group. ANAROCK Group is India's leading specialized real estate services company having diversified interests across the real estate value chain.

If it excites you to - drive innovation, create industry-first solutions, build new capabilities ground-up, and work with multiple new technologies, ApnaComplex is the place for you.

 

Must have-

 

  • Knowledge of Docker
  • Knowledge of Terraform
  • Knowledge of AWS

 

Good to have -

  • Kubernetes
  • Scripting language: PHP/Go Lang and Python
  • Webserver knowledge
  • Logging and monitoring experience
  • Test, build, design, deploy, and maintain continuous integration and continuous delivery processes using tools like Jenkins, Maven, Git, etc.
  • Build and maintain highly available production systems.
  • Must know how to choose the best tools and technologies which best fits the business needs. 
  • Develop software to integrate with internal back-end systems.
  • Investigate and resolve technical issues.
  • Problem-solving attitude.
  • Ability to automate test and deploy the code and monitor. 
  • Work in close coordination with the development and operations team such that the application is in line with performance according to the customer's expectation.
  • Lead and guide the team in identifying and implementing new technologies.

 

 

Skills that will help you build a success story with us

 

  • An ability to quickly understand and solve new problems
  • Strong interpersonal skills
  • Excellent data interpretation
  • Context-switching
  • Intrinsically motivated
  • A tactical and strategic track record for delivering research-driven results

 

Quick Glances:

 

 

ANAROCK Ethos - Values Over Value:

Our assurance of consistent ethical dealing with clients and partners reflects our motto - Values Over Value.

We value diversity within ANAROCK Group and are committed to offering equal opportunities in employment. We do not discriminate against any team member or applicant for employment based on nationality, race, color, religion, caste, gender identity / expression, sexual orientation, disability, social origin and status, indigenous status, political opinion, age, marital status or any other personal characteristics or status. ANAROCK Group values all talent and will do its utmost to hire, nurture and grow them.

Read more
Full Service Engineering and R&D based MNC

Full Service Engineering and R&D based MNC

Agency job
via Jobdost by Sathish Kumar
Bengaluru (Bangalore), Hyderabad
5 - 8 yrs
₹10L - ₹17L / yr
skill iconPython
Bash
Powershell
skill iconDocker
skill iconKubernetes
+6 more
Candidates MUST HAVE
  • Experience with Infrastructure-as-Code (IaC) tools like Terraform and CloudFormation.
  • Proficiency in cloud-native technologies and architectures (Docker/Kubernetes), CI/CD pipelines.
  • Good experience in JavaScript.
  • Expertise in Linux / Windows environments.
  • Good experience in scripting languages like PowerShell / Bash / Python.
  • Proficiency in revision control and DevOps best practices like Git
Read more
Top 3 Fintech Startup

Top 3 Fintech Startup

Agency job
via Jobdost by Sathish Kumar
Bengaluru (Bangalore)
2 - 5 yrs
₹6L - ₹13L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+5 more

Job Responsibilities:

Section 1 -

- Responsible for managing and providing L1 support to build, design, deploy, and maintain Cloud solutions on AWS.
- Implement, deploy, and maintain development, staging, and production environments on AWS.
- Familiarity with serverless architecture and AWS services such as Lambda, Fargate, EBS, Glue, etc. (a minimal Lambda sketch follows this section).
- Understanding of Infrastructure as Code and familiarity with related tools like Terraform, Ansible, CloudFormation, etc.
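
To make the serverless point above concrete, here is a minimal, hypothetical Terraform sketch of a Lambda function and its execution role. The artifact path, runtime, and names are assumptions rather than details from this posting.

    # Hypothetical Lambda function deployed from a locally built zip artifact.
    provider "aws" {
      region = "ap-south-1"   # placeholder region
    }

    resource "aws_iam_role" "lambda_exec" {
      name = "example-lambda-exec"   # placeholder role name
      assume_role_policy = jsonencode({
        Version = "2012-10-17"
        Statement = [{
          Action    = "sts:AssumeRole"
          Effect    = "Allow"
          Principal = { Service = "lambda.amazonaws.com" }
        }]
      })
    }

    resource "aws_lambda_function" "worker" {
      function_name = "example-worker"
      filename      = "build/worker.zip"   # placeholder build artifact
      handler       = "app.handler"
      runtime       = "python3.12"
      role          = aws_iam_role.lambda_exec.arn
    }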

Section 2 -

- Managing Windows and Linux machines, Kubernetes, Git, etc.
- Responsible for L1 management of server, network, container, storage, and database services on AWS.

Section 3 -

- Monitor production workload alerts in a timely manner and address issues quickly (a sample CloudWatch alarm sketch follows this section).
- Responsible for monitoring and maintaining the backup and DR process.
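
As one possible shape for the alerting work described in Section 3, here is a hypothetical Terraform sketch of a CloudWatch alarm that notifies an SNS topic. The metric, threshold, and names are illustrative assumptions only.

    # Hypothetical CPU alarm on EC2 that pushes notifications to an SNS topic.
    provider "aws" {
      region = "ap-south-1"   # placeholder region
    }

    resource "aws_sns_topic" "ops_alerts" {
      name = "example-ops-alerts"   # placeholder topic name
    }

    resource "aws_cloudwatch_metric_alarm" "high_cpu" {
      alarm_name          = "example-high-cpu"
      namespace           = "AWS/EC2"
      metric_name         = "CPUUtilization"
      statistic           = "Average"
      comparison_operator = "GreaterThanThreshold"
      threshold           = 80    # placeholder threshold (percent)
      period              = 300   # 5-minute evaluation window
      evaluation_periods  = 3
      alarm_actions       = [aws_sns_topic.ops_alerts.arn]
    }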

Section 4 -

- Responsible for documenting processes.
- Responsible for leading cloud implementation projects with end-to-end execution.


Qualifications:
 Bachelor of Engineering / MCA, preferably with an AWS or other cloud certification



Skills & Competencies

- Linux and Windows server management and troubleshooting.

- Experience with AWS services such as CloudFormation, EC2, RDS, VPC, EKS, ECS, Redshift, Glue, etc. (including AWS EKS).

- Kubernetes and container knowledge.

- Understanding of setting up AWS messaging, streaming, and queuing services (MSK, Kinesis, SQS, SNS, MQ); a minimal SNS-to-SQS sketch follows this section.

- Understanding of serverless architecture concepts.

- Strong understanding of networking concepts.

- Managing monitoring and alerting systems.

- Sound knowledge of database concepts such as data warehouses, data lakes, and ETL jobs.

- Good project management skills.

- Documentation skills.

- Understanding of backup and DR.

Soft Skills - Project management, process documentation
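
To illustrate the messaging and queuing services mentioned in the list above, here is a minimal, hypothetical Terraform sketch of an SNS topic fanning out to an SQS queue. Names are placeholders, and a real setup would also need a queue policy allowing SNS to deliver messages.

    # Hypothetical SNS-to-SQS fan-out; a queue policy granting SNS the
    # SendMessage permission would also be required in a real deployment.
    provider "aws" {
      region = "ap-south-1"   # placeholder region
    }

    resource "aws_sns_topic" "events" {
      name = "example-events"   # placeholder topic name
    }

    resource "aws_sqs_queue" "events_queue" {
      name = "example-events-queue"   # placeholder queue name
    }

    resource "aws_sns_topic_subscription" "events_to_queue" {
      topic_arn = aws_sns_topic.events.arn
      protocol  = "sqs"
      endpoint  = aws_sqs_queue.events_queue.arn
    }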



Ideal Candidate:

- AWS certified, with 2-4 years of hands-on and project execution experience.

- Someone who is interested in building sustainable cloud architecture with automation on AWS.

- Someone who is interested in learning and being challenged on a day-to-day basis.

- Someone who can take ownership of tasks and is willing to take the necessary action to get them done.

- Someone who is curious to analyze and solve complex problems.

- Someone who is honest about the quality of their work and is comfortable taking ownership of both their successes and failures.

Behavioral Traits

- We are looking for someone who wants to be part of a creative, innovation-driven environment with other team members.

- We are looking for someone who understands the importance of teamwork and individual ownership at the same time.

- We are looking for someone who can debate logically, respectfully disagree, admit when proven wrong, and learn from their mistakes and grow quickly.

Quber Technologies Limited
Manish Singh
Posted by Manish Singh
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹25L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Microsoft Windows Azure
+7 more

As a SaaS DevOps Engineer, you will be responsible for providing automated tooling and process enhancements for SaaS deployment, application and infrastructure upgrades, and production monitoring.

  • Development of automation scripts and pipelines for deployment and monitoring of new production environments.

  • Development of automation scripts for upgrades, hotfix deployments, and maintenance.

  • Work closely with Scrum teams and product groups to support the quality and growth of the SaaS services.

  • Collaborate closely with the SaaS Operations team on day-to-day production activities: handling alerts and incidents.

  • Assist the SaaS Operations team with customer-focused projects: migrations and feature enablement.

  • Write knowledge articles to document known issues and best practices.

  • Conduct regression tests to validate solutions or workarounds.

  • Work in a globally distributed team.

 

What achievements should you have so far?

  • Bachelor's or master’s degree in Computer Science, Information Systems, or equivalent.

  • Experience with containerization, deployment, and operations.

  • Strong knowledge of CI/CD processes (Git, Jenkins, Pipelines).

  • Good experience with Linux systems and Shell scripting.

  • Basic cloud experience, preferably oriented towards MS Azure.

  • Basic knowledge of containerized solutions (Helm, Kubernetes, Docker).

  • Good Networking skills and experience.

  • Terraform or CloudFormation knowledge will be considered a plus (a Helm-via-Terraform sketch follows this list).

  • Ability to analyze a task from a system perspective.

  • Excellent problem-solving and troubleshooting skills.

  • Excellent written and verbal communication skills; fluency in English and the local language.

  • Must be organized, thorough, autonomous, committed, flexible, customer-focused and productive.
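
Since the list above mentions Helm, Kubernetes, and Terraform together, here is a hypothetical sketch of driving a Helm release from Terraform (helm provider v2 syntax). The chart repository, chart name, namespace, and image tag are assumptions, not details of this role.

    # Hypothetical Helm release managed by Terraform.
    provider "helm" {
      kubernetes {
        config_path = "~/.kube/config"   # placeholder kubeconfig path
      }
    }

    resource "helm_release" "example_service" {
      name       = "example-service"
      repository = "https://charts.example.com"   # placeholder chart repository
      chart      = "example-service"
      namespace  = "staging"

      # Override a single chart value, e.g. the image tag built by CI.
      set {
        name  = "image.tag"
        value = "1.2.3"   # placeholder image tag
      }
    }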

Top Management Consulting Company

Agency job
via People First Consultants by Naveed Mohd
Gurugram, Bengaluru (Bangalore), Chennai
2 - 9 yrs
₹9L - ₹27L / yr
DevOps
Microsoft Windows Azure
GitLab
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+15 more
Greetings!!

We are looking for a technically driven MLOps Engineer for one of our premium clients.

COMPANY DESCRIPTION:
This Company is a global management consulting firm. We are the trusted advisor to the world's leading businesses, governments, and institutions. We work with leading organizations across the private, public and social sectors. Our scale, scope, and knowledge allow us to address


Key Skills
• Excellent hands-on, expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs, including automated testing frameworks and libraries such as pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts); a minimal Terraform-managed Deployment sketch follows this list
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store)
• Practical knowledge of delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or higher preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
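
As a small illustration of the Kubernetes artifacts and Infrastructure-as-Code skills above, here is a hypothetical Terraform sketch of a model-serving Deployment using the Kubernetes provider. The image, names, and replica count are assumptions rather than anything specified by this role.

    # Hypothetical Deployment for a model-serving API, managed via Terraform.
    provider "kubernetes" {
      config_path = "~/.kube/config"   # placeholder kubeconfig path
    }

    resource "kubernetes_deployment" "model_api" {
      metadata {
        name = "example-model-api"
      }

      spec {
        replicas = 2

        selector {
          match_labels = {
            app = "example-model-api"
          }
        }

        template {
          metadata {
            labels = {
              app = "example-model-api"
            }
          }

          spec {
            container {
              name  = "api"
              image = "registry.example.com/model-api:0.1.0"   # placeholder image
              port {
                container_port = 8080
              }
            }
          }
        }
      }
    }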