
We are looking for a Senior Platform Engineer responsible for managing our GCP/AWS cloud environments. The
candidate will be responsible for automating the deployment of cloud infrastructure and services to
support application development and hosting (architecting, engineering, deploying, and operationally
managing the underlying logical and physical cloud computing infrastructure).
Location: Bangalore
Reporting Manager: VP, Engineering
Job Description:
● Collaborate with teams to build and deliver solutions implementing serverless,
microservice-based, IaaS, PaaS, and containerized architectures in GCP/AWS environments.
● Deploy highly complex, distributed transaction processing systems.
● Work on continuous improvement of the products through innovation and learning; a knack for benchmarking and optimization is essential.
● Hire, develop, and cultivate a high-performing, reliable cloud support team.
● Build and operate complex CI/CD pipelines at scale.
● Work with GCP services: Private Service Connect, Cloud Run, Cloud Functions, Pub/Sub, Cloud Storage, and networking in general.
● Collaborate with Product Management and Product Engineering teams to drive excellence in
Google Cloud products and features.
● Ensure efficient data storage and processing functions in accordance with company security policies and cloud security best practices.
● Ensure scaled database setup/monitoring with near-zero downtime.
Key Skills:
● Hands-on software development experience in Python, Node.js, or Java
● 5+ years of Linux/Unix administration covering the monitoring, reliability, and security of Linux-based, high-traffic online services and web/eCommerce properties
● 5+ years of production experience in large-scale cloud-based Infrastructure (GCP preferred)
● Strong experience with log analysis and monitoring tools such as CloudWatch, Splunk, Dynatrace, Nagios, etc.
● Hands-on experience with AWS Cloud – EC2, S3 buckets, RDS (see the sketch after this list)
● Hands-on experience with Infrastructure as Code (e.g., CloudFormation, ARM, Terraform, Ansible, Chef, Puppet) and version control tools
● Hands-on experience with configuration management (Chef/Ansible)
● Experience in designing High Availability infrastructure and planning for Disaster Recovery solutions
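As a rough illustration of the AWS bullet above, here is a minimal, hedged Python sketch of the kind of day-to-day checks that hands-on EC2/S3/RDS experience implies. It uses boto3; the region and resource names are illustrative assumptions, not part of the posting.

```python
# Minimal sketch of routine AWS checks with boto3 (region and names are illustrative).
# Assumes credentials are already configured (env vars, ~/.aws/credentials, or an instance role).
import boto3

def summarize_environment(region: str = "ap-south-1") -> None:
    ec2 = boto3.client("ec2", region_name=region)
    s3 = boto3.client("s3")
    rds = boto3.client("rds", region_name=region)

    # Running EC2 instances
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    instances = [i for r in reservations for i in r["Instances"]]
    print(f"Running EC2 instances: {len(instances)}")

    # S3 buckets
    buckets = s3.list_buckets()["Buckets"]
    print(f"S3 buckets: {[b['Name'] for b in buckets]}")

    # RDS instances and their status
    for db in rds.describe_db_instances()["DBInstances"]:
        print(f"RDS {db['DBInstanceIdentifier']}: {db['DBInstanceStatus']}")

if __name__ == "__main__":
    summarize_environment()
```

In practice this kind of logic would live in a scheduled job or pipeline step rather than be run ad hoc.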
Regards
Team Merito

Similar jobs
Are you an experienced Infrastructure/DevOps Engineer looking for an exciting remote opportunity to design, automate, and scale modern cloud environments? We’re seeking a skilled engineer with strong expertise in Terraform and DevOps practices to join our growing team. If you’re passionate about automation, cloud infrastructure, and CI/CD pipelines, we’d love to hear from you!
Key Responsibilities:
- Design, implement, and manage cloud infrastructure using Terraform (IaC).
- Build and maintain CI/CD pipelines for seamless application deployment.
- Ensure scalability, reliability, and security of cloud-based systems.
- Collaborate with developers and QA to optimize environments and workflows.
- Automate infrastructure provisioning, monitoring, and scaling (see the sketch after this list).
- Troubleshoot infrastructure and deployment issues quickly and effectively.
- Stay up to date with emerging DevOps tools, practices, and cloud technologies.
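Since the responsibilities above centre on Terraform-based provisioning and automation (see the forward reference in the list), here is a hedged sketch of a Python pipeline step that drives Terraform non-interactively. The working directory and the decision to gate apply behind a flag are illustrative assumptions.

```python
# Sketch of a CI step that runs Terraform non-interactively (paths/flags are illustrative).
# Assumes the terraform binary is on PATH and cloud credentials are provided by the CI runner.
import subprocess
import sys

def run(cmd: list[str], cwd: str) -> None:
    print(f"+ {' '.join(cmd)}")
    subprocess.run(cmd, cwd=cwd, check=True)

def plan_and_apply(workdir: str = "infra/", auto_apply: bool = False) -> None:
    run(["terraform", "init", "-input=false"], workdir)
    run(["terraform", "plan", "-input=false", "-out=tfplan"], workdir)
    if auto_apply:
        # Apply exactly what was planned; check=True fails the pipeline on any error.
        run(["terraform", "apply", "-input=false", "tfplan"], workdir)

if __name__ == "__main__":
    plan_and_apply(auto_apply="--apply" in sys.argv)
```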
Requirements:
- Minimum 5+ years of professional experience in DevOps or Infrastructure Engineering.
- Strong expertise in Terraform and Infrastructure as Code (IaC).
- Hands-on experience with AWS / Azure / GCP (at least one cloud platform).
- Proficiency in CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD, etc.).
- Experience with Docker, Kubernetes, and container orchestration.
- Strong knowledge of Linux systems, networking, and security best practices.
- Familiarity with monitoring & logging tools (Prometheus, Grafana, ELK, etc.).
- Scripting experience (Bash, Python, or similar).
- Excellent problem-solving skills and ability to work in remote teams.
Perks and Benefits:
- Competitive salary with remote work flexibility.
- Opportunity to work with global clients on modern infrastructure.
- Growth and learning opportunities in cutting-edge DevOps practices.
- Collaborative team culture that values automation and innovation.
Key Skills Required:
· You will be part of the DevOps engineering team, configuring project environments, troubleshooting integration issues across different systems, and building new features for the next generation of cloud recovery services and managed services.
· You will directly guide the technical strategy for our clients and build out a new DevOps capability within the company to improve our business relevance for customers.
· You will coordinate with the Cloud and Data teams on their requirements, verify the configurations required for each production server, and come up with scalable solutions.
· You will be responsible for reviewing the infrastructure and configuration of microservices, as well as the packaging and deployment of applications.
To be the right fit, you'll need:
· Expertise in cloud services such as AWS.
· Experience in Terraform scripting.
· Experience in container technology like Docker and orchestration like Kubernetes.
· Good knowledge of CI/CD tools such as Jenkins, Bamboo, etc.
· Experience with version control systems like Git, build tools (Maven, Ant, Gradle), and cloud automation tools (Chef, Puppet, Ansible)
General Description: Owns all technical aspects of software development for assigned applications
Participates in the design and development of systems & application programs
Functions as Senior member of an agile team and helps drive consistent development practices – tools, common components, and documentation
Required Skills:
In depth experience configuring and administering EKS clusters in AWS.
In depth experience in configuring **Splunk SaaS** in AWS environments, especially in **EKS**
In depth understanding of OpenTelemetry and configuration of **OpenTelemetry Collectors** (see the sketch after this list)
In depth knowledge of observability concepts and strong troubleshooting experience.
Experience in implementing comprehensive monitoring and logging solutions in AWS using **CloudWatch**.
Experience in **Terraform** and Infrastructure as code.
Experience in **Helm**
Strong scripting skills in Shell and/or Python.
Experience with large-scale distributed systems and architecture knowledge (Linux/UNIX and Windows operating systems, networking, storage) in a cloud computing or traditional IT infrastructure environment.
Must have a good understanding of cloud concepts (Storage /compute/network).
Experience collaborating with several cross-functional teams to architect observability pipelines for various GCP services like GKE, Cloud Run, BigQuery, etc.
Experience with Git and GitHub.
Experience with code build and deployment using GitHub Actions and Artifact Registry.
Proficient in developing and maintaining technical documentation, ADRs, and runbooks.
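To make the OpenTelemetry Collector requirement concrete, a minimal, hedged sketch of a Python service exporting traces to a Collector over OTLP is shown below. The endpoint and service name are illustrative assumptions, and it presumes the opentelemetry-sdk and opentelemetry-exporter-otlp packages are installed; the Collector would then forward telemetry on to Splunk or CloudWatch.

```python
# Minimal tracing setup that ships spans to an OpenTelemetry Collector over OTLP/gRPC.
# Endpoint and service name are illustrative; the Collector handles onward export.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "example-service"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://otel-collector:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("handle-request") as span:
    span.set_attribute("request.id", "demo-123")
    # ... application work happens here ...
```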
Please Apply - https://zrec.in/RZ7zE?source=CareerSite
About Us
Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt a DevOps culture by focusing on a long-term DevOps roadmap: we identify the technical and cultural issues that arise on the journey to successfully implementing DevOps practices and work with the respective teams to fix them and increase overall productivity. We also run training sessions for developers to help them understand the importance of DevOps. Our services include DevOps, DevSecOps, FinOps, Cost Optimization, CI/CD, Observability, Cloud Security, Containerization, Cloud Migration, Site Reliability, Performance Optimization, SIEM and SecOps, Serverless Automation, Well-Architected Reviews, MLOps, and Governance, Risk & Compliance. We assess a company's technology architecture, security, governance, compliance, and DevOps maturity model, and help optimize cloud cost, streamline the technology architecture, and set up processes that improve the availability and reliability of its websites and applications. We also set up tools for monitoring, logging, and observability, with a focus on bringing a DevOps culture to the organization to improve its efficiency and delivery.
Job Description
Job Title: DevOps Engineer GCP
Department: Technology
Location: Gurgaon
Work Mode: On-site
Working Hours: 10 AM - 7 PM
Terms: Permanent
Experience: 2-4 years
Education: B.Tech/MCA/BCA
Notice Period: Immediately
Infra360.io is searching for a DevOps Engineer to lead our group of IT specialists in maintaining and improving our software infrastructure. You'll collaborate with software engineers, QA engineers, and other IT professionals in deploying, automating, and managing the software infrastructure. As a DevOps Engineer, you will also be responsible for setting up CI/CD pipelines, monitoring programs, and cloud infrastructure.
Below is a detailed description of the roles, responsibilities, and expectations for the role.
Tech Stack:
- Kubernetes: Deep understanding of Kubernetes clusters, container orchestration, and its architecture.
- Terraform: Extensive hands-on experience with Infrastructure as Code (IaC) using Terraform for managing cloud resources.
- ArgoCD: Experience in continuous deployment and using ArgoCD to maintain GitOps workflows.
- Helm: Expertise in Helm for managing Kubernetes applications.
- Cloud Platforms: Expertise in GCP; experience with AWS or Azure will be an added advantage.
- Debugging and Troubleshooting: The DevOps Engineer must be proficient in identifying and resolving complex issues in a distributed environment, ranging from networking issues to misconfigurations in infrastructure or application components (see the sketch after this list).
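As a hedged illustration of the debugging and troubleshooting expectation above, the sketch below uses the official Python kubernetes client to surface pods that are not Running, which is often the first step of an RCA. The kubeconfig assumption and phase filter are illustrative, not prescribed by the posting.

```python
# Sketch: list pods that are not Running/Succeeded across all namespaces,
# a common first step when debugging a cluster. Assumes a valid kubeconfig.
from kubernetes import client, config

def unhealthy_pods() -> list[str]:
    config.load_kube_config()  # inside a cluster, use config.load_incluster_config()
    v1 = client.CoreV1Api()
    bad = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            bad.append(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
    return bad

if __name__ == "__main__":
    for line in unhealthy_pods():
        print(line)
```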
Key Responsibilities:
- CI/CD and configuration management
- Doing RCA of production issues and providing resolution
- Setting up failover, DR, backups, logging, monitoring, and alerting
- Containerizing different applications on the Kubernetes platform
- Capacity planning for different environments' infrastructure
- Ensuring zero outages of critical services
- Database administration of SQL and NoSQL databases
- Infrastructure as Code (IaC)
- Keeping the cost of the infrastructure to the minimum
- Setting up the right set of security measures
Ideal Candidate Profile:
- A graduation/post-graduation degree in Computer Science or related fields
- 2-4 years of strong DevOps experience with the Linux environment.
- Strong interest in working in our tech stack
- Excellent communication skills
- Able to work with minimal supervision; a self-starter
- Hands-on experience with at least one scripting language - Bash, Python, Go, etc.
- Experience with version control systems like Git
- Strong experience with GCP.
- Strong experience with managing Production Systems day in and day out
- Experience in finding and fixing issues across different layers of the architecture in a production environment
- Knowledge of SQL and NoSQL databases, ElasticSearch, Solr etc.
- Knowledge of Networking, Firewalls, load balancers, Nginx, Apache etc.
- Experience in automation tools like Ansible/SaltStack and Jenkins
- Experience in Docker/Kubernetes platform and managing OpenStack (desirable)
- Experience with HashiCorp tools, e.g., Vault, Vagrant, Terraform, Consul, along with VirtualBox, etc. (desirable)
- Experience with managing/mentoring a small team of 2-3 people (desirable)
- Experience in Monitoring tools like Prometheus/Grafana/Elastic APM.
- Experience in logging tools like ELK/Loki.
Hi,
We are looking for a candidate with experience in DevSecOps.
Please find the JD below for your reference.
Responsibilities:
Execute shell scripts for seamless automation and system management.
Implement infrastructure as code using Terraform for AWS, Kubernetes, Helm, kustomize, and kubectl.
Oversee AWS security groups, VPC configurations, and utilize Aviatrix for efficient network orchestration.
Contribute to the OpenTelemetry Collector for enhanced observability.
Implement microsegmentation using AWS native resources and Aviatrix for commercial routes.
Enforce policies through Open Policy Agent (OPA) integration.
Develop and maintain comprehensive runbooks for standard operating procedures.
Utilize packet tracing for network analysis and security optimization.
Apply OWASP tools and practices for robust web application security.
Integrate container vulnerability scanning tools seamlessly within CI/CD pipelines (see the sketch after this list).
Define security requirements for source code repositories, binary repositories, and secrets managers in CI/CD pipelines.
Collaborate with software and platform engineers to infuse security principles into DevOps teams.
Regularly monitor and report project status to the management team.
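For the vulnerability-scanning responsibility referenced above, here is a hedged sketch of a Python CI step that gates a build on scanner findings. Trivy is only an assumed example of such a tool, and the image name is illustrative; swap in whatever scanner and severity policy the pipeline actually standardizes on.

```python
# Sketch of a CI gate: scan a container image and fail the build on HIGH/CRITICAL findings.
# Trivy is an assumed example scanner; the image name is illustrative.
import subprocess
import sys

def scan_image(image: str) -> int:
    # --exit-code 1 makes trivy return non-zero when findings match the severity filter.
    result = subprocess.run(
        ["trivy", "image", "--severity", "HIGH,CRITICAL", "--exit-code", "1", image]
    )
    return result.returncode

if __name__ == "__main__":
    image = sys.argv[1] if len(sys.argv) > 1 else "registry.example.com/app:latest"
    sys.exit(scan_image(image))
```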
Qualifications:
Proficient in shell scripting and automation.
Strong command of Terraform, AWS, Kubernetes, Helm, kustomize, and kubectl.
Deep understanding of AWS security practices, VPC configurations, and Aviatrix.
Familiarity with OpenTelemetry for observability and OPA for policy enforcement.
Experience in packet tracing for network analysis.
Practical application of OWASP tools and web application security.
Integration of container vulnerability scanning tools within CI/CD pipelines.
Proven ability to define security requirements for source code repositories, binary repositories, and secrets managers in CI/CD pipelines.
Collaboration expertise with DevOps teams for security integration.
Regular monitoring and reporting capabilities.
Site Reliability Engineering experience.
Hands-on proficiency with source code management tools, especially Git.
Cloud platform expertise (AWS, Azure, or GCP) with hands-on experience in deploying and managing applications.
Please send across your updated profile.
Required Skills
• Automation is a part of your daily functions, so thorough familiarity with Unix Bourne shell scripting and Python is a critical survival skill.
• Integration and maintenance of automated tools
• Strong analytical and problem-solving skills
• Working experience with source control tools such as Git/GitHub/GitLab/TFS
• Have experience with modern virtualization technologies (Docker, KVM, AWS, OpenStack, or any orchestration platforms)
• Automation of deployment, customization, upgrades, and monitoring through modern DevOps tools (Ansible, Kubernetes, OpenShift, etc.)
• Advanced Linux admin experience
• Using Jenkins or similar tools
• Deep understanding of container orchestration (preferably Kubernetes)
• Strong knowledge of object storage (preferably Ceph on Rook)
• Experience in installing, managing & tuning microservices environments using Kubernetes & Docker both on-premise and on the cloud.
• Experience in deploying and managing Spring Boot applications.
• Experience in deploying and managing Python applications using Django, FastAPI, or Flask.
• Experience in deploying machine learning pipelines/data pipelines using Airflow/Kubeflow/MLflow.
• Experience with web servers and reverse proxies like Nginx, Apache HTTP Server, and HAProxy
• Experience in monitoring tools like Prometheus and Grafana (see the sketch after this list).
• Experience in provisioning & maintaining SQL/NoSQL databases.
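To ground the Prometheus/Grafana bullet referenced above, a minimal, hedged Python sketch that queries the Prometheus HTTP API for failing scrape targets is shown below. The Prometheus URL is an illustrative assumption; a real alerting setup would normally rely on Alertmanager rather than ad-hoc scripts.

```python
# Sketch: ask Prometheus which scrape targets are down via its HTTP API.
# The Prometheus URL is illustrative; requires the `requests` package.
import requests

PROM_URL = "http://prometheus.example.internal:9090"

def down_targets() -> list[str]:
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": "up == 0"}, timeout=10)
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    return [r["metric"].get("instance", "<unknown>") for r in results]

if __name__ == "__main__":
    for target in down_targets():
        print(f"DOWN: {target}")
```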
Desired Skills
• Configuration software: Ansible
• Excellent communication and collaboration skills
• Good experience with networking technologies like load balancers, ACLs, firewalls, VIPs, and DNS
• Programmatic experience with AWS, DO, or GCP storage & machine images
• Experience on various Linux distributions
• Knowledge of Azure DevOps Server
• Docker management and troubleshooting
• Familiarity with micro-services and RESTful systems
• AWS / GCP / Azure certification
• Interact with Engineering to support, maintain, and design backend infrastructure for product support
• Create fully automated global cloud infrastructure that spans multiple regions.
• A great attitude toward learning the newest technologies, and a team player
Description
DevOps Engineer / SRE
- Understanding of the maintenance of existing systems (virtual machines) and the Linux stack
- Experience running, operating, and maintaining Kubernetes pods
- Strong Scripting skills
- Experience in AWS
- Knowledge of configuring/optimizing open source tools like Kafka, etc.
- Strong automation maintenance - ability to identify opportunities to speed up build and deploy process with strong validation and automation
- Optimizing and standardizing monitoring and alerting.
- Experience in Google cloud platform
- Experience/ Knowledge in Python will be an added advantage
- Experience with monitoring and infrastructure tools like Jenkins, Kubernetes, Nagios, Terraform, etc.
Please find the JD below:
- Candidate should have good Platform experience on Azure with Terraform.
- The DevOps engineer needs to help developers create the pipelines and K8s deployment manifests.
- Good to have experience on migrating data from (AWS) to Azure.
- Manage and automate infrastructure using Terraform. Jenkins is the key CI/CD tool we use, and it will be used to run these Terraform scripts.
- VMs to be provisioned on Azure Cloud and managed.
- Good hands-on experience of Networking on Cloud is required.
- Ability to set up databases on VMs as well as managed databases; cloud-hosted microservices must be set up properly to communicate with the DB services.
- Kubernetes, Storage, Key Vault, Networking (load balancing and routing), and VMs are the key areas of infrastructure expertise that are essential.
- The requirement is to administer a Kubernetes cluster end to end (application deployment, managing namespaces, load balancing, policy setup, using blue-green/canary deployment models, etc.).
- Experience in AWS is desirable.
- Python experience is optional; however, PowerShell is mandatory.
- Know-how on the use of GitHub
- Administration of Azure Kubernetes services
The brand is associated with some of the major icons across categories and has tie-ups with industries covering fashion, sports, and, of course, music. The founders are marketing grads with vast experience in consumer lifestyle products and other major brands. With their vigorous efforts toward quality and marketing, they have been able to strike a chord with major e-commerce brands and even consumers.
What you will do:
- Defining and documenting best practices and strategies regarding application deployment and infrastructure maintenance
- Providing guidance, thought leadership and mentorship to development teams to build cloud competencies
- Ensuring application performance, uptime, and scale, maintaining high standards of code quality and thoughtful design
- Managing cloud environments in accordance with company security guidelines
- Developing and implementing technical efforts to design, build and deploy AWS applications at the direction of lead architects, including large-scale data processing, computationally intensive statistical modeling and advanced analytics
- Participating in all aspects of the software development life cycle for AWS solutions, including planning, requirements, development, testing, and quality assurance
- Troubleshooting incidents, identifying root cause, fixing and documenting problems and implementing preventive measures
- Educating teams on the implementation of new cloud-based initiatives, providing associated training as required
Desired Candidate Profile
What you need to have:
- Bachelor's degree in Computer Science or Information Technology
- 2+ years of experience as an architect designing, developing, and implementing cloud solutions on AWS platforms
- Experience in several of the following areas: database architecture, ETL, business intelligence, big data, machine learning, advanced analytics
- Proven ability to collaborate with multi-disciplinary teams of business analysts, developers, data scientists and subject matter experts
- Self-motivation with the ability to drive features to delivery
- Strong analytical and problem solving skills
- Excellent oral and written communication skills
- Good logical sense, strong technical skills and the ability to learn new technologies quickly
- AWS certifications are a plus
- Knowledge of web services, API, REST, and RPC
• At least 4 years of hands-on experience with cloud infrastructure on GCP
• Hands-on-Experience on Kubernetes is a mandate
• Exposure to configuration management and orchestration tools at scale (e.g. Terraform, Ansible, Packer)
• Knowledge and hands-on experience with DevOps tools (e.g. Jenkins, Groovy, and Gradle)
• Knowledge and hands-on experience with various platforms (e.g. GitLab, CircleCI, and Spinnaker)
• Familiarity with monitoring and alerting tools (e.g. CloudWatch, ELK stack, Prometheus)
• Proven ability to work independently or as an integral member of a team
Preferable Skills:
• Familiarity with standard IT security practices such as encryption, credentials, and key management.
• Proven experience with various coding languages (Java, Python) to support DevOps operations and cloud transformation
• Familiarity and knowledge of the web standards (e.g. REST APIs, web security mechanisms)
• Hands on experience with GCP
• Experience in performance tuning, services outage management and troubleshooting.
Attributes:
• Good verbal and written communication skills
• Exceptional leadership, time management, and organizational skills
• Ability to operate independently and make decisions with little direct supervision

