
- Building and setting up new development tools and infrastructure
- Understanding the needs of stakeholders and conveying this to developers
- Working on ways to automate and improve development and release processes
- Testing and examining code written by others and analyzing results
- Ensuring that systems are safe and secure against cybersecurity threats
- Identifying technical problems and developing software updates and ‘fixes’
- Working with software developers and software engineers to ensure that development follows established processes and works as intended
- Planning out projects and being involved in project management decisions
- BE / MCA / B.Sc-IT / B.Tech in Computer Science or a related field.
- 4+ years of overall development experience.
- Strong understanding of cloud deployment and setup
- Hands-on experience with tools like Jenkins, Gradle etc.
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate deployment
- Design procedures for system troubleshooting and maintenance
Skills and Qualifications
- Proficient with git and git workflows
- Working knowledge of databases and SQL
- Problem-solving attitude
- Collaborative team spirit
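The deployment-automation and troubleshooting duties above are often implemented as a step runner that can undo partial work. Below is a minimal, hypothetical Python sketch (the step/rollback structure is illustrative, not a prescribed tool):

```python
# Hedged sketch: a deployment-step runner with rollback on failure.
# Step names, actions, and rollbacks are caller-supplied callables.

def run_deployment(steps):
    """Run (name, action, rollback) steps in order.

    On the first failing action, run the rollbacks of the steps that
    already succeeded, in reverse order, then re-raise the error.
    Returns the list of completed step names on success.
    """
    completed = []
    try:
        for name, action, rollback in steps:
            action()
            completed.append((name, rollback))
    except Exception:
        # Undo in reverse order so dependencies unwind cleanly.
        for _, rollback in reversed(completed):
            rollback()
        raise
    return [name for name, _ in completed]
```

In practice each action would shell out to a build or deploy command; the rollback discipline is the part worth keeping.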

REVIEW CRITERIA:
MANDATORY:
- Strong Hands-On AWS Cloud Engineering / DevOps Profile
- Mandatory (Experience 1): Must have 12+ years of experience in AWS Cloud Engineering / Cloud Operations / Application Support
- Mandatory (Experience 2): Must have strong hands-on experience supporting AWS production environments (EC2, VPC, IAM, S3, ALB, CloudWatch)
- Mandatory (Infrastructure as Code): Must have hands-on Infrastructure as Code experience using Terraform in production environments
- Mandatory (AWS Networking): Strong understanding of AWS networking and connectivity (VPC design, routing, NAT, load balancers, hybrid connectivity basics)
- Mandatory (Cost Optimization): Exposure to cost optimization and usage tracking in AWS environments
- Mandatory (Core Skills): Experience handling monitoring, alerts, incident management, and root cause analysis
- Mandatory (Soft Skills): Strong communication skills and stakeholder coordination skills
ROLE & RESPONSIBILITIES:
We are looking for a hands-on AWS Cloud Engineer to support day-to-day cloud operations, automation, and reliability of AWS environments. This role works closely with the Cloud Operations Lead, DevOps, Security, and Application teams to ensure stable, secure, and cost-effective cloud platforms.
KEY RESPONSIBILITIES:
- Operate and support AWS production environments across multiple accounts
- Manage infrastructure using Terraform and support CI/CD pipelines
- Support Amazon EKS clusters, upgrades, scaling, and troubleshooting
- Build and manage Docker images and push to Amazon ECR
- Monitor systems using CloudWatch and third-party tools; respond to incidents
- Support AWS networking (VPCs, NAT, Transit Gateway, VPN/DX)
- Assist with cost optimization, tagging, and governance standards
- Automate operational tasks using Python, Lambda, and Systems Manager
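The tagging and cost-governance responsibility above typically starts by finding resources that lack mandatory cost-allocation tags. A minimal sketch, assuming resource dicts shaped like the `Tags` lists boto3's `describe_*` calls return; the required tag keys below are illustrative, not an official policy:

```python
# Hedged sketch: flag resources missing mandatory cost-allocation tags.
# The dicts stand in for AWS API responses; no AWS calls are made here.

REQUIRED_TAGS = {"Owner", "CostCenter", "Environment"}

def untagged_resources(resources, required=REQUIRED_TAGS):
    """Return the ids of resources missing any required tag key."""
    missing = []
    for res in resources:
        tag_keys = {t["Key"] for t in res.get("Tags", [])}
        if not required <= tag_keys:  # not all required keys present
            missing.append(res["ResourceId"])
    return missing
```

A real job would feed this from paginated `describe_instances` / Resource Groups Tagging API output and open tickets or apply default tags for offenders.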
IDEAL CANDIDATE:
- Strong hands-on AWS experience (EC2, VPC, IAM, S3, ALB, CloudWatch)
- Experience with Terraform and Git-based workflows
- Hands-on experience with Kubernetes / EKS
- Experience with CI/CD tools (GitHub Actions, Jenkins, etc.)
- Scripting experience in Python or Bash
- Understanding of monitoring, incident management, and cloud security basics
NICE TO HAVE:
- AWS Associate-level certifications
- Experience with Karpenter, Prometheus, New Relic
- Exposure to FinOps and cost optimization practices
Key Responsibilities
- Automation & Reliability: Automate infrastructure and operational processes to ensure high reliability, scalability, and security.
- Cloud Infrastructure Design: Gather GCP infrastructure requirements, evaluate solution options, and implement best-fit cloud architectures.
- Infrastructure as Code (IaC): Design, develop, and maintain infrastructure using Terraform and Ansible.
- CI/CD Ownership: Build, manage, and maintain robust CI/CD pipelines using Jenkins, ensuring system reliability and performance.
- Container Orchestration: Manage Docker containers and self-managed Kubernetes clusters across multiple cloud environments.
- Monitoring & Observability: Implement and manage cloud-native monitoring solutions using Prometheus, Grafana, and the ELK stack.
- Proactive Issue Resolution: Troubleshoot and resolve infrastructure and application issues across development, testing, and production environments.
- Scripting & Automation: Develop efficient automation scripts using Python and one or more of Node.js, Go, or Shell scripting.
- Security Best Practices: Maintain and enhance the security of cloud services, Kubernetes clusters, and deployment pipelines.
- Cross-functional Collaboration: Work closely with engineering, product, and security teams to design and deploy secure, scalable infrastructure.
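Automation scripts of the kind listed above frequently need a retry-with-backoff helper for transient infrastructure failures. A minimal Python sketch (the attempt count and delays are illustrative defaults):

```python
import time

def retry(action, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call action(); on failure, retry with exponential backoff.

    Delays double each attempt (base_delay, 2*base_delay, ...).
    The final failure is re-raised to the caller.
    """
    for i in range(attempts):
        try:
            return action()
        except Exception:
            if i == attempts - 1:
                raise
            sleep(base_delay * (2 ** i))
```

The injectable `sleep` parameter keeps the helper testable; production code would usually also add jitter and restrict the caught exception types.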
Please Apply - https://zrec.in/RZ7zE?source=CareerSite
About Us
Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt a DevOps culture by focusing on a long-term DevOps roadmap. We identify technical and cultural issues in the journey of successfully implementing DevOps practices in an organization, and work with the respective teams to fix them and increase overall productivity. We also run training sessions for developers to help them understand the importance of DevOps.

We provide these services: DevOps, DevSecOps, FinOps, Cost Optimization, CI/CD, Observability, Cloud Security, Containerization, Cloud Migration, Site Reliability, Performance Optimization, SIEM and SecOps, Serverless Automation, Well-Architected Reviews, MLOps, and Governance, Risk & Compliance.

We assess a technology company's architecture, security, governance, compliance, and DevOps maturity model, and help it optimize cloud costs, streamline its technology architecture, and set up processes to improve the availability and reliability of its website and applications. We set up tools for monitoring, logging, and observability, and focus on bringing the DevOps culture into the organization to improve its efficiency and delivery.
Job Description
Job Title: DevOps Engineer GCP
Department: Technology
Location: Gurgaon
Work Mode: On-site
Working Hours: 10 AM - 7 PM
Terms: Permanent
Experience: 2-4 years
Education: B.Tech/MCA/BCA
Notice Period: Immediately
Infra360.io is searching for a DevOps Engineer to lead our group of IT specialists in maintaining and improving our software infrastructure. You'll collaborate with software engineers, QA engineers, and other IT professionals in deploying, automating, and managing the software infrastructure. As a DevOps engineer, you will also be responsible for setting up CI/CD pipelines, monitoring programs, and cloud infrastructure.
Below is a detailed description of the roles, responsibilities, and expectations for the role.
Tech Stack :
- Kubernetes: Deep understanding of Kubernetes clusters, container orchestration, and its architecture.
- Terraform: Extensive hands-on experience with Infrastructure as Code (IaC) using Terraform for managing cloud resources.
- ArgoCD: Experience in continuous deployment and using ArgoCD to maintain GitOps workflows.
- Helm: Expertise in Helm for managing Kubernetes applications.
- Cloud Platforms: Expertise in GCP, AWS or Azure will be an added advantage.
- Debugging and Troubleshooting: The DevOps Engineer must be proficient in identifying and resolving complex issues in a distributed environment, ranging from networking issues to misconfigurations in infrastructure or application components.
Key Responsibilities:
- CI/CD and configuration management
- Doing RCA of production issues and providing resolutions
- Setting up failover, DR, backups, logging, monitoring, and alerting
- Containerizing different applications on the Kubernetes platform
- Capacity planning of each environment's infrastructure
- Ensuring zero outages of critical services
- Database administration of SQL and NoSQL databases
- Infrastructure as Code (IaC)
- Keeping infrastructure costs to a minimum
- Setting up the right set of security measures
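Capacity planning, as listed in the responsibilities above, often reduces to projecting usage growth against provisioned capacity. A minimal sketch, assuming simple compound monthly growth (a deliberately naive model; real planning would use measured trends and percentiles):

```python
def capacity_headroom(current_usage, capacity, growth_rate, months):
    """Project usage forward with compound monthly growth.

    Returns the first month (1-based) in which projected usage
    exceeds capacity, or None if it stays within capacity over
    the planning horizon.
    """
    usage = current_usage
    for month in range(1, months + 1):
        usage *= (1 + growth_rate)
        if usage > capacity:
            return month
    return None
```

The returned month is the latest point by which extra capacity must be provisioned (or the growth curbed).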
Ideal Candidate Profile:
- A graduate/postgraduate degree in Computer Science or a related field
- 2-4 years of strong DevOps experience in Linux environments
- Strong interest in working with our tech stack
- Excellent communication skills
- Works with minimal supervision; a self-starter
- Hands-on experience with at least one scripting language: Bash, Python, Go, etc.
- Experience with version control systems like Git
- Strong experience with GCP
- Strong experience managing production systems day in and day out
- Experience finding and fixing issues across different layers of the architecture in a production environment
- Knowledge of SQL and NoSQL databases, Elasticsearch, Solr, etc.
- Knowledge of networking, firewalls, load balancers, Nginx, Apache, etc.
- Experience with automation tools like Ansible/SaltStack and Jenkins
- Experience with the Docker/Kubernetes platform and managing OpenStack (desirable)
- Experience with HashiCorp tools, i.e. Vault, Vagrant, Terraform, Consul, VirtualBox, etc. (desirable)
- Experience managing/mentoring a small team of 2-3 people (desirable)
- Experience with monitoring tools like Prometheus/Grafana/Elastic APM
- Experience with logging tools like ELK/Loki
We are hiring for a Lead DevOps Engineer in the Cloud domain with hands-on experience in Azure/GCP.
- Expertise in managing Cloud/VMware resources and good exposure to Docker/Kubernetes
- Working knowledge of operating systems (Unix, Linux, IBM AIX)
- Experience in installation, configuration, and management of Apache web server and Tomcat/JBoss
- Good understanding of the JVM, troubleshooting, and performance tuning through thread dump and log analysis
- Strong expertise in DevOps tools:
- Deployment (Chef/Puppet/Ansible/Nebula/Nolio)
- SCM (TFS, Git, ClearCase)
- Build tools (Ant, Maven, Make, Gradle)
- Artifact repositories (Nexus, JFrog Artifactory)
- CI tools (Jenkins, TeamCity)
- Experienced in scripting languages: Python, Ant, Bash, and Shell
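JVM troubleshooting via thread dumps, mentioned above, often begins by counting thread states in a jstack-style dump to spot, say, a pile-up of BLOCKED threads. A minimal Python sketch of that first triage step:

```python
import re
from collections import Counter

def thread_state_counts(dump_text):
    """Count thread states in a jstack-style thread dump.

    Matches lines such as:
        java.lang.Thread.State: BLOCKED (on object monitor)
    and tallies the state keyword (RUNNABLE, BLOCKED, WAITING,
    TIMED_WAITING, ...).
    """
    states = re.findall(r"java\.lang\.Thread\.State:\s+(\w+)", dump_text)
    return Counter(states)
```

Feeding it the output of `jstack <pid>` gives a quick picture before digging into individual stack traces.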
What will be required of you?
- Responsible for implementation and support of application/web server infrastructure for complex business applications
- Server configuration management, release management, deployments, automation & troubleshooting
- Set up and configure Development, Staging, UAT, and Production server environments for projects, and install/configure all dependencies using industry best practices
- Manage code repositories
- Manage, document, control, and improve development and release procedures
- Configure automated deployment across multiple environments
- Hands-on working experience with Azure or GCP
- Transfer knowledge of the implementation to the support team, and support any production issues until that handover is complete
Acceldata is creating the Data observability space. We make it possible for data-driven enterprises to effectively monitor, discover, and validate Data platforms at Petabyte scale. Our customers are Fortune 500 companies including Asia's largest telecom company, a unicorn fintech startup of India, and many more. We are lean, hungry, customer-obsessed, and growing fast. Our Solutions team values productivity, integrity, and pragmatism. We provide a flexible, remote-friendly work environment.
We are building software that can provide insights into companies' data operations and allows them to focus on delivering data reliably with speed and effectiveness. Join us in building an industry-leading data observability platform that focuses on ensuring data reliability from every spectrum (compute, data and pipeline) of a cloud or on-premise data platform.
Position Summary-
This role will support the customer implementation of a data quality and reliability product. The candidate is expected to install the product in the client environment, manage proofs of concept with prospects, become a product expert, and troubleshoot post-installation production issues. The role involves significant interaction with the client's data engineering team, and the candidate is expected to have good communication skills.
Required experience
- 6-7 years of experience providing engineering support to data domains/pipelines/data engineers
- Experience troubleshooting data issues, analyzing end-to-end data pipelines, and working with users to resolve issues
- Experience setting up enterprise security solutions, including active directories, firewalls, SSL certificates, Kerberos KDC servers, etc.
- Basic understanding of SQL
- Experience working with technologies like S3; Kubernetes experience preferred
- Databricks/Hadoop/Kafka experience preferred but not required

Job Description:
• Contribute to customer discussions when collecting requirements
• Engage in internal and customer POCs to realize the potential solutions envisaged for customers.
• Design/Develop/Migrate VRA blueprints and VRO workflows; strong hands-on knowledge in vROPS and integrations with application and VMware solutions.
• Develop automation scripts to support the design and implementation of VMware projects.
Qualification:
• Maintain current high-level, as well as in-depth, technical knowledge of the entire VMware product portfolio and future product direction
• Maintain deep technical and business knowledge of cloud computing and networking applications, industry directions, and trends.
• Experience with REST APIs and/or Python programming; TypeScript/Node.js backend experience
• Experience with Kubernetes
• Familiarity with DevOps tools like Ansible, Puppet, Terraform
• End-to-end experience in architecture, design, and development of the VMware Cloud Automation suite, with good exposure to VMware products and/or solutions.
• Hands-on experience in automation, coding, debugging, and release.
• Sound process knowledge from requirements gathering through implementation, deployment, and support.
• Experience in working with global teams, customers and partners with solid communication skills.
• VMware CMA certification would be a plus
• Academic background in MS/BE/B-Tech/ IT/CS/ECE/EE would be preferred.
Job Summary
You'll meticulously analyze project requirements and carry forward the development of highly robust, scalable, and easily maintainable backend applications. You'll work independently, with the support and opportunity to thrive in a fast-paced environment.
Responsibilities and Duties:
- building and setting up new development tools and infrastructure
- understanding the needs of stakeholders and conveying this to developers
- working on ways to automate and improve development and release processes
- testing and examining code written by others and analysing results
- ensuring that systems are safe and secure against cybersecurity threats
- identifying technical problems and developing software updates and ‘fixes’
- working with software developers and software engineers to ensure that development follows established processes and works as intended
- planning out projects and being involved in project management decisions
Skill Requirements:
- Managing GitHub (e.g. creating branches for test, QA, development, and production; creating release tags; resolving merge conflicts)
- Setting up servers in either AWS or Azure based on the project (test, development, QA, staging, and production)
- Configuring AWS S3 and S3 web hosting; archiving data from S3 to S3 Glacier
- Deploying the build (application) to the servers using AWS CI/CD and Jenkins (automated and manual)
- AWS networking and content delivery (VPC, Route 53, and CloudFront)
- Managing databases like RDS, Snowflake, Athena, Redis, and Elasticsearch
- Managing IAM roles and policies for services like Lambda, SNS, Amazon Cognito, Secrets Manager, Certificate Manager, GuardDuty, Inspector, EC2, and S3
- AWS analytics (Elasticsearch, Athena, Glue, and Kinesis)
- AWS containers (Elastic Container Registry, Elastic Container Service, Elastic Kubernetes Service, Docker Hub, and Docker Compose)
- AWS Auto Scaling groups (launch configurations, launch templates) and load balancers
- EBS (snapshots, volumes, and AMIs)
- AWS CI/CD buildspec scripting, Jenkins Groovy scripting, shell scripting, and Python scripting
- SageMaker, Textract, Forecast, Lightsail
- Android and iOS automation builds
- Monitoring tools like CloudWatch, CloudWatch log groups, alarms, metric dashboards, SNS (Simple Notification Service), and SES (Simple Email Service)
- Amazon MQ
- Operating systems: Linux and Windows
- X-Ray, Cloud9, CodeStar
- Fluent shell scripting
- Soft skills
- Scripting skills; good-to-have knowledge of Python, JavaScript, Java, and Node.js
- Knowledge of various DevOps tools and technologies
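The S3-to-Glacier archiving item above is normally handled server-side by an S3 lifecycle rule. As an illustration only, here is a sketch of the age-based selection logic, with object dicts mirroring the `Key`/`LastModified` fields that S3 list responses carry:

```python
from datetime import datetime, timedelta, timezone

def archive_candidates(objects, now, min_age_days=90):
    """Return keys of objects older than min_age_days.

    `objects` mimics S3 list-objects entries ({"Key": ..,
    "LastModified": datetime}); the 90-day threshold is an
    illustrative default, not an AWS requirement.
    """
    cutoff = now - timedelta(days=min_age_days)
    return [o["Key"] for o in objects if o["LastModified"] < cutoff]
```

In production you would express the same rule declaratively as a lifecycle transition to the Glacier storage class rather than scripting it.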
Qualifications and Skills
Job Type: Full-time
Experience: 4 - 7 yrs
Qualification: BE/ BTech/MCA.
Location: Bengaluru, Karnataka
- Develop and maintain IaC using Terraform and Ansible
- Draft design documents that translate requirements into code.
- Deal with challenges associated with scale.
- Assume responsibilities from technical design through technical client support.
- Manage expectations with internal stakeholders and context-switch in a fast-paced environment.
- Thrive in an environment that uses Elasticsearch extensively.
- Keep abreast of technology and contribute to the engineering strategy.
- Champion best development practices and provide mentorship
An AWS Certified Engineer with strong skills in:
- Terraform
- Ansible
- *nix and shell scripting
- Elasticsearch
- Circle CI
- CloudFormation
- Python
- Packer
- Docker
- Prometheus and Grafana
- Challenges of scale
- Production support
- Sharp analytical and problem-solving skills.
- Strong sense of ownership.
- Demonstrable desire to learn and grow.
- Excellent written and oral communication skills.
- Mature collaboration and mentoring abilities.
- Works independently without any supervision
- Works on continuous improvement of the products through innovation and learning, with a knack for benchmarking and optimization
- Experience in deploying highly complex, distributed transaction processing systems.
- Stay abreast with new innovations and the latest technology trends and explore ways of leveraging these for improving the product in alignment with the business.
- As a component owner whose component impacts multiple platforms (5-10-member team), work with customers to gather their requirements and deliver the end-to-end project.
Required Experience, Skills, and Qualifications
- 5+ years of experience as a DevOps Engineer. Experience with Golang is a plus
- At least one End to End CI/CD Implementation experience
- Excellent problem-solving and debugging skills in the DevOps area
- Good understanding of containerization (Docker/Kubernetes)
- Hands-on build/package tool experience
- Experience with AWS services: Glue, Athena, Lambda, EC2, RDS, EKS/ECS, ALB, VPC, SSM, Route 53
- Experience with setting up CI/CD pipeline for Glue jobs, Athena, Lambda functions
- Experience architecting interaction with services and application deployments on AWS
- Experience with Groovy and writing Jenkinsfile
- Experience with repository management, code scanning/linting, secure scanning tools
- Experience with deployments and application configuration on Kubernetes
- Experience with microservice orchestration tools (e.g. Kubernetes, Openshift, HashiCorp Nomad)
- Experience with time-series and document databases (e.g. Elasticsearch, InfluxDB, Prometheus)
- Experience with message buses (e.g. Apache Kafka, NATS)
- Experience with key-value stores and service discovery mechanisms (e.g. Redis, HashiCorp Consul, etc)
Below are the job details:
Role: DevOps Architect
Experience Level: 8-12 Years
Job Location: Hyderabad
Key Responsibilities :
Evaluate various DevOps tools/technologies, identify their strengths, and provide direction to the DevOps automation team
Bring out-of-the-box thinking to the DevOps automation platform implementation
Explore various tools and technologies and run POCs on integrating these tools
Evaluate backend APIs for various DevOps tools
Perform code reviews, keeping RASUI in context
Mentor the team on the various E2E integrations
Act as a liaison in evangelizing the currently implemented automation solution
Bring in various DevOps best practices/principles and participate in their adoption with various app teams
Must have:
Should possess a Bachelor's/Master's in Computer Science with a minimum of 8+ years of experience
Should possess a minimum of 3 years of strong experience in DevOps
Should possess expertise in using various DevOps tools, libraries, and APIs (Jenkins/JIRA/AWX/Nexus/GitHub/BitBucket/SonarQube)
Should possess expertise in optimizing the DevOps stack (Containers/Kubernetes/Monitoring)
2+ years of experience creating solutions and translating them to the development team
Should have a strong understanding of OOP and SDLC (Agile/SAFe standards)
Proficient in Python, with good knowledge of its ecosystem (IDEs and frameworks)
Proficient in various cloud platforms (Azure/AWS/Google Cloud Platform)
Proficient in various DevOps offerings (Pivotal/OpenStack/Azure DevOps)
Regards,
Talent acquisition team
Tetrasoft India
Stay home and Stay safe