
- Expertise in infrastructure and application design and architecture
- Expertise in AWS, operating systems, and networking
- Good exposure to infrastructure and application security
- Expertise in Python and shell scripting
- Proficient with DevOps tools: Terraform, Jenkins, Ansible, Docker, Git
- Solid background in systems engineering and operations
- Strong in DevOps methodologies and processes
- Strong in CI/CD pipelines and the SDLC
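Much of the scripting and automation work described above amounts to wrapping shell commands with retries and error handling. A minimal Python sketch of that pattern (the command and retry parameters are illustrative, not from the posting):

```python
import subprocess
import time

def run_with_retry(cmd, attempts=3, delay=0.1):
    """Run a shell command, retrying on non-zero exit; return captured stdout."""
    for attempt in range(1, attempts + 1):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout.strip()
        time.sleep(delay)  # brief pause before the next attempt
    raise RuntimeError(f"{cmd!r} failed after {attempts} attempts: {result.stderr.strip()}")

# Example: a trivial command that always succeeds
print(run_with_retry(["echo", "pipeline ok"]))
```

In a real pipeline the same wrapper would guard calls to tools like `terraform` or `ansible-playbook`.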

Key Responsibilities
- Design, implement, and maintain CI/CD pipelines using Azure DevOps
- Manage cloud infrastructure on Microsoft Azure including VMs, App Services, AKS, Networking, and Storage
- Implement Infrastructure as Code (IaC) using Terraform, ARM Templates, or Bicep
- Build and manage containerized environments using Docker and Kubernetes
- Deploy and manage Azure Kubernetes Service (AKS) clusters
- Automate configuration management and deployments
- Implement monitoring and logging solutions using Azure Monitor, Log Analytics, and Application Insights
- Integrate security best practices (DevSecOps) within CI/CD pipelines
- Collaborate with development teams to improve build, release, and deployment processes
- Troubleshoot production issues and optimize system performance
- Ensure high availability, scalability, and disaster recovery strategies
Required Skills & Qualifications
- 7+ years of experience in DevOps, Cloud Engineering, or Infrastructure Automation
- Strong hands-on experience with Microsoft Azure
- Expertise in CI/CD implementation using Azure DevOps
- Experience with scripting languages such as PowerShell, Bash, or Python
- Proficiency in Infrastructure as Code (Terraform, ARM, Bicep)
- Experience with container orchestration (Kubernetes/AKS)
- Knowledge of Git-based version control systems
- Experience with configuration management tools
- Strong understanding of networking, security, and cloud architecture
- Experience working in Agile/Scrum environments
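The Azure DevOps CI/CD work listed above is typically defined as a YAML pipeline. For illustration, here is the minimal shape of such a definition expressed as a Python dict (the trigger branch, agent image, and step commands are placeholders, not a complete schema):

```python
import json

# Minimal shape of an Azure Pipelines definition; the branch, pool image,
# and scripts are illustrative placeholders only.
pipeline = {
    "trigger": ["main"],
    "pool": {"vmImage": "ubuntu-latest"},
    "steps": [
        {"script": "terraform init && terraform plan", "displayName": "Plan infra"},
        {"script": "docker build -t myapp:$(Build.BuildId) .", "displayName": "Build image"},
    ],
}

print(json.dumps(pipeline, indent=2))
```

The same structure, serialized as YAML into `azure-pipelines.yml`, is what Azure DevOps reads from the repository root.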
• Support software build and release efforts:
• Create, set up, and maintain builds
• Review build results and resolve build problems
• Create and maintain build servers
• Plan, manage, and control product releases
• Validate, archive, and escrow product releases
• Maintain and administer configuration management tools, including source control, defect management, project management, and other systems.
• Develop scripts and programs to automate processes and integrate tools.
• Resolve help desk requests from worldwide product development staff.
• Participate in team and process improvement projects.
• Interact with product development teams to plan and implement tool and build improvements.
• Perform other duties as assigned.
While this job description reflects the anticipated requirements of the position, these requirements are subject to change based on the evolving needs of the business.
Required Skills
• TFS 2017 vNext Builds or Azure DevOps build processes
• Must have PowerShell 3.0+ scripting knowledge
• Exposure to build tools such as MSBuild, NAnt, and Xcode
• Exposure to creating and maintaining vCenter/VMware vSphere 6.5
• Hands-on experience with Windows Server 2012 and later, and basic knowledge of macOS
• Shell or batch scripting is good to have (optional)
Required Experience
Candidates for this position should hold the following qualifications to be considered as a suitable applicant. Please note that except where specified as “preferred,” or as a “plus,” all points listed below are considered minimum requirements.
• A bachelor's degree in a related discipline is strongly preferred
• 3 or more years of experience with software configuration management tools, concepts, and processes.
• Exposure to source control systems such as TFS, Git, or Subversion (optional)
• Familiarity with object-oriented concepts and programming in C# and PowerShell scripting.
• Experience working on Azure DevOps, vNext, or Jenkins builds
• Experience working with developers to resolve development issues related to source control systems.
Please Apply - https://zrec.in/sEzbp?source=CareerSite
About Us
Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt a DevOps culture by focusing on a long-term DevOps roadmap: we identify the technical and cultural issues that stand in the way of successfully implementing DevOps practices and work with the respective teams to fix them, increasing overall productivity. We also run training sessions for developers on the importance of DevOps.
We provide these services: DevOps, DevSecOps, FinOps, cost optimization, CI/CD, observability, cloud security, containerization, cloud migration, site reliability, performance optimization, SIEM and SecOps, serverless automation, Well-Architected Reviews, MLOps, and governance, risk & compliance. We assess a technology company's architecture, security, governance, compliance, and DevOps maturity, and help it optimize cloud costs, streamline its technology architecture, and set up processes that improve the availability and reliability of its websites and applications. We set up tools for monitoring, logging, and observability, and focus on bringing a DevOps culture to the organization to improve its efficiency and delivery.
Job Description
Job Title: DevOps Engineer Azure
Department: Technology
Location: Gurgaon
Work Mode: On-site
Working Hours: 10 AM - 7 PM
Terms: Permanent
Experience: 2-4 years
Education: B.Tech/MCA/BCA
Notice Period: Immediately
Infra360.io is searching for a DevOps Engineer to lead our group of IT specialists in maintaining and improving our software infrastructure. You'll collaborate with software engineers, QA engineers, and other IT professionals to deploy, automate, and manage the software infrastructure. As a DevOps Engineer, you will also be responsible for setting up CI/CD pipelines, monitoring, and cloud infrastructure.
Below is a detailed description of the role's responsibilities and expectations.
Tech Stack :
- Kubernetes: Deep understanding of Kubernetes clusters, container orchestration, and its architecture.
- Terraform: Extensive hands-on experience with Infrastructure as Code (IaC) using Terraform for managing cloud resources.
- ArgoCD: Experience in continuous deployment and using ArgoCD to maintain GitOps workflows.
- Helm: Expertise in Helm for managing Kubernetes applications.
- Cloud Platforms: Expertise in AWS, GCP or Azure will be an added advantage.
- Debugging and Troubleshooting: The DevOps Engineer must be proficient in identifying and resolving complex issues in a distributed environment, ranging from networking issues to misconfigurations in infrastructure or application components.
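For the Kubernetes and Helm items above, much of the day-to-day work is authoring and templating manifests. A sketch of a Deployment built programmatically (the app name, image, and replica count are placeholders):

```python
import json

def make_deployment(name, image, replicas=2):
    """Build a minimal Kubernetes Deployment manifest as a plain dict."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},  # must match the pod template labels
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

print(json.dumps(make_deployment("web", "nginx:1.25"), indent=2))
```

Helm charts generate essentially this structure from templates and values; Terraform's Kubernetes provider expresses the same fields in HCL.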
Key Responsibilities:
- CI/CD and configuration management
- Performing RCA of production issues and providing resolutions
- Setting up failover, DR, backups, logging, monitoring, and alerting
- Containerizing different applications on the Kubernetes platform
- Capacity planning for each environment's infrastructure
- Ensuring zero outages of critical services
- Database administration of SQL and NoSQL databases
- Infrastructure as Code (IaC)
- Keeping infrastructure costs to a minimum
- Setting up the right set of security measures
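The backup and DR responsibilities above usually involve a retention policy. A hedged sketch of "keep the most recent N" pruning logic (the dates and retention count are illustrative):

```python
from datetime import date

def backups_to_delete(backup_dates, keep=7):
    """Given backup dates, return the ones falling outside a keep-last-N policy."""
    ordered = sorted(backup_dates, reverse=True)  # newest first
    return ordered[keep:]                         # everything past the newest N

backups = [date(2024, 1, d) for d in range(1, 11)]  # ten daily backups
old = backups_to_delete(backups, keep=7)
print(old)  # the three oldest dates
```

Real policies are usually tiered (daily/weekly/monthly), but the core is the same sort-and-slice over backup timestamps.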
Ideal Candidate Profile:
- A graduate or postgraduate degree in Computer Science or a related field
- 2-4 years of strong DevOps experience in a Linux environment.
- Strong interest in working with our tech stack
- Excellent communication skills
- Able to work with minimal supervision as a self-starter
- Hands-on experience with at least one scripting language - Bash, Python, Go, etc.
- Experience with version control systems like Git
- Strong experience with Amazon Web Services (EC2, RDS, VPC, S3, Route 53, IAM, etc.)
- Strong experience managing production systems day in and day out
- Experience finding and fixing issues across the layers of a production architecture
- Knowledge of SQL and NoSQL databases, Elasticsearch, Solr, etc.
- Knowledge of networking, firewalls, load balancers, Nginx, Apache, etc.
- Experience with automation tools like Ansible/SaltStack and Jenkins
- Experience with the Docker/Kubernetes platform and managing OpenStack (desirable)
- Experience with HashiCorp tools (Vault, Vagrant, Terraform, Consul) and VirtualBox (desirable)
- Experience managing/mentoring a small team of 2-3 people (desirable)
- Experience with monitoring tools like Prometheus, Grafana, and Elastic APM
- Experience with logging tools like ELK/Lo
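Monitoring stacks like the Prometheus/Grafana setup listed above expose monotonically increasing counters, and alerting is typically built on their rate of change. A sketch of that rate calculation (the sample values are made up; a real implementation must also handle counter resets):

```python
def counter_rate(samples):
    """Per-second rate from (timestamp, counter_value) samples, Prometheus-style.

    Assumes the counter did not reset inside the window.
    """
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    return (v1 - v0) / (t1 - t0)

# Two samples of an http_requests_total-style counter, 60 s apart
rate = counter_rate([(0, 1200), (60, 1800)])
print(rate)  # 10.0 requests per second
```

This is conceptually what a PromQL expression like `rate(http_requests_total[5m])` computes over each series.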
Intuitive is the fastest-growing top-tier Cloud Solutions and Services company, supporting global enterprise customers across the Americas, Europe, and the Middle East.
Intuitive is looking for highly talented, hands-on Cloud Infrastructure Architects to help accelerate our growing Professional Services consulting Cloud & DevOps practice. This is an excellent opportunity to join Intuitive's global, world-class technology teams, working with some of the best and brightest engineers while developing your skills and furthering your career with some of the largest customers.
Job Description:
- Integrate gates into the CI/CD pipeline and push all flaws/issues to the developer's IDE (as far left as possible) - ideally in the code repo, but required by the time the code is in the artifact repository.
- Demonstrable experience in containerization (Docker) and orchestration (Kubernetes)
- Experience setting up self-managed Kubernetes clusters without using managed cloud offerings like EKS
- Experience working with AWS - managing AWS services: EC2, S3, CloudFront, VPC, SNS, Lambda, Auto Scaling, IAM, RDS, EBS, Kinesis, SQS, DynamoDB, ElastiCache, Redshift, CloudWatch, Amazon Inspector.
- Familiarity with Linux and UNIX systems (e.g. CentOS, Red Hat) and command-line system administration (Bash, Vim, SSH).
- Hands-on experience with configuration management of server farms (using tools such as Puppet, Chef, Ansible, etc.).
- Demonstrated understanding of ITIL methodologies; ITIL v3 or v4 certification
- Kubernetes CKA or CKAD certification is nice to have
Excellent communication skills
Open to working in the EST time zone
We are looking for a DevOps Engineer (individual contributor) to maintain and build upon our next-generation infrastructure. We aim to ensure that our systems are secure, reliable and high-performing by constantly striving to achieve best-in-class infrastructure and security by:
- Leveraging a variety of tools to ensure all configuration is codified (using tools like Terraform and Flux) and applied in a secure, repeatable way (via CI)
- Routinely identifying new technologies and processes that enable us to streamline our operations and improve overall security
- Holistically monitoring our overall DevOps setup and health to ensure our roadmap constantly delivers high-impact improvements
- Eliminating toil by automating as many operational aspects of our day-to-day work as possible using internally created, third party and/or open-source tools
- Maintaining a culture of empowerment and self-service by minimizing the friction developers face in understanding and using our infrastructure, through a combination of innovative tools, excellent documentation, and teamwork
Tech stack: Microservices primarily written in JavaScript, Kotlin, Scala, and Python. The majority of our infrastructure sits within EKS on AWS, using Istio. We use Terraform and Helm/Flux when working with AWS and EKS (k8s). Deployments are managed with a combination of Jenkins and Flux. We rely heavily on Kafka, Cassandra, Mongo, and Postgres, and are increasingly leveraging AWS-managed services (e.g. RDS, Lambda).
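In a microservice setup like the one described, transient failures between services are routine, so calls are commonly wrapped in retries with exponential backoff. A minimal sketch (the flaky dependency is simulated; no real service is called):

```python
import time

def call_with_backoff(fn, attempts=4, base_delay=0.01):
    """Retry fn() with exponentially growing delays between attempts."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** attempt))  # 0.01, 0.02, 0.04, ...

# Simulated dependency that fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(call_with_backoff(flaky))  # "ok" on the third attempt
```

Service meshes like the Istio deployment mentioned above can apply the same retry policy at the network layer, without application code.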
Acceldata is creating the data observability space. We make it possible for data-driven enterprises to effectively monitor, discover, and validate data platforms at petabyte scale. Our customers are Fortune 500 companies, including Asia's largest telecom company, a unicorn fintech startup in India, and many more. We are lean, hungry, customer-obsessed, and growing fast. Our Solutions team values productivity, integrity, and pragmatism. We provide a flexible, remote-friendly work environment.
We are building software that can provide insights into companies' data operations and allows them to focus on delivering data reliably with speed and effectiveness. Join us in building an industry-leading data observability platform that focuses on ensuring data reliability from every spectrum (compute, data and pipeline) of a cloud or on-premise data platform.
Position Summary
This role will support customer implementations of a data quality and reliability product. The candidate is expected to install the product in the client environment, manage proofs of concept with prospects, become a product expert, and troubleshoot post-installation production issues. The role involves significant interaction with the client's data engineering team, so the candidate is expected to have good communication skills.
Required experience
- 6-7 years of experience providing engineering support to data domains/pipelines/data engineers.
- Experience troubleshooting data issues, analyzing end-to-end data pipelines, and working with users to resolve issues
- Experience setting up enterprise security solutions, including active directories, firewalls, SSL certificates, Kerberos KDC servers, etc.
- Basic understanding of SQL
- Experience working with technologies like S3; Kubernetes experience preferred.
- Databricks/Hadoop/Kafka experience preferred but not required
Role and responsibilities
- Expertise in AWS (the most common services), Docker, and Kubernetes.
- Strong scripting knowledge, strong DevOps automation, good at Linux
- Hands-on with CI/CD (CircleCI preferred, but any CI/CD tool will do). Strong understanding of GitHub
- Strong understanding of AWS networking. Strong with security and certificates.
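Working across many AWS services often means handling ARNs, which follow AWS's documented `arn:partition:service:region:account-id:resource` format. A parsing sketch (the example ARNs are illustrative):

```python
def parse_arn(arn):
    """Split an ARN into its documented components.

    Format: arn:partition:service:region:account-id:resource
    (region and account-id may be empty, e.g. for S3 buckets).
    """
    parts = arn.split(":", 5)  # the resource part may itself contain colons
    if len(parts) != 6 or parts[0] != "arn":
        raise ValueError(f"not an ARN: {arn!r}")
    keys = ("prefix", "partition", "service", "region", "account_id", "resource")
    return dict(zip(keys, parts))

print(parse_arn("arn:aws:s3:::my-bucket")["service"])                 # s3
print(parse_arn("arn:aws:iam::123456789012:role/Admin")["resource"])  # role/Admin
```

This kind of helper shows up constantly in IAM policy tooling and tagging/audit scripts.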
Nice-to-have skills
- Involved in Product Engineering

Total Experience: 6 – 12 Years
Required Skills and Experience
- 3+ years of relevant experience with DevOps tools: Jenkins, Ansible, Chef, etc.
- 3+ years of experience in continuous integration/deployment and software tools development, with Python, shell scripts, etc.
- Building and running Docker images and deployment on Amazon ECS
- Working with AWS services (EC2, S3, ELB, VPC, RDS, Cloudwatch, ECS, ECR, EKS)
- Knowledge and experience working with container technologies such as Docker and Amazon ECS, EKS, Kubernetes
- Experience with source code and configuration management tools such as Git, Bitbucket, and Maven
- Ability to work with and support Linux environments (Ubuntu, Amazon Linux, CentOS)
- Knowledge of and experience with cloud orchestration tools such as AWS CloudFormation/Terraform, etc.
- Experience implementing "infrastructure as code", "pipeline as code", and "security as code" to enable continuous integration and delivery
- Understanding of IAM, RBAC, NACLs, and KMS
- Good communication skills
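The RBAC requirement above boils down to mapping roles to allowed actions and checking membership. A toy sketch (the role names and permissions are hypothetical, not from any real policy):

```python
# Hypothetical role-to-permission mapping, for illustration only
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "developer": {"read", "deploy"},
    "admin": {"read", "deploy", "manage-iam"},
}

def is_allowed(roles, action):
    """True if any of the subject's roles grants the requested action."""
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in roles)

print(is_allowed(["developer"], "deploy"))   # True
print(is_allowed(["viewer"], "manage-iam"))  # False
```

Kubernetes RBAC and AWS IAM both elaborate on this same idea: subjects bound to roles, roles carrying allowed actions over resources.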
Good to have:
- Strong understanding of security concepts and methodologies and how to apply them: SSH, public-key encryption, access credentials, certificates, etc.
- Knowledge of database administration such as MongoDB.
- Knowledge of maintaining and using tools such as Jira, Bitbucket, Confluence.
Responsibilities
- Work with leads and architects on the design and implementation of technical infrastructure, platforms, and tools that support modern best practices and improve development-team efficiency through automation, CI/CD pipelines, ease of access, and performance.
- Establish and promote DevOps thinking, guidelines, best practices, and standards.
- Contribute to architectural discussions, Agile software development process improvement, and DevOps best practices.
