
Job Title: AWS DevOps Engineer – Manager, Business Solutions
Location: Gurgaon, India
Experience Required: 8-12 years
Industry: IT
We are looking for a seasoned AWS DevOps Engineer with robust experience in AWS middleware services and MongoDB Cloud Infrastructure Management. The role involves designing, deploying, and maintaining secure, scalable, and high-availability infrastructure, along with developing efficient CI/CD pipelines and automating operational processes.
Key Deliverables (Essential functions & Responsibilities of the Job):
· Design, deploy, and manage AWS infrastructure, with a focus on middleware services such as API Gateway, Lambda, SQS, SNS, ECS, and EKS.
· Administer and optimize MongoDB Atlas or equivalent cloud-based MongoDB solutions for high availability, security, and performance.
· Develop, manage, and enhance CI/CD pipelines using tools like AWS CodePipeline, Jenkins, GitHub Actions, GitLab CI/CD, or Bitbucket Pipelines.
· Automate infrastructure provisioning using Terraform, AWS CloudFormation, or AWS CDK.
· Implement monitoring and logging solutions using CloudWatch, Prometheus, Grafana, or the ELK Stack.
· Enforce cloud security best practices — IAM, VPC setups, encryption, certificate management, and compliance controls.
· Work closely with development teams to improve application reliability, scalability, and performance.
· Manage containerized environments using Docker, Kubernetes (EKS), or AWS ECS.
· Perform MongoDB administration tasks such as backups, performance tuning, indexing, and sharding.
· Participate in on-call rotations to ensure 24/7 infrastructure availability and quick incident resolution.
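To give a flavour of the MongoDB tuning work above: index reviews often start from the slow-query log. The sketch below is purely illustrative (the log lines and threshold are invented, though the `attr.durationMillis`/`attr.ns` fields follow the structured JSON format mongod 4.4+ emits):

```python
import json
from collections import Counter

def slow_collections(log_lines, threshold_ms=100):
    """Count slow operations per collection (namespace) from
    JSON-formatted mongod log lines."""
    counts = Counter()
    for line in log_lines:
        entry = json.loads(line)
        attr = entry.get("attr", {})
        if attr.get("durationMillis", 0) >= threshold_ms:
            counts[attr.get("ns", "unknown")] += 1
    return counts.most_common()

# Fabricated log lines, not real mongod output
logs = [
    '{"attr": {"ns": "shop.orders", "durationMillis": 250}}',
    '{"attr": {"ns": "shop.orders", "durationMillis": 180}}',
    '{"attr": {"ns": "shop.users", "durationMillis": 40}}',
]
print(slow_collections(logs))  # [('shop.orders', 2)]
```

Collections that dominate such a count are the usual first candidates for indexing or query rewrites.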
Knowledge, Skills, and Abilities:
· 7+ years of hands-on AWS DevOps experience, especially with middleware services.
· Strong expertise in MongoDB Atlas or other cloud MongoDB services.
· Proficiency in Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or AWS CDK.
· Solid experience with CI/CD tools: Jenkins, CodePipeline, GitHub Actions, GitLab, Bitbucket, etc.
· Excellent scripting skills in Python, Bash, or PowerShell.
· Experience in containerization and orchestration: Docker, EKS, ECS.
· Familiarity with monitoring tools like CloudWatch, ELK, Prometheus, Grafana.
· Strong understanding of AWS networking and security: IAM, VPC, KMS, Security Groups.
· Ability to solve complex problems and thrive in a fast-paced environment.
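As a sample of the scripting expected here, a common automation idiom is retrying flaky or rate-limited cloud API calls with exponential backoff. This is a generic sketch, not tied to any particular AWS SDK call:

```python
import time
import random

def retry(fn, attempts=5, base_delay=0.5, jitter=0.1):
    """Call fn(), retrying on exception with exponential backoff
    plus jitter -- a typical guard around throttled cloud APIs."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # budget exhausted, surface the error
            time.sleep(base_delay * (2 ** i) + random.uniform(0, jitter))

# Simulated flaky call that succeeds on the third attempt
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("throttled")
    return "ok"

print(retry(flaky, base_delay=0.01, jitter=0))  # ok
```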
Preferred Qualifications
· AWS Certified DevOps Engineer – Professional or AWS Solutions Architect – Associate/Professional.
· MongoDB Certified DBA or Developer.
· Experience with serverless services like AWS Lambda, Step Functions.
· Exposure to multi-cloud or hybrid cloud environments.
Mail an updated resume with your current salary.
Email: jobs[at]glansolutions[dot]com
Satish; 88O 27 49 743
Google search: Glan management consultancy

Job Description:
• Drive end-to-end automation from GitHub/GitLab/Bitbucket through deployment and
observability, enabling SRE activities
• Guide operations support (setup, configuration, management, troubleshooting) of
digital platforms and applications
• Solid understanding of DevSecOps Workflows that support CI, CS, CD, CM, CT.
• Deploy, configure, and manage SaaS and PaaS cloud platform and applications
• Provide Level 1 (OS, patching) and Level 2 (app server instance troubleshooting) support
• DevOps programming: writing scripts; building operations, server instance, app, and DB
monitoring tools
• Set up and manage continuous build and dev project management
environments: Jenkins X/GitHub Actions/Tekton, Git, Jira
• Designing secure networks, systems, and application architectures
• Collaborating with cross-functional teams to ensure secure product development
• Disaster recovery, network forensics analysis, and pen-testing solutions
• Planning, researching, and developing security policies, standards, and procedures
• Awareness training of the workforce on information security standards, policies, and
best practices
• Installation and use of firewalls, data encryption and other security products and
procedures
• Maturity in understanding compliance, policy, and cloud governance, and the ability to
identify and execute automation.
• At Wesco, we discuss solutions more than problems. We celebrate innovation
and creativity.
Please Apply - https://zrec.in/L51Qf?source=CareerSite
About Us
Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt a DevOps culture by focusing on a long-term DevOps roadmap: we identify the technical and cultural issues in the journey of implementing DevOps practices and work with the respective teams to fix them and increase overall productivity. We also run training sessions for developers on the importance of DevOps.
Our services include DevOps, DevSecOps, FinOps, cost optimization, CI/CD, observability, cloud security, containerization, cloud migration, site reliability, performance optimization, SIEM and SecOps, serverless automation, Well-Architected Reviews, MLOps, and Governance, Risk & Compliance. We assess a company's technology architecture, security, governance, compliance, and DevOps maturity model, and help optimize its cloud cost, streamline its technology architecture, and set up processes to improve the availability and reliability of its website and applications. We set up tools for monitoring, logging, and observability, and focus on bringing a DevOps culture to the organization to improve its efficiency and delivery.
Job Description
Job Title: Senior DevOps Engineer (Infrastructure/SRE)
Department: Technology
Location: Gurgaon
Work Mode: On-site
Working Hours: 10 AM - 7 PM
Terms: Permanent
Experience: 4-6 years
Education: B.Tech/MCA
Notice Period: Immediately
About Us
At Infra360.io, we are a next-generation cloud consulting and services company committed to delivering comprehensive, 360-degree solutions for cloud, infrastructure, DevOps, and security. We partner with clients to transform and optimize their technology landscape, ensuring resilience, scalability, cost efficiency and innovation.
Our core services include Cloud Strategy, Site Reliability Engineering (SRE), DevOps, Cloud Security Posture Management (CSPM), and related Managed Services. We specialize in driving operational excellence across multi-cloud environments, helping businesses achieve their goals with agility and reliability.
We thrive on ownership, collaboration, problem-solving, and excellence, fostering an environment where innovation and continuous learning are at the forefront. Join us as we expand and redefine what’s possible in cloud technology and infrastructure.
Role Summary
We are looking for a Senior DevOps Engineer (Infrastructure) to design, automate, and manage cloud-based and datacentre infrastructure for diverse projects. The ideal candidate will have deep expertise in a public cloud platform (AWS, GCP, or Azure), with a strong focus on cost optimization, security best practices, and infrastructure automation using tools like Terraform and CI/CD pipelines.
This role involves designing scalable architectures (containers, serverless, and VMs), managing databases, and ensuring system observability with tools like Prometheus and Grafana. Strong leadership, client communication, and team mentoring skills are essential. Experience with VPN technologies and configuration management tools (Ansible, Helm) is also critical. Multi-cloud experience and familiarity with APM tools are a plus.
Ideal Candidate Profile
- Solid 4-6 years of experience as a DevOps engineer with a proven track record of architecting and automating solutions on the cloud
- Experience in troubleshooting production incidents and handling high-pressure situations.
- Strong leadership skills and the ability to mentor team members and provide guidance on best practices.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Extensive experience with Kubernetes, Terraform, ArgoCD, and Helm.
- Strong with at least one public cloud (AWS, GCP, or Azure)
- Strong with cost optimization and security best practices
- Strong with infrastructure automation using Terraform and CI/CD automation
- Strong with configuration management using Ansible, Helm, etc.
- Good with designing architectures (containers, serverless, VMs, etc.)
- Hands-on experience working on multiple projects
- Strong with client communication and requirements gathering
- Database management experience
- Good experience with Prometheus, Grafana & Alert Manager
- Able to manage multiple clients and take ownership of client issues.
- Experience with Git and coding best practices
- Proficiency in cloud networking, including VPCs, DNS, VPNs (OpenVPN, OpenSwan, Pritunl, Site-to-Site VPNs), load balancers, and firewalls, ensuring secure and efficient connectivity.
- Strong understanding of cloud security best practices, identity and access management (IAM), and compliance requirements for modern infrastructure.
Good to have
- Multi-cloud experience with AWS, GCP & Azure
- Experience with APM & observability tools like New Relic, Datadog, and OpenTelemetry
- Proficiency in scripting languages (Python, Go) for automation and tooling to improve infrastructure and application reliability.
Key Responsibilities
- Design and Development:
- Architect, design, and develop high-quality, scalable, and secure cloud-based software solutions.
- Collaborate with product and engineering teams to translate business requirements into technical specifications.
- Write clean, maintainable, and efficient code, following best practices and coding standards.
- Cloud Infrastructure:
- Develop and optimise cloud-native applications, leveraging cloud services like AWS, Azure, or Google Cloud Platform (GCP).
- Implement and manage CI/CD pipelines for automated deployment and testing.
- Ensure the security, reliability, and performance of cloud infrastructure.
- Technical Leadership:
- Mentor and guide junior engineers, providing technical leadership and fostering a collaborative team environment.
- Participate in code reviews, ensuring adherence to best practices and high-quality code delivery.
- Lead technical discussions and contribute to architectural decisions.
- Problem Solving and Troubleshooting:
- Identify, diagnose, and resolve complex software and infrastructure issues.
- Perform root cause analysis for production incidents and implement preventative measures.
- Continuous Improvement:
- Stay up-to-date with the latest industry trends, tools, and technologies in cloud computing and software engineering.
- Contribute to the continuous improvement of development processes, tools, and methodologies.
- Drive innovation by experimenting with new technologies and solutions to enhance the platform.
- Collaboration:
- Work closely with DevOps, QA, and other teams to ensure smooth integration and delivery of software releases.
- Communicate effectively with stakeholders, including technical and non-technical team members.
- Client Interaction & Management:
- Will serve as a direct point of contact for multiple clients.
- Able to handle the unique technical needs and challenges of two or more clients concurrently.
- Involve both direct interaction with clients and internal team coordination.
- Production Systems Management:
- Must have extensive experience in managing, monitoring, and debugging production environments.
- Will work on troubleshooting complex issues and ensure that production systems are running smoothly with minimal downtime.
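Production-system work of this kind is usually framed against an SLO. As an illustration (the function name and numbers are ours, not part of the role description), the downtime budget implied by an availability target works out as:

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime in the window for a given
    availability SLO, e.g. slo=0.999 for 'three nines'."""
    return (1 - slo) * window_days * 24 * 60

# A 99.9% SLO over 30 days allows roughly 43.2 minutes of downtime
print(round(error_budget_minutes(0.999), 1))  # 43.2
```

The remaining budget is what teams typically spend on planned maintenance and tolerate for incidents before freezing risky changes.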
Intuitive is the fastest-growing top-tier Cloud Solutions and Services company, supporting global enterprise customers across the Americas, Europe, and the Middle East.
Excellent communication skills
Open to working in the EST time zone (6 PM to 3 AM)
Technical Skills:
· In depth understanding of DevSecOps process and governance
· Understanding of various branching strategies
· Hands-on experience working with various testing and scanning tools (e.g., SonarQube, Snyk, Black Duck)
· Expertise working with one or more CI/CD platforms (e.g., Azure DevOps, GitLab, GitHub Actions)
· Expertise within one CSP and experience/working knowledge of a second CSP (Azure, AWS, GCP)
· Proficient with Terraform
· Hands on experience working with Kubernetes
· Proficient working with GIT version control
· Hands-on experience working with monitoring/observability tools (Splunk, Datadog, Dynatrace, etc.)
· Hands-on experience working with configuration management platforms (Chef, SaltStack, Ansible, etc.)
· Hands on experience with GitOps
Job Summary: We are looking for a senior DevOps engineer to help us build functional systems that improve the customer experience. The engineer will be responsible for deploying product updates, identifying production issues, and implementing integrations that meet our customers' needs.
Key Responsibilities
- Utilise various open source technologies & build independent web based tools, microservices and solutions
- Write deployment scripts
- Configure and manage data sources like MySQL, Mongo, ElasticSearch, etc
- Configure and deploy pipelines for various microservices using CI/CD tools
- Automated server monitoring setup & HA adherence
- Defining and setting development, test, release, update, and support processes for DevOps operation
- Coordination and communication within the team and with customers where integrations are required
- Work with company personnel to define technical problems and requirements, determine solutions, and implement those solutions.
- Work with product team to design automated pipelines to support SaaS delivery and operations in cloud platforms.
- Review and act on the service requests, infrastructure requests, and incidents logged by our implementation teams and clients. Identify, analyse, and resolve infrastructure vulnerabilities and application deployment issues.
- Modify and improve existing systems; suggest and implement process improvements.
- Collaborate with Software Engineers to help them deploy and operate different systems, also help to automate and streamline company's operations and processes.
- Developing interface simulators and designing automated module deployments.
Key Skills
- Bachelor's degree in software engineering, computer science, information technology, or information systems.
- 3+ years of experience in managing Linux based cloud microservices infrastructure (AWS, GCP or Azure)
- Hands-on experience with databases including MySQL.
- Experience with OS tuning and optimization for running databases and other scalable microservice solutions
- Proficient working with git repositories and git workflows
- Able to setup and manage CI/CD pipelines
- Excellent troubleshooting, working knowledge of various tools, open-source technologies, and cloud services
- Awareness of critical concepts in DevOps and Agile principles
- Sense of ownership and pride in your performance and its impact on company’s success
- Critical thinker and problem-solving skills
- Extensive experience in DevOps engineering, team management, and collaboration.
- Ability to install and configure software, gather test-stage data, and perform debugging.
- Ability to ensure smooth software deployment by writing script updates and running diagnostics.
- Proficiency in documenting processes and monitoring various metrics.
- Advanced knowledge of best practices related to data encryption and cybersecurity.
- Ability to keep up with software development trends and innovation.
- Exceptional interpersonal and communication skills
Experience:
- Must have 4+ years of experience as a DevOps Engineer in a SaaS product-based company
About SuperProcure
SuperProcure is a leading logistics and supply chain management solutions provider that aims to bring efficiency, transparency, and process optimization across the globe with the help of technology and data. SuperProcure started its journey in 2017 to help companies digitize their logistics operations. We created industry-recognized products which are now being used by 150+ companies like Tata Consumer Products, ITC, Flipkart, Tata Chemicals, PepsiCo, L&T Constructions, GMM Pfaudler, Havells, and others. They help achieve real-time visibility, 100% audit adherence and transparency, a 300% improvement in team productivity, up to 9% savings in freight costs, and many more benefits. SuperProcure is determined to make the lives of logistics teams easier, add value, and help establish a fair and beneficial process for the company.
SuperProcure is backed by IndiaMART and incubated under IIMCIP & Lumis, Supply Chain Labs. SuperProcure was also recognized as one of the Top 50 Emerging Start-ups of India at the NASSCOM Product Conclave in Bengaluru and was a part of the recently launched National Logistics Policy by the Prime Minister of India. More details about our journey can be found here
Life @ SuperProcure
SuperProcure operates in an extremely innovative, entrepreneurial, analytical, and problem-solving work culture. Every team member is fully motivated and committed to the company's vision and believes in getting things done. In our organization, every employee is the CEO of what he/she does; from conception to execution, the work needs to be thought through.
Our people are the core of our organization, and we believe in empowering them and making them a part of the daily decision-making, which impacts the business and shapes the company's overall strategy. They are constantly provided with resources, mentorship, and support from our highly energetic teams and leadership. SuperProcure is extremely inclusive and believes in collective success.
Looking for a bland, routine 9-6 job? PLEASE DO NOT APPLY. Looking for a job where you wake up and add significant value to a $180 Billion logistics industry everyday? DO APPLY.
OTHER DETAILS
- Engagement : Full Time
- No. of openings : 1
- CTC: 12-20 LPA
Projects you'll be working on:
- We're focused on enhancing our product for our clients and their users, as well as streamlining operations and improving our technical foundation.
- Writing scripts for procurement, configuration and deployment of instances (infrastructure automation) on GCP
- Managing Kubernetes cluster
- Manage products and services like VPC, Elasticsearch, Cloud Functions, RabbitMQ, Redis servers, Postgres infrastructure, App Engine, etc.
- Supporting developers in setting up infrastructure for services
- Manage and improve microservices infrastructure
- Managing high availability, low latency applications
- Focus on security best practices and assist in security and compliance activities
Requirements
- Minimum 3 years' experience as a DevOps engineer
- Minimum 1 year's experience with Kubernetes clusters (infrastructure as code, maintenance, and scalability)
- Bash expertise; professional programming experience in Node or Python
- Experience with setting up, configuring and using Jenkins or any CI tools, building CI/CD pipeline
- Experience setting up microservices architectures
- Experience with package management and deployments
- Thorough understanding of networking.
- Understanding of all common services and protocols
- Experience in web server configuration, monitoring, network design and high availability
- Thorough understanding of DNS, VPN, SSL
Technologies you'll work with:
- GKE, Prometheus, Grafana, Stackdriver
- ArgoCD and GitHub Actions
- NodeJS Backend
- Postgres, ElasticSearch, Redis, RabbitMQ
- Whatever else you decide - we're constantly re-evaluating our stack and tools
- Having prior experience with the technologies is a plus, but not mandatory for skilled candidates.
Benefits
- Remote Option - You can work from location of your choice :)
- Reimbursement of Home Office Setup
- Competitive Salary
- Friendly atmosphere
- Flexible paid vacation policy
- Cloud and virtualization-based technologies (Amazon Web Services (AWS), VMware).
- Java application server administration (WebLogic, WildFly, JBoss, Tomcat).
- Docker and Kubernetes (EKS)
- Linux/UNIX Administration (Amazon Linux and RedHat).
- Developing and supporting cloud infrastructure designs and implementations and guiding application development teams.
- Configuration management tools (Chef, Puppet, or Ansible).
- Log aggregation tools such as Elastic and/or Splunk.
- Automate infrastructure and application deployment-related tasks using Terraform.
- Automate repetitive tasks required to maintain a secure and up-to-date operational environment.
Responsibilities
- Build and support always-available private/public cloud-based software-as-a-service (SaaS) applications.
- Build AWS or other public cloud infrastructure using Terraform.
- Deploy and manage Kubernetes (EKS) based Docker applications in AWS.
- Create custom OS images using Packer.
- Create and revise infrastructure and architectural designs and implementation plans and guide the implementation with operations.
- Liaison between application development, infrastructure support, and tools (IT Services) teams.
- Development and documentation of Chef recipes and/or Ansible scripts. Support throughout the entire deployment lifecycle (development, quality assurance, and production).
- Help developers leverage infrastructure, application, and cloud platform features and functionality; participate in code and design reviews; and support developers by building CI/CD pipelines using Bamboo, Jenkins, or Spinnaker.
- Create knowledge-sharing presentations and documentation to help developers and operations teams understand and leverage the system's capabilities.
- Learn on the job and explore new technologies with little supervision.
- Leverage scripting (BASH, Perl, Ruby, Python) to build required automation and tools on an ad-hoc basis.
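As one example of the ad-hoc tooling this bullet describes, a script might flag filesystems running short on space from `df -P`-style output. The sample lines and threshold below are fabricated for illustration:

```python
def over_threshold(df_lines, limit=80):
    """Return (mount, use%) pairs whose use% exceeds limit,
    parsed from POSIX `df -P`-style output lines."""
    flagged = []
    for line in df_lines[1:]:  # skip the header row
        parts = line.split()
        mount, pct = parts[5], int(parts[4].rstrip("%"))
        if pct > limit:
            flagged.append((mount, pct))
    return flagged

# Fabricated df output for illustration
sample = [
    "Filesystem 1024-blocks Used Available Capacity Mounted",
    "/dev/sda1 100000 90000 10000 90% /",
    "/dev/sdb1 100000 20000 80000 20% /data",
]
print(over_threshold(sample))  # [('/', 90)]
```

In practice a cron job would feed this the live output of `df -P` and page or ticket on anything flagged.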
Who we have in mind:
- Solid experience in building a solution on AWS or other public cloud services using Terraform.
- Excellent problem-solving skills with a desire to take on responsibility.
- Extensive knowledge in containerized application and deployment in Kubernetes
- Extensive knowledge of the Linux operating system, RHEL preferred.
- Proficiency with shell scripting.
- Experience with Java application servers.
- Experience with Git and Subversion.
- Excellent written and verbal communication skills with the ability to communicate technical issues to non-technical and technical audiences.
- Experience working in a large-scale operational environment.
- Internet and operating system security fundamentals.
- Extensive knowledge of massively scalable systems. Linux operating system/application development desirable.
- Programming in scripting languages such as Python. Other object-oriented languages (C++, Java) are a plus.
- Experience with configuration management automation tools (Chef or Puppet).
- Experience with virtualization, preferably on multiple hypervisors.
- BS/MS in Computer Science or equivalent experience.
- Excellent written and verbal skills.
Education or Equivalent Experience:
- Bachelor's degree or equivalent education in related fields
- Certificates of training in associated fields/equipment
About Us
We have grown over 1400% in revenues in the last year.
Interface.ai provides an Intelligent Virtual Assistant (IVA) to FIs to automate calls and customer inquiries across multiple channels and engage their customers with financial insights and upsell/cross-sell.
Our IVA is transforming financial institutions’ call centers from a cost to a revenue center.
Our core technology is built 100% in-house with several breakthroughs in Natural Language Understanding. Our parser is built based on zero-shot learning that helps us to launch industry-specific IVA that can achieve over 90% accuracy on Day-1.
We are 45 people strong, with employees spread across India and US locations. Many come from ML teams at Apple, Microsoft, and Salesforce in the US, along with enterprise architects with 20+ years of experience building large-scale systems. Our India team consists of people from ISB, IIMs, and many who have previously been part of early-stage startups.
We are a fully remote team.
Founders come from Banking and Enterprise Technology backgrounds with previous experience scaling companies from scratch to $50M+ in revenues.
As a Site Reliability Engineer you will be in charge of:
- Designing, analyzing and troubleshooting large-scale distributed systems
- Engaging in cross-functional team discussions on design, deployment, operation, and maintenance, in a fast-moving, collaborative set up
- Building automation scripts to validate the stability, scalability, and reliability of interface.ai’s products & services as well as enhance interface.ai’s employees’ productivity
- Debugging and optimizing code and automating routine tasks
- Troubleshoot and diagnose issues (hardware or software), propose and implement solutions to ensure they occur with reduced frequency
- Perform the periodic on-call duty to handle security, availability, and reliability of interface.ai’s products
- You will follow and write good code and solid engineering practices
Requirements
You can be a great fit if you are :
- Extremely self-motivated
- Ability to learn quickly
- Growth Mindset (read this if you don't know what it means: https://www.amazon.com/Mindset-Psychology-Carol-S-Dweck/dp/0345472322)
- Emotional Maturity (read this if you don't know what it means: https://medium.com/@krisgage/15-signs-of-emotional-maturity-38b1a2ab9766)
- Passionate about the possibilities at the intersection of AI + Banking
- Worked in a startup of 5 to 30 employees
- A developer with a strong interest in systems design. You will be building, maintaining, and scaling our cloud infrastructure through software tooling and automation.
- 4-8 years of industry experience developing and troubleshooting large-scale infrastructure on the cloud
- Have a solid understanding of system availability, latency, and performance
- Strong programming skills in at least one major programming language and the ability to learn new languages as needed
- Strong System/network debugging skills
- Experience with management/automation tools such as Terraform/Puppet/Chef/Salt
- Experience with setting up production-level monitoring and telemetry
- Expertise in Container management & AWS
- Experience with kubernetes is a plus
- Experience building CI/CD pipelines
- Experience working with WebSockets, Redis, Postgres, Elasticsearch, Logstash
- Experience working in an agile team environment and proficient understanding of code versioning tools, such as Git.
- Ability to effectively articulate technical challenges and solutions.
- Proactive outlook for ways to make our systems more reliable
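On the monitoring and telemetry point, a quick back-of-the-envelope check engineers often script is a nearest-rank percentile over latency samples (the data below is invented; a real setup would query Prometheus or a similar backend):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample such that at
    least p% of values are <= it."""
    s = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(s)) - 1)  # 0-indexed rank
    return s[k]

# Fabricated request latencies in milliseconds
latencies_ms = [12, 15, 11, 300, 14, 13, 16, 18, 17, 250]
print(percentile(latencies_ms, 95))  # 300
```

Tail percentiles like p95/p99 are what SLOs usually track, since averages hide exactly the slow requests users notice.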
We are looking for people with programming skills in Python, SQL, and cloud computing. Candidates should have experience in at least one of the major cloud-computing platforms (AWS, Azure, or GCP), professional experience handling applications and databases in the cloud using VMs and Docker images, and the ability to design and develop applications for the cloud.
You will be responsible for
- Leading the DevOps strategy and development of SAAS Product Deployments
- Leading and mentoring other computer programmers.
- Evaluating student work and providing guidance in the online courses in programming and cloud computing.
Desired experience/skills
Qualifications: Graduate degree in Computer Science or related field, or equivalent experience.
Skills:
- Strong programming skills in Python, SQL,
- Cloud Computing
Experience:
2+ years of programming experience including Python, SQL, and Cloud Computing. Familiarity with command line working environment.
Note: A strong programming background, in any language and cloud computing platform, is required. We are flexible about the degree of familiarity needed for the specific environments (Python, SQL). If you have extensive experience in one of the cloud computing platforms and less in the others, you should still consider applying.
Soft Skills:
- Good interpersonal, written, and verbal communication skills, including the ability to explain concepts to others.
- A strong understanding of algorithms and data structures, and their performance characteristics.
- Awareness of and sensitivity to the educational goals of a multicultural population would also be desirable.
- Detail oriented and well organized.
Responsibilities
- Designing and building infrastructure to support AWS, Azure, and GCP-based Cloud services and infrastructure.
- Creating and utilizing tools to monitor our applications and services in the cloud including system health indicators, trend identification, and anomaly detection.
- Working with development teams to help engineer scalable, reliable, and resilient software running in the cloud.
- Participating in on-call escalation to troubleshoot customer-facing issues
- Analyzing and monitoring performance bottlenecks and key metrics to optimize software and system performance.
- Providing analytics and forecasts for cloud capacity, troubleshooting analysis, and uptime.
Skills
- Should have a couple of years' strong experience leading a DevOps team: planning and defining the DevOps roadmap and executing it together with the team
- Familiarity with the AWS cloud and JSON templates, Python, and AWS CloudFormation templates
- Designing solutions using one or more AWS features, tools, and technologies such as EC2, EBS, Glacier, S3, ELB, CloudFormation, Lambda, CloudWatch, VPC, RDS, Direct Connect, AWS CLI, REST API
- Design and implement system architecture with the AWS cloud
- Develop automation scripts with ARM templates, Ansible, Chef, Python, and PowerShell
- Knowledge of AWS services and cloud design patterns
- Knowledge of cloud fundamentals like autoscaling and serverless
- Have experience with DevOps and Infrastructure as Code: AWS environment and application automation utilizing CloudFormation and third-party tools, and CI/CD pipeline setup
- CI experience with the following is a must: Jenkins, Bitbucket/Git, Nexus or Artifactory, SonarQube, WireMock or another mocking solution
- Expert knowledge on Windows/Linux OS/Mac with at least 5-6 years of system administration experience
- Should have strong skills in using Jira
- Should have knowledge in managing the CI/CD pipeline on public cloud deployments using AWS
- Should have strong skills in using tools like Jenkins, Docker, Kubernetes (AWS EKS, Azure AKS), and CloudFormation.
- Experience in monitoring tools like Pingdom, Nagios, etc.
- Experience in reverse proxy services like Nginx and Apache
- Desirable: experience with Bitbucket and version control tools like Git/SVN
- Experience with manual/automated testing of application deployments desired
- Experience in database technologies such as PostgreSQL, MySQL
- Knowledge of helm and terraform
- Provide consultation and review all outgoing critical customer communications.
- Apply DevOps thinking in bringing the development and IT Ops process, people, and tools together within the company in order to increase the speed, efficiency, and quality.
- Perform architecture and security reviews for different projects, work with leads to develop strategy and roadmap for the client requirements. Involve in designing of the overall architecture of the system with another leads/architect.
- Develop and grow engineers in DevOps technology to meet the incoming requirements from the business team.
- Work with senior technical team to bring in new technologies/tools being used within the company. Develop and promote best practices and emerging concepts for DevSecOps and secure CI/CD. Participate in Solution Strategy, innovation areas, and technology roadmap.
Key Skills:
- Deals positively with high levels of uncertainty, ambiguity, and shifting priorities.
- Ability to influence stakeholders as a trusted advisor across all levels, including teams outside of shared services.
- Ability to think outside of the box and be innovative by keeping abreast of new trends, identifying opportunities to bring in change for business benefit.
- Implementing CI (Continuous Integration) and CD (Continuous Deployment); good exposure to CI and build management tools like Jenkins, Azure DevOps, GitHub Actions, Maven, Gradle, etc.
- Deployment and provisioning tools (Chef/Ansible/Terraform/AWS CDK etc)
- Docker Orchestration tools like Kubernetes/Swarm etc
- Good hands-on knowledge of automation scripting (Python, Shell, Ruby, etc.)
- Version control for source code management (SCM): Git/Bitbucket, etc.
- Expertise in Linux-based systems (Unix, Linux, Ubuntu), including managing security and Linux file system permissions
- Container orchestration tools: Kubernetes, Swarm, Mesos/Marathon; Docker, writing Dockerfiles, Docker Compose
- Expertise in managing Cloud resources and good exposure to Docker
- Public/Private/Hybrid cloud: AWS /Microsoft Azure/ Google Cloud Platform etc
- Extensive experience with cloud services, elastic capacity administration, and cloud deployment and migration.
- Good to have knowledge of tools like Splunk, New Relic, PagerDuty, VictorOps
- Familiarity with network protocols and elements: TCP/IP, HTTP(S), SSL, DNS, firewalls, routers, load balancers, proxies.
- Excellent at creating new workflows and improving existing ones within the agile software development lifecycle.
- Familiar with incident and change management processes.
- Ability to effectively prioritize work with fast-changing requirements.
- Troubleshoot and debug infrastructure, network, and operating system issues.
- Resolve complex issues in scenarios like resource consumption, server performance, backup strategy, and scaling.
- Investigate and perform Root Cause Analysis on users' reported issues and provide a workaround before implementing a final fix.
- Monitor servers and applications to ensure the smooth running of the IT architecture (applications, services, schedulers, server performance, etc.)
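On scaling scenarios like the one above, a common proportional rule (the same shape as the Kubernetes HPA formula, shown here as a sketch with invented defaults) computes desired replicas from observed utilisation:

```python
import math

def desired_replicas(current, cpu_util, target=0.6, min_r=2, max_r=20):
    """Proportional autoscaling rule:
    desired = ceil(current * observed / target), clamped to bounds."""
    desired = math.ceil(current * cpu_util / target)
    return max(min_r, min(max_r, desired))

print(desired_replicas(4, 0.9))  # 6  (scale out under load)
print(desired_replicas(4, 0.3))  # 2  (scale in, floored at min_r)
```

The clamps matter as much as the formula: the floor preserves redundancy during quiet periods, and the cap bounds cost if a metric misbehaves.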
Design Skills:
- Interpret and implement the designs of others adhering to standards and guidelines
- Design solutions within their area of expertise using technologies that already exist within Tesco
- Understand the roadmaps for their area of technology
- Design secure solutions
- Design solutions that can be consumed in a self-service manner by the engineering teams
- Understand the impact of technologies and innovation at an enterprise scale
- Demonstrate knowledge of the latest technology trends related to Infrastructure
- Understand how Industry trends impact their own area
- Identify opportunities to automate work and deliver against them










