
- Design cloud infrastructure that is secure, scalable, and highly available on AWS, Azure and GCP
- Work collaboratively with software engineering to define infrastructure and deployment requirements
- Provision, configure and maintain AWS, Azure, GCP cloud infrastructure defined as code
- Ensure configuration consistency and compliance using configuration management tools
- Administer and troubleshoot Linux-based systems
- Troubleshoot problems across a wide array of services and functional areas
- Build and maintain operational tools for deployment, monitoring, and analysis of AWS and Azure infrastructure and systems
- Perform infrastructure cost analysis and optimization
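
The cost-analysis responsibility above is the kind of task that is straightforward to script against the AWS Cost Explorer API. Below is a minimal sketch using boto3, assuming credentials are already configured; the date range and the choice of UnblendedCost grouped by service are illustrative, not prescriptive.

```python
# Minimal cost-analysis sketch using the AWS Cost Explorer API via boto3.
# Assumes AWS credentials are configured; the date range and grouping are illustrative.
import boto3


def monthly_cost_by_service(start: str, end: str) -> dict:
    """Return total unblended cost per AWS service between start and end (YYYY-MM-DD)."""
    ce = boto3.client("ce")
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    costs: dict = {}
    for period in resp["ResultsByTime"]:
        for group in period["Groups"]:
            service = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            costs[service] = costs.get(service, 0.0) + amount
    return costs


if __name__ == "__main__":
    report = monthly_cost_by_service("2024-01-01", "2024-02-01")  # hypothetical period
    for service, cost in sorted(report.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{service:45s} ${cost:,.2f}")
```

The same query can instead be grouped by a cost-allocation tag (GroupBy type TAG) to attribute spend to teams or projects before optimizing it.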

Job Title: Senior DevOps Engineer
Location: Gurgaon – Sector 39
Work Mode: 5 Days Onsite
Experience: 5+ Years
About the Role
We are looking for an experienced Senior DevOps Engineer to build, manage, and maintain highly reliable, scalable, and secure infrastructure. The role involves deploying product updates, handling production issues, implementing customer integrations, and leading DevOps best practices across teams.
Key Responsibilities
- Manage and maintain production-grade infrastructure ensuring high availability and performance.
- Deploy application updates, patches, and bug fixes across environments.
- Handle Level-2 support and resolve escalated production issues.
- Perform root cause analysis and implement preventive solutions.
- Build automation tools and scripts to improve system reliability and efficiency.
- Develop monitoring, logging, alerting, and reporting systems.
- Ensure secure deployments following data encryption and cybersecurity best practices.
- Collaborate with development, product, and QA teams for smooth releases.
- Lead and mentor a small DevOps team (3–4 engineers).
Core Focus Areas
Server Setup & Management (60%)
- Hands-on management of bare-metal servers.
- Server provisioning, configuration, and lifecycle management.
- Network configuration including redundancy, bonding, and performance tuning.
Queue Systems – Kafka / RabbitMQ (15%)
- Implementation and management of message queues for distributed systems (see the Kafka sketch at the end of this section).
Storage Systems – SAN / NAS (15%)
- Setup and management of enterprise storage systems.
- Ensure backup, recovery, and data availability.
Database Knowledge (5%)
- Working experience with Redis, MySQL/PostgreSQL, MongoDB, Elasticsearch.
- Basic database administration and performance tuning.
Telecom Exposure (Good to Have – 5%)
- Experience with SMS, voice systems, or real-time data processing environments.
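
For the queue-systems focus area above, here is a minimal produce/consume example using the kafka-python client. The broker address, topic name, and consumer group are assumptions for illustration only.

```python
# Minimal Kafka produce/consume sketch using the kafka-python client.
# Broker address, topic, and consumer group are illustrative assumptions.
import json

from kafka import KafkaConsumer, KafkaProducer

BROKERS = ["localhost:9092"]     # hypothetical broker
TOPIC = "media-jobs"             # hypothetical topic


def publish(event: dict) -> None:
    producer = KafkaProducer(
        bootstrap_servers=BROKERS,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
        acks="all",  # wait for the full in-sync replica set, trading latency for durability
    )
    producer.send(TOPIC, event)
    producer.flush()


def consume() -> None:
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BROKERS,
        group_id="media-workers",        # hypothetical consumer group
        auto_offset_reset="earliest",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    for msg in consumer:
        print(f"partition={msg.partition} offset={msg.offset} value={msg.value}")


if __name__ == "__main__":
    publish({"job_id": 42, "action": "transcode"})
```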
Technical Skills Required
- Linux administration & Shell scripting
- CI/CD tools – Jenkins
- Version control with Git (GitHub) or SVN, and branching strategies
- Docker & Kubernetes
- AWS cloud services
- Ansible for configuration management
- Databases: MySQL, MariaDB, MongoDB
- Web servers: Apache, Tomcat
- Load balancing & HA: HAProxy, Keepalived
- Monitoring tools: Nagios and related observability stacks
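
The skills list above ends with Nagios-style monitoring; a custom check is usually just a small script that follows the Nagios exit-code convention (0 OK, 1 WARNING, 2 CRITICAL). The sketch below checks an HTTP health endpoint; the URL and thresholds are hypothetical.

```python
#!/usr/bin/env python3
# Minimal Nagios-style check plugin: alert when an HTTP health endpoint is slow or down.
# Endpoint URL and thresholds are illustrative assumptions.
import sys
import time
import urllib.request

URL = "http://localhost:8080/health"   # hypothetical health endpoint
WARN_SECONDS = 1.0
CRIT_SECONDS = 3.0


def main() -> int:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            elapsed = time.monotonic() - start
            if resp.status != 200:
                print(f"CRITICAL - {URL} returned HTTP {resp.status}")
                return 2
    except Exception as exc:
        print(f"CRITICAL - {URL} unreachable: {exc}")
        return 2
    if elapsed >= CRIT_SECONDS:
        print(f"CRITICAL - response took {elapsed:.2f}s | time={elapsed:.3f}s")
        return 2
    if elapsed >= WARN_SECONDS:
        print(f"WARNING - response took {elapsed:.2f}s | time={elapsed:.3f}s")
        return 1
    print(f"OK - response took {elapsed:.2f}s | time={elapsed:.3f}s")
    return 0


if __name__ == "__main__":
    sys.exit(main())   # Nagios reads the exit code: 0 OK, 1 WARNING, 2 CRITICAL
```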
Job Title: Senior DevOps Engineer
Location: Sector 39, Gurgaon (Onsite)
Employment Type: Full-Time
Working Days: 6 Days (Alternate Saturdays Working)
Experience Required: 5+ Years
Team Role: Lead & Mentor a team of 3–4 engineers
About the Role
We are seeking a highly skilled Senior DevOps Engineer to lead our infrastructure and automation initiatives while mentoring a small team. This role involves setting up and managing physical and cloud-based servers, configuring storage systems, and implementing automation to ensure high system availability and reliability. The ideal candidate will have strong Linux administration skills, hands-on experience with DevOps tools, and the leadership capabilities to guide and grow the team.
Key Responsibilities
Infrastructure & Server Management (60%)
- Set up, configure, and manage bare-metal (physical) servers as well as cloud-based environments.
- Configure network bonding, firewalls, and system security for optimal performance and reliability.
- Implement and maintain high-availability solutions for mission-critical systems.
Queue Systems (Kafka / RabbitMQ) (15%)
- Deploy and manage message queue systems to support high-throughput, real-time data exchange.
- Ensure reliable event-driven communication between distributed services.
Storage Systems (SAN/NAS) (15%)
- Configure and manage Storage Area Networks (SAN) and Network Attached Storage (NAS).
- Optimize storage performance, redundancy, and availability.
Database Administration (5%)
- Administer and optimize MariaDB, MySQL, MongoDB, Redis, and Elasticsearch.
- Handle backup, recovery, replication, and performance tuning (a backup sketch appears at the end of these responsibilities).
General DevOps & Automation
- Deploy product updates, patches, and fixes while ensuring minimal downtime.
- Design and manage CI/CD pipelines using Jenkins or similar tools.
- Administer and automate workflows with Docker, Kubernetes, Ansible, AWS, and Git.
- Manage web and application servers (Apache httpd, Tomcat).
- Implement monitoring, logging, and alerting systems (Nagios) and load-balancing/high-availability tooling (HAProxy, Keepalived).
- Conduct root cause analysis and implement automation to reduce manual interventions.
- Mentor a team of 3–4 engineers, fostering best practices and continuous improvement.
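
As a concrete illustration of the backup and recovery duty flagged earlier in this section, here is a minimal sketch that dumps a MySQL database and ships the archive to S3. The host, credentials handling, database name, and bucket are hypothetical; a real job would add retention, failure alerting, and periodic restore tests.

```python
# Minimal nightly-backup sketch: dump a MySQL database and upload the archive to S3.
# Host, user, database, and bucket names are illustrative assumptions; the password
# is expected to come from ~/.my.cnf rather than the command line.
import datetime
import subprocess

import boto3

DB_HOST = "db01.internal"        # hypothetical
DB_USER = "backup"               # hypothetical
DB_NAME = "appdb"                # hypothetical
BUCKET = "example-db-backups"    # hypothetical


def backup_and_upload() -> str:
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")
    dump_path = f"/tmp/{DB_NAME}-{stamp}.sql.gz"

    # mysqldump piped through gzip; --single-transaction avoids locking InnoDB tables.
    with open(dump_path, "wb") as out:
        dump = subprocess.Popen(
            ["mysqldump", "-h", DB_HOST, "-u", DB_USER, "--single-transaction", DB_NAME],
            stdout=subprocess.PIPE,
        )
        subprocess.run(["gzip", "-c"], stdin=dump.stdout, stdout=out, check=True)
        if dump.wait() != 0:
            raise RuntimeError("mysqldump failed")

    boto3.client("s3").upload_file(dump_path, BUCKET, f"mysql/{DB_NAME}/{stamp}.sql.gz")
    return dump_path


if __name__ == "__main__":
    print("uploaded", backup_and_upload())
```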
Required Skills & Qualifications
✅ 5+ years of proven DevOps engineering experience
✅ Strong expertise in Linux administration & shell scripting
✅ Hands-on experience with bare-metal server management & storage systems
✅ Proficiency in Docker, Kubernetes, AWS, Jenkins, Git, and Ansible
✅ Experience with Kafka or RabbitMQ in production environments
✅ Knowledge of CI/CD, automation, monitoring, and high-availability tools (Nagios, HAProxy, Keepalived)
✅ Excellent problem-solving, troubleshooting, and leadership abilities
✅ Strong communication skills with the ability to mentor and lead teams
Good to Have
- Experience in Telecom projects involving SMS, voice, or real-time data handling.
- Understanding of maintaining existing systems (virtual machines) and the Linux stack
- Experience running, operating, and maintaining Kubernetes pods
- Strong scripting skills
- Experience with AWS
- Knowledge of configuring and optimizing open-source tools such as Kafka
- Strong automation mindset: ability to identify opportunities to speed up the build and deploy process with robust validation and automation
- Optimizing and standardizing monitoring and alerting
- Experience with Google Cloud Platform (GCP)
- Experience or knowledge of Python is an added advantage
- Experience with tools such as Jenkins, Kubernetes, and Terraform
Job Summary: We are looking for a senior DevOps engineer to help us build functional systems that improve customer experience. They will be responsible for deploying product updates, identifying production issues and implementing integrations that meet our customers' needs.
Key Responsibilities
- Utilise various open-source technologies and build independent web-based tools, microservices, and solutions
- Write deployment scripts (see the sketch after this list)
- Configure and manage data sources like MySQL, MongoDB, Elasticsearch, etc.
- Configure and deploy pipelines for various microservices using CI/CD tools
- Set up automated server monitoring and ensure HA adherence
- Defining and setting development, test, release, update, and support processes for DevOps operation
- Coordination and communication within the team and with customers where integrations are required
- Work with company personnel to define technical problems and requirements, determine solutions, and implement those solutions.
- Work with product team to design automated pipelines to support SaaS delivery and operations in cloud platforms.
- Review and act on service requests, infrastructure requests, and incidents logged by our implementation teams and clients; identify, analyse, and resolve infrastructure vulnerabilities and application deployment issues
- Modify and improve existing systems; suggest and implement process improvements.
- Collaborate with software engineers to help them deploy and operate different systems, and help automate and streamline the company's operations and processes.
- Develop interface simulators and design automated module deployments.
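
Several of the responsibilities above come down to deployment scripts invoked from a CI/CD pipeline. The sketch below builds and pushes a Docker image and then rolls a Kubernetes Deployment to the new tag; the registry, image, deployment, namespace, and container names are hypothetical, and docker/kubectl are assumed to be installed and authenticated.

```python
# Minimal deploy-script sketch intended to be called from a CI/CD pipeline step.
# Registry, image, Deployment, namespace, and container names are illustrative assumptions;
# docker and kubectl are assumed to be installed and already authenticated.
import subprocess
import sys

REGISTRY = "registry.example.com/platform"   # hypothetical
IMAGE = "orders-service"                     # hypothetical image and container name
DEPLOYMENT = "orders-service"                # hypothetical Kubernetes Deployment
NAMESPACE = "production"                     # hypothetical


def run(cmd: list) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)          # fail the pipeline step on any error


def deploy(tag: str) -> None:
    image = f"{REGISTRY}/{IMAGE}:{tag}"
    run(["docker", "build", "-t", image, "."])
    run(["docker", "push", image])
    # Point the Deployment at the new image and wait for the rollout to complete.
    run(["kubectl", "-n", NAMESPACE, "set", "image",
         f"deployment/{DEPLOYMENT}", f"{IMAGE}={image}"])
    run(["kubectl", "-n", NAMESPACE, "rollout", "status",
         f"deployment/{DEPLOYMENT}", "--timeout=300s"])


if __name__ == "__main__":
    deploy(sys.argv[1] if len(sys.argv) > 1 else "latest")
```

In practice the same steps would live in a Jenkins pipeline or similar CI tool; the script form keeps them runnable and testable outside the CI system as well.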
Key Skills
- Bachelor's degree in software engineering, computer science, information technology, or information systems.
- 3+ years of experience in managing Linux based cloud microservices infrastructure (AWS, GCP or Azure)
- Hands-on experience with databases including MySQL.
- Experience with OS tuning and optimization for running databases and other scalable microservice solutions
- Proficient working with git repositories and git workflows
- Able to setup and manage CI/CD pipelines
- Excellent troubleshooting, working knowledge of various tools, open-source technologies, and cloud services
- Awareness of critical concepts in DevOps and Agile principles
- Sense of ownership and pride in your performance and its impact on company’s success
- Critical thinker and problem-solving skills
- Extensive experience in DevOps engineering, team management, and collaboration.
- Ability to install and configure software, gather test-stage data, and perform debugging.
- Ability to ensure smooth software deployment by writing script updates and running diagnostics.
- Proficiency in documenting processes and monitoring various metrics.
- Advanced knowledge of best practices related to data encryption and cybersecurity.
- Ability to keep up with software development trends and innovation.
- Exceptional interpersonal and communication skills
Experience:
- Must have 4+ years of experience as a DevOps Engineer in a SaaS product-based company
About SuperProcure
SuperProcure is a leading logistics and supply chain management solutions provider that aims to bring efficiency, transparency, and process optimization across the globe with the help of technology and data. SuperProcure started its journey in 2017 to help companies digitize their logistics operations. We created industry-recognized products which are now being used by 150+ companies such as Tata Consumer Products, ITC, Flipkart, Tata Chemicals, PepsiCo, L&T Constructions, GMM Pfaudler, Havells, and others. Our products help achieve real-time visibility, 100% audit adherence and transparency, a 300% improvement in team productivity, up to 9% savings in freight costs, and many more benefits. SuperProcure is determined to make the lives of logistics teams easier, add value, and help establish a fair and beneficial process for the company.
SuperProcure is backed by IndiaMart and incubated under IIMCIP & Lumis, Supply Chain Labs. SuperProcure was also recognized among the Top 50 Emerging Start-ups of India at the NASSCOM Product Conclave in Bengaluru and was part of the National Logistics Policy recently launched by the Prime Minister of India. More details about our journey can be found here
Life @ SuperProcure
SuperProcure operates in an extremely innovative, entrepreneurial, analytical, and problem-solving work culture. Every team member is fully motivated and committed to the company's vision and believes in getting things done. In our organization, every employee is the CEO of what he/she does; from conception to execution, the work needs to be thought through.
Our people are the core of our organization, and we believe in empowering them and making them a part of the daily decision-making, which impacts the business and shapes the company's overall strategy. They are constantly provided with resources, mentorship, and support from our highly energetic teams and leadership. SuperProcure is extremely inclusive and believes in collective success.
Looking for a bland, routine 9-6 job? PLEASE DO NOT APPLY. Looking for a job where you wake up and add significant value to a $180 Billion logistics industry everyday? DO APPLY.
OTHER DETAILS
- Engagement : Full Time
- No. of openings : 1
- CTC: 12-20 LPA
- Manage AWS services and day-to-day cloud operations.
- Work closely with the development and QA teams to make the deployment process smooth, and devise new tools and technologies to automate as many components as possible.
- Strengthen the infrastructure in terms of reliability (configuring HA, etc.), security (cloud network management, VPC, etc.), and scalability (configuring clusters, load balancers, etc.).
- Expert-level understanding of DB replication, sharding (MySQL database systems), HA clusters, failovers, and recovery mechanisms (see the replication-check sketch after this list).
- Build and maintain CI/CD (continuous integration/deployment) workflows.
- Expert knowledge of AWS EC2, S3, RDS, CloudFront, and other AWS services and products.
- Install and manage software systems to support the development team, e.g. database installation and administration, web servers, caching, and other such systems.
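
As a small illustration of the replication and failover point above, here is a sketch that checks replica health and lag with the PyMySQL client. The host and credentials are hypothetical; MySQL 8.0.22+ naming is assumed (SHOW REPLICA STATUS), while older servers answer to SHOW SLAVE STATUS and the Slave_* / Seconds_Behind_Master column names instead.

```python
# Minimal replication-lag check for a MySQL replica using PyMySQL.
# Host and credentials are illustrative assumptions.
import pymysql

REPLICA_HOST = "db-replica.internal"   # hypothetical
MAX_LAG_SECONDS = 30


def check_replica() -> None:
    conn = pymysql.connect(host=REPLICA_HOST, user="monitor", password="change-me",
                           cursorclass=pymysql.cursors.DictCursor)
    try:
        with conn.cursor() as cur:
            # MySQL 8.0.22+ wording; older servers use "SHOW SLAVE STATUS".
            cur.execute("SHOW REPLICA STATUS")
            status = cur.fetchone()
    finally:
        conn.close()

    if not status:
        raise SystemExit("CRITICAL: replication is not configured on this host")

    lag = status.get("Seconds_Behind_Source", status.get("Seconds_Behind_Master"))
    io_ok = status.get("Replica_IO_Running", status.get("Slave_IO_Running")) == "Yes"
    sql_ok = status.get("Replica_SQL_Running", status.get("Slave_SQL_Running")) == "Yes"

    if not (io_ok and sql_ok) or lag is None or lag > MAX_LAG_SECONDS:
        raise SystemExit(f"CRITICAL: io_running={io_ok} sql_running={sql_ok} lag={lag}")
    print(f"OK: replica healthy, lag={lag}s")


if __name__ == "__main__":
    check_replica()
```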
Requirements:
- B.Tech or Bachelor's degree in a related field.
- 2-5 years of hands-on experience with AWS cloud services such as EC2, ECS, CloudWatch, SQS, S3, CloudFront, and Route 53.
- Experience setting up CI/CD pipelines and successfully running large-scale systems.
- Experience with source control systems (SVN, Git, etc.) and deployment/build automation tools such as Jenkins, Bamboo, and Ansible.
- Good understanding of Linux/Unix-based systems and hands-on experience with their networking, security, and administration.
- At least 1-2 years of experience with shell/Python/Perl scripting; Bash scripting experience is an added advantage.
- Experience with automation tasks such as automated backups, configuring failovers, and automating deployment-related processes is a must-have.
- Good to have: knowledge of setting up the ELK stack, infrastructure-as-code tools such as Terraform, and automating processes with AWS SDK/CLI tools and scripts.
- Bachelor’s and/or master’s degree in Computer Science, Computer Engineering or related technical discipline
- About 5 years of professional experience supporting AWS cloud environments
- AWS Certified Solutions Architect (Associate or Professional)
- Experience serving as lead (shift management, reporting) will be a plus
- AWS Certified Solutions Architect – Professional certification (must have)
- Minimum 4 years' experience, maximum 8 years' experience.
- 100% work from office in Hyderabad
- Very fluent in English

Platform Services Engineer
DevSecOps Engineer
- Strong Systems Experience- Linux, networking, cloud, APIs
- Scripting language Programming - Shell, Python
- Strong Debugging Capability
- AWS Platform - IAM, Network, EC2, Lambda, S3, CloudWatch
- Knowledge on Terraform, Packer, Ansible, Jenkins
- Observability - Prometheus, InfluxDB, Dynatrace, Grafana, Splunk
- DevSecOps CI/CD - Jenkins
- Microservices
- Security & Access Management
- Container Orchestration a plus - Kubernetes, Docker etc.
- Big data platform knowledge (EMR, Databricks, Cloudera) a plus
- 7+ years of experience in System Administration, Networking, Automation, Monitoring
- Excellent problem solving, analytical skills and technical troubleshooting skills
- Experience managing systems deployed in public cloud platforms (Microsoft Azure, AWS or Google Cloud)
- Experience implementing and maintaining CI/CD pipelines (Jenkins, Concourse, etc.)
- Linux experience, flavours: Ubuntu, Red Hat, CentOS (sysadmin, Bash scripting)
- Experience setting up monitoring (Datadog, Splunk, etc.)
- Experience in Infrastructure Automation tools like Terraform
- Experience in Package Manager for Kubernetes like Helm Charts
- Experience with databases and data storage (Oracle, MongoDB, PostgreSQL, ELK stack)
- Experience with Docker
- Experience with orchestration technologies (Kubernetes or DC/OS)
- Familiar with Agile Software Development
Projects you'll be working on:
- We're focused on enhancing our product for our clients and their users, as well as streamlining operations and improving our technical foundation.
- Writing scripts for provisioning, configuration, and deployment of instances (infrastructure automation) on GCP (see the sketch after this list)
- Managing Kubernetes cluster
- Manage products and services like VPC, Elasticsearch, Cloud Functions, RabbitMQ, Redis servers, Postgres infrastructure, App Engine, etc.
- Supporting developers in setting up infrastructure for services
- Manage and improve microservices infrastructure
- Managing high availability, low latency applications
- Focus on security best practices and assist in security and compliance activities
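
For the GCP infrastructure-automation work noted above, here is a minimal inventory sketch using the google-cloud-compute client; the project and zone are hypothetical and application-default credentials are assumed. Listing is the simplest case, but the same client also exposes insert and delete calls used for provisioning.

```python
# Minimal GCP inventory sketch: list Compute Engine instances in one zone.
# Project and zone are illustrative assumptions; application-default credentials are assumed.
from google.cloud import compute_v1

PROJECT = "example-project"   # hypothetical
ZONE = "asia-south1-a"        # hypothetical


def list_instances() -> None:
    client = compute_v1.InstancesClient()
    for instance in client.list(project=PROJECT, zone=ZONE):
        machine_type = instance.machine_type.rsplit("/", 1)[-1]   # strip the full API URL
        print(f"{instance.name:30s} {instance.status:12s} {machine_type}")


if __name__ == "__main__":
    list_instances()
```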
Requirements
- Minimum 3 years' experience in a DevOps role
- Minimum 1 year's experience with Kubernetes clusters (infrastructure as code, maintenance, and scalability).
- Bash expertise; professional programming experience in Node or Python
- Experience setting up, configuring, and using Jenkins or other CI tools, and building CI/CD pipelines
- Experience setting up microservices architecture
- Experience with package management and deployments
- Thorough understanding of networking.
- Understanding of all common services and protocols
- Experience in web server configuration, monitoring, network design and high availability
- Thorough understanding of DNS, VPN, SSL
Technologies you'll work with:
- GKE, Prometheus, Grafana, Stackdriver (see the exporter sketch after this list)
- ArgoCD and GitHub Actions
- NodeJS Backend
- Postgres, ElasticSearch, Redis, RabbitMQ
- Whatever else you decide - we're constantly re-evaluating our stack and tools
- Having prior experience with the technologies is a plus, but not mandatory for skilled candidates.
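
Prior experience with this stack is optional, but as a taste of the Prometheus/Grafana side, here is a minimal sketch of exposing custom application metrics with the prometheus_client library; the metric names, label, and port are hypothetical. Prometheus would scrape the /metrics endpoint and Grafana would chart the resulting series.

```python
# Minimal Prometheus exporter sketch using prometheus_client.
# Metric names, the label, and the listen port are illustrative assumptions.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
QUEUE_DEPTH = Gauge("app_queue_depth", "Jobs currently waiting in the queue")


def main() -> None:
    start_http_server(8000)   # metrics exposed at http://localhost:8000/metrics
    while True:
        # Stand-in for real work; Prometheus scrapes the current values on its own schedule.
        REQUESTS.labels(status="200").inc()
        QUEUE_DEPTH.set(random.randint(0, 50))
        time.sleep(5)


if __name__ == "__main__":
    main()
```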
Benefits
- Remote Option - You can work from location of your choice :)
- Reimbursement of Home Office Setup
- Competitive Salary
- Friendly atmosphere
- Flexible paid vacation policy










