
- 5+ years of experience in DevOps including automated system configuration, application deployment, and infrastructure-as-code.
- Advanced Linux system administration abilities.
- Real-world experience managing large-scale production environments on AWS or GCP; multi-account management is a plus.
- Solid understanding of CI/CD pipelines using GitHub, CircleCI/Jenkins, and JFrog Artifactory/Nexus.
- Experience with at least one configuration management tool (Ansible, Puppet, or Chef) is a must.
- Experience with at least one scripting language (Shell, Python, etc.).
- Experience in containerization using Docker and orchestration using Kubernetes/EKS/GKE is a must.
- Solid understanding of SSL and DNS.
- Experience deploying and running open-source monitoring/graphing solutions such as Prometheus and Grafana.
- Basic understanding of networking concepts.
- Always adhere to security best practices.
- Knowledge of big data (Hadoop/Druid) systems administration is a plus.
- Experience managing and running databases (MySQL/MariaDB/Postgres) is an added advantage.
What you get to do
- Work with development teams to build and maintain cloud environments to specifications developed closely with multiple teams. Support and automate the deployment of applications into those environments.
- Diagnose and resolve active, latent, and systemic reliability issues across the entire stack: hardware, software, application, and network. Work closely with development teams to troubleshoot and resolve application and service issues.
- Continuously improve Conviva SaaS services and infrastructure for availability, performance and security
- Implement security best practices – primarily patching of operating systems and applications
- Automate everything. Build proactive monitoring and alerting tools. Provide standards, documentation, and coaching to developers.
- Participate in 12x7 on-call rotations
- Work with third-party service/support providers for installations, support-related calls, problem resolution, etc.
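The "build proactive monitoring and alerting tools" responsibility above can be sketched in miniature. The snippet below mimics the hold-down behaviour of Prometheus-style alert rules (an alert fires only after several consecutive breaches, to avoid paging on transient spikes); the threshold and sample values are invented for illustration.

```python
def evaluate_alert(samples, threshold, for_n=3):
    """Fire an alert only after `for_n` consecutive samples breach the
    threshold, mimicking a Prometheus-style `for:` hold-down that
    suppresses flapping alerts. Figures here are illustrative."""
    breached = 0
    for value in samples:
        breached = breached + 1 if value > threshold else 0
        if breached >= for_n:
            return True
    return False

# A brief CPU spike (one sample over 90%) should not page anyone;
# a sustained breach should.
brief_spike = evaluate_alert([40, 95, 50, 45], threshold=90)
sustained = evaluate_alert([40, 95, 96, 97, 50], threshold=90)
```

Real deployments would express the same rule declaratively (e.g. a Prometheus alerting rule with a `for:` clause) rather than in application code.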

Required Skills: Advanced AWS Infrastructure Expertise, CI/CD Pipeline Automation, Monitoring, Observability & Incident Management, Security, Networking & Risk Management, Infrastructure as Code & Scripting
Criteria:
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies (B2C scale preferred)
- Strong hands-on AWS expertise across core and advanced services (EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, VPC, IAM, ELB/ALB, Route53)
- Proven experience designing high-availability, fault-tolerant cloud architectures for large-scale traffic
- Strong experience building & maintaining CI/CD pipelines (Jenkins mandatory; GitHub Actions/GitLab CI a plus)
- Prior experience running production-grade microservices deployments and automated rollout strategies (Blue/Green, Canary)
- Hands-on experience with monitoring & observability tools (Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.)
- Solid hands-on experience with MongoDB in production, including performance tuning, indexing & replication
- Strong scripting skills (Bash, Shell, Python) for automation
- Hands-on experience with IaC (Terraform, CloudFormation, or Ansible)
- Deep understanding of networking fundamentals (VPC, subnets, routing, NAT, security groups)
- Strong experience in incident management, root cause analysis & production firefighting
Description
Role Overview
Company is seeking an experienced Senior DevOps Engineer to design, build, and optimize cloud infrastructure on AWS, automate CI/CD pipelines, implement monitoring and security frameworks, and proactively identify scalability challenges. This role requires someone who has hands-on experience running infrastructure at B2C product scale, ideally in media/OTT or high-traffic applications.
Key Responsibilities
1. Cloud Infrastructure — AWS (Primary Focus)
- Architect, deploy, and manage scalable infrastructure using AWS services such as EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, ELB/ALB, VPC, IAM, Route53, etc.
- Optimize cloud cost, resource utilization, and performance across environments.
- Design high-availability, fault-tolerant systems for streaming workloads.
2. CI/CD Automation
- Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
- Automate deployments for microservices, mobile apps, and backend APIs.
- Implement blue/green and canary deployments for seamless production rollouts.
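The canary strategy named above can be sketched as a weighted traffic split plus a metrics gate. This is a minimal illustration, not a production rollout controller: `error_rate(weight)` is a hypothetical stand-in for a real metrics query (Prometheus/CloudWatch), and the traffic steps are invented.

```python
import random

def route(canary_weight: float) -> str:
    """Send a fraction of requests to the canary build (weighted split)."""
    return "canary" if random.random() < canary_weight else "stable"

def canary_rollout(error_rate, steps=(0.05, 0.25, 0.50, 1.00), threshold=0.01):
    """Walk the canary through increasing traffic shares, bailing out if
    the observed error rate spikes. `error_rate(weight)` stands in for a
    real metrics query and is hypothetical here."""
    for weight in steps:
        if error_rate(weight) > threshold:
            return ("rolled_back", weight)   # shift all traffic back to stable
    return ("promoted", 1.0)                 # canary becomes the new stable
```

In practice the same gate is usually delegated to deployment tooling (e.g. a load balancer's weighted target groups) rather than hand-rolled, but the promote-or-rollback logic is the same.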
3. Observability & Monitoring
- Implement logging, metrics, and alerting using tools like Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.
- Perform proactive performance analysis to minimize downtime and bottlenecks.
- Set up dashboards for real-time visibility into system health and user traffic spikes.
4. Security, Compliance & Risk Highlighting
- Conduct frequent risk assessments and identify vulnerabilities in:
  - Cloud architecture
  - Access policies (IAM)
  - Secrets & key management
  - Data flows & network exposure
- Implement security best practices including VPC isolation, WAF rules, firewall policies, and SSL/TLS management.
5. Scalability & Reliability Engineering
- Analyze traffic patterns for OTT-specific load variations (weekends, new releases, peak hours).
- Identify scalability gaps and propose solutions across:
  - Microservices
  - Caching layers
  - CDN distribution (CloudFront)
  - Database workloads
- Perform capacity planning and load testing to ensure readiness for 10x traffic growth.
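The capacity-planning point above reduces to simple arithmetic once per-node throughput is known. A back-of-envelope sketch, with all figures (growth multiple, headroom, per-node requests/second) being illustrative assumptions rather than measured benchmarks:

```python
import math

def required_nodes(peak_rps: int, growth_factor: int = 10,
                   headroom: float = 0.3, per_node_rps: int = 500) -> int:
    """Estimate node count for a target growth multiple plus safety
    headroom. All defaults are illustrative, not real benchmarks."""
    target_rps = peak_rps * growth_factor * (1 + headroom)
    return math.ceil(target_rps / per_node_rps)

# e.g. 1,000 req/s today, planning for 10x plus 30% headroom:
nodes = required_nodes(1000)
```

Real capacity planning would validate the `per_node_rps` figure with load testing before trusting the estimate.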
6. Database & Storage Support
- Administer and optimize MongoDB for high-read/low-latency use cases.
- Design backup, recovery, and data replication strategies.
- Work closely with backend teams to tune query performance and indexing.
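The indexing work above often comes down to compound-index key ordering. MongoDB's documentation describes the ESR rule of thumb (Equality fields first, then Sort fields, then Range fields); the query shape below is hypothetical.

```python
def esr_index(equality, sort, range_):
    """Order compound-index keys per MongoDB's ESR rule of thumb:
    Equality fields first, then Sort fields, then Range fields."""
    return list(equality) + list(sort) + list(range_)

# Hypothetical query: filter {status: "active", views: {$gt: 1000}},
# sorted by created_at -> index keys (status, created_at, views).
keys = esr_index(["status"], ["created_at"], ["views"])
```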
7. Automation & Infrastructure as Code
- Implement IaC using Terraform, CloudFormation, or Ansible.
- Automate repetitive infrastructure tasks to ensure consistency across environments.
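The "consistency across environments" goal above is typically achieved with one shared base definition plus small per-environment overrides, the pattern Terraform variable files and workspaces encode declaratively. A minimal sketch (all names and values invented for illustration):

```python
def render_env(base: dict, overrides: dict) -> dict:
    """Merge a shared base config with per-environment overrides so every
    environment differs only where explicitly declared."""
    merged = dict(base)
    merged.update(overrides)
    return merged

# Illustrative values only -- not a real environment definition.
BASE = {"instance_type": "t3.medium", "min_size": 2, "region": "us-east-1"}
ENVS = {
    "staging": {"min_size": 1},
    "prod": {"instance_type": "m5.large", "min_size": 3},
}
configs = {name: render_env(BASE, o) for name, o in ENVS.items()}
```

The benefit is that drift between environments becomes visible as an explicit override rather than an accident of hand-edited configs.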
Required Skills & Experience
Technical Must-Haves
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies.
- Strong hands-on experience with AWS (core and advanced services).
- Expertise in Jenkins CI/CD pipelines.
- Solid background working with MongoDB in production environments.
- Good understanding of networking: VPCs, subnets, security groups, NAT, routing.
- Strong scripting experience (Bash, Python, Shell).
- Experience handling risk identification, root cause analysis, and incident management.
Nice to Have
- Experience with OTT, video streaming, media, or any content-heavy product environments.
- Familiarity with containers (Docker), orchestration (Kubernetes/EKS), and service mesh.
- Understanding of CDN, caching, and streaming pipelines.
Personality & Mindset
- Strong sense of ownership and urgency—DevOps is mission critical at OTT scale.
- Proactive problem solver with ability to think about long-term scalability.
- Comfortable working with cross-functional engineering teams.
Why Join Company?
- Build and operate infrastructure powering millions of monthly users.
- Opportunity to shape DevOps culture and cloud architecture from the ground up.
- High-impact role in a fast-scaling Indian OTT product.
Are you eager to kick-start your career in DevOps and learn the latest technologies to solve complex problems? Do you enjoy hands-on problem-solving, exploring cloud technologies, and supporting innovative solutions? At Aivar, we are looking for a DevOps Engineer to join our team.
In this role, you will assist in the implementation and support of DevOps practices, including containerization, orchestration, and CI/CD pipelines, while learning from industry experts.
This is an exciting opportunity to grow your skills and work on transformative projects in a collaborative environment.
Requirements
Preferred Technical Qualifications
- 2 – 5 years of experience in DevOps, system administration, or software development (internship experience is acceptable).
- Familiarity with container technologies such as Docker and Kubernetes.
- Understanding of Infrastructure as Code (IaC) tools like Terraform or CloudFormation.
- Knowledge of CI/CD tools like Jenkins, GitLab CI, or GitHub Actions.
- Programming experience in Python, Java, or another language used in DevOps workflows.
- Understanding of cloud platforms such as AWS, Azure, or GCP
- Willingness to learn advanced Kubernetes concepts and troubleshooting techniques.
Preferred Soft Skills
Collaboration Skills:
- Willingness to work in cross-functional teams and support the alignment of technical solutions with business goals.
- Eager to learn how to work effectively with customers, engineers, and architects to deliver DevOps solutions.
Effective Communication:
- Ability to communicate technical concepts clearly to team members and stakeholders.
- Desire to improve documentation and presentation skills to share ideas effectively.
Problem-Solving Mindset:
- Curiosity to explore and learn solutions for infrastructure challenges in DevOps environments.
- Interest in learning how to diagnose and resolve issues in containerized and distributed systems.
Adaptability and Continuous Learning:
- Strong desire to learn emerging DevOps tools and practices in a dynamic environment.
- Commitment to staying updated with trends in cloud computing, DevOps, and related technologies.
Team-Oriented Approach:
- Enthusiastic about contributing to a collaborative team environment and supporting overall project goals.
- Open to feedback and actively sharing knowledge to help the team grow.
Certifications (Optional but Preferred)
- Certified Kubernetes Application Developer (CKAD) or equivalent Linux Foundation certification
- Any beginner-level certifications in DevOps or cloud services are a plus.
- Any AWS Certification
Why Join Aivar?
At Aivar, we are re-imagining analytics consulting by integrating AI and machine learning to create repeatable solutions that deliver measurable business outcomes. With a culture centered on innovation, collaboration, and growth, we provide opportunities to work on transformative projects across industries.
About Diversity and Inclusion
We believe diversity drives innovation and growth. Our inclusive environment encourages individuals of all backgrounds to contribute their unique perspectives to shape the future of analytics.
Job Description:
Infilect is a GenAI company pioneering the use of Image Recognition in Consumer Packaged Goods retail.
We are looking for a Senior DevOps Engineer to be responsible and accountable for the smooth running of our Cloud, AI workflows, and AI-based Computer Systems. Furthermore, the candidate will supervise the implementation and maintenance of the company’s computing needs including the in-house GPU & AI servers along with AI workloads.
Responsibilities
- Understanding and automating AI-based deployments and AI-based workflows
- Implementing various development, testing, automation tools, and IT infrastructure
- Manage cloud, computer systems, and other IT assets.
- Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD pipeline)
- Design, develop, implement, and coordinate systems, policies, and procedures for Cloud and on-premise systems
- Ensure the security of data, network access, and backup systems
- Act in alignment with user needs and system functionality to contribute to organizational policy
- Identify problematic areas, perform RCA and implement strategic solutions in time
- Preserve assets, information security, and control structures
- Handle monthly/annual cloud budget and ensure cost effectiveness
Requirements and skills
- Well versed in automation tools such as Docker, Kubernetes, Puppet, Ansible etc.
- Working knowledge of Python and the SQL database stack, or any full stack with relevant tools.
- Understanding agile development, CI/CD, sprints, code reviews, Git and GitHub/Bitbucket workflows
- Well versed with ELK stack or any other logging, monitoring and analysis tools
- Proven working experience of 2+ years as a DevOps/Tech Lead/IT Manager or in relevant positions
- Excellent knowledge of technical management, information analysis, and of computer hardware/software systems
- Hands-on experience with computer networks, network administration, and network installation
- Knowledge of ISO/SOC 2 Type II implementation will be a plus
- BE/B.Tech/ME/M.Tech in Computer Science, IT, Electronics or a similar field
Job Purpose and Impact
The DevOps Engineer is a key position to strengthen the security automation capabilities which have been identified as a critical area for growth and specialization within Global IT’s scope. As part of the Cyber Intelligence Operation’s DevOps Team, you will be helping shape our automation efforts by building, maintaining and supporting our security infrastructure.
Key Accountabilities
- Collaborate with internal and external partners to understand and evaluate business requirements.
- Implement modern engineering practices to ensure product quality.
- Provide designs, prototypes and implementations incorporating software engineering best practices, tools and monitoring according to industry standards.
- Write well-designed, testable and efficient code using full-stack engineering capability.
- Integrate software components into a fully functional software system.
- Independently solve moderately complex issues with minimal supervision, while escalating more complex issues to appropriate staff.
- Proficiency in at least one configuration management or orchestration tool, such as Ansible.
- Experience with cloud monitoring and logging services.
Qualifications
Minimum Qualifications
- Bachelor's degree in a related field or equivalent experience
- Knowledge of public cloud services & application programming interfaces
- Working experience with continuous integration and delivery practices
Preferred Qualifications
- 3-5 years of relevant experience in IT, IS, or software development
- Experience in:
  - Code repositories such as Git
  - Scripting languages (Python & PowerShell)
  - Using Windows, Linux, Unix, and mobile platforms within cloud services such as AWS
  - Cloud infrastructure as a service (IaaS) / platform as a service (PaaS), microservices, Docker containers, Kubernetes, Terraform, Jenkins
  - Databases such as Postgres, SQL, Elastic
About the role:
We are seeking a highly skilled Azure DevOps Engineer with a strong background in backend development to join our rapidly growing team. The ideal candidate will have at least 4 years of experience, including extensive experience building and maintaining CI/CD pipelines, automating deployment processes, and optimizing infrastructure on Azure. Expertise in backend technologies and development frameworks is also required, in order to collaborate effectively with the development team in delivering scalable and efficient solutions.
Responsibilities
- Collaborate with development and operations teams to implement continuous integration and deployment processes.
- Automate infrastructure provisioning, configuration management, and application deployment using tools such as Ansible and Jenkins.
- Design, implement, and maintain Azure DevOps pipelines for continuous integration and continuous delivery (CI/CD)
- Develop and maintain build and deployment pipelines, ensuring that they are scalable, secure, and reliable.
- Monitor and maintain the health of the production infrastructure, including load balancers, databases, and application servers.
- Automate the software development and delivery lifecycle, including code building, testing, deployment, and release.
- Familiarity with Azure CLI, Azure REST APIs, Azure Resource Manager template, Azure billing/cost management and the Azure Management Console
- Must have experience with at least one programming language (Java, .NET, Python)
- Ensure high availability of the production environment by implementing disaster recovery and business continuity plans.
- Build and maintain monitoring, alerting, and trending operational tools (CloudWatch, New Relic, Splunk, ELK, Grafana, Nagios).
- Stay up to date with new technologies and trends in DevOps and make recommendations for improvements to existing processes and infrastructure.
- Contribute to backend development projects, ensuring robust and scalable solutions.
- Work closely with the development team to understand application requirements and provide technical expertise in backend architecture.
- Design and implement database schemas.
- Identify and implement opportunities for performance optimization and scalability of backend systems.
- Participate in code reviews, architectural discussions, and sprint planning sessions.
- Stay updated with the latest Azure technologies, tools, and best practices to continuously improve our development and deployment processes.
- Mentor junior team members and provide guidance and training on best practices in DevOps.
Required Qualifications
- BS/MS in Computer Science, Engineering, or a related field
- 4+ years of experience as an Azure DevOps Engineer (or similar role) with experience in backend development.
- Strong understanding of CI/CD principles and practices.
- Expertise in Azure DevOps services, including Azure Pipelines, Azure Repos, and Azure Boards.
- Experience with infrastructure automation tools like Terraform or Ansible.
- Proficient in scripting languages like PowerShell or Python.
- Experience with Linux and Windows server administration.
- Strong understanding of backend development principles and technologies.
- Excellent communication and collaboration skills.
- Ability to work independently and as part of a team.
- Problem-solving and analytical skills.
- Experience with industry frameworks and methodologies: ITIL/Agile/Scrum/DevOps
- Excellent problem-solving, critical thinking, and communication skills.
- Experience working in a product-based company.
What we offer:
- Competitive salary and benefits package
- Opportunity for growth and advancement within the company
- Collaborative, dynamic, and fun work environment
- Possibility to work with cutting-edge technologies and innovative projects
ApnaComplex is one of India's largest and fastest-growing PropTech disruptors within the Society & Apartment Management business. The SaaS-based B2C platform is headquartered out of India's tech start-up hub, Bangalore, with branches in 6 other cities. It currently empowers 3,600 societies, managing over 6 lakh households in over 80 Indian cities, helping them seamlessly run all aspects of large complexes.
ApnaComplex is part of ANAROCK Group. ANAROCK Group is India's leading specialized real estate services company having diversified interests across the real estate value chain.
If it excites you to - drive innovation, create industry-first solutions, build new capabilities ground-up, and work with multiple new technologies, ApnaComplex is the place for you.
Must have-
- Knowledge of Docker
- Knowledge of Terraform
- Knowledge of AWS
Good to have -
- Kubernetes
- Scripting language: PHP/Go Lang and Python
- Webserver knowledge
- Logging and monitoring experience
- Test, build, design, deploy, and maintain continuous integration and continuous delivery processes using tools like Jenkins, Maven, Git, etc.
- Build and maintain highly available production systems.
- Must know how to choose the tools and technologies that best fit the business needs.
- Develop software to integrate with internal back-end systems.
- Investigate and resolve technical issues.
- Problem-solving attitude.
- Ability to automate testing, deployment, and monitoring of code.
- Work in close coordination with the development and operations teams so that the application performs in line with customer expectations.
- Lead and guide the team in identifying and implementing new technologies.
Skills that will help you build a success story with us
- An ability to quickly understand and solve new problems
- Strong interpersonal skills
- Excellent data interpretation
- Context-switching
- Intrinsically motivated
- A tactical and strategic track record for delivering research-driven results
Quick Glances:
- What to look for at ApnaComplex: https://www.apnacomplex.com/why-apnacomplex
- Who are we? A glimpse of ApnaComplex, know us better: https://www.linkedin.com/company/1070467/admin/
- ApnaComplex in the media, visit our media page: https://www.apnacomplex.com/media-buzz
ANAROCK Ethos - Values Over Value:
Our assurance of consistent ethical dealing with clients and partners reflects our motto - Values Over Value.
We value diversity within ANAROCK Group and are committed to offering equal opportunities in employment. We do not discriminate against any team member or applicant for employment based on nationality, race, color, religion, caste, gender identity / expression, sexual orientation, disability, social origin and status, indigenous status, political opinion, age, marital status or any other personal characteristics or status. ANAROCK Group values all talent and will do its utmost to hire, nurture and grow them.
● Manage AWS services and day-to-day cloud operations.
● Work closely with the development and QA teams to make the deployment process smooth, and devise new tools and technologies to automate most of the components.
● Strengthen the infrastructure in terms of reliability (configuring HA, etc.), security (cloud network management, VPC, etc.), and scalability (configuring clusters, load balancers, etc.).
● Expert-level understanding of DB replication, sharding (MySQL DB systems), HA clusters, failovers, and recovery mechanisms.
● Build and maintain CI/CD (continuous integration/deployment) workflows.
● Expert knowledge of AWS EC2, S3, RDS, CloudFront, and other AWS services and products.
● Installation and management of software systems to support the development team, e.g. DB installation and administration, web servers, caching, and other such systems.
Requirements:
● B.Tech or Bachelor's in a related field.
● 2-5 years of hands-on experience with AWS cloud services such as EC2, ECS, CloudWatch, SQS, S3, CloudFront, and Route53.
● Experience setting up CI/CD pipelines and successfully running large-scale systems.
● Experience with source control systems (SVN, Git, etc.) and deployment/build automation tools like Jenkins, Bamboo, Ansible, etc.
● Good experience and understanding of Linux/Unix-based systems, with hands-on experience in networking, security, and administration.
● At least 1-2 years of experience with shell/Python/Perl scripting; experience with Bash scripting is an added advantage.
● Experience with automation tasks such as automated backups, configuring failovers, and automating deployment-related processes is a must-have.
● Good to have: knowledge of setting up the ELK stack; infrastructure-as-code tools like Terraform; working with and automating processes via AWS SDK/CLI tools with scripts.

Platform Services Engineer
DevSecOps Engineer
- Strong systems experience - Linux, networking, cloud, APIs
- Scripting language programming - Shell, Python
- Strong debugging capability
- AWS platform - IAM, Network, EC2, Lambda, S3, CloudWatch
- Knowledge of Terraform, Packer, Ansible, Jenkins
- Observability - Prometheus, InfluxDB, Dynatrace, Grafana, Splunk
- DevSecOps CI/CD - Jenkins
- Microservices
- Security & access management
- Container orchestration a plus - Kubernetes, Docker, etc.
- Big data platforms knowledge (EMR, Databricks, Cloudera) a plus
Implement DevOps capabilities in cloud offerings using CI/CD toolsets and automation
Defining and setting development, test, release, update, and support processes for DevOps operation
Troubleshooting and fixing code bugs
Coordination and communication within the team and with the client team
Selecting and deploying appropriate CI/CD tools
Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD pipeline)
Pre-requisite skills required:
Experience working on Linux-based infrastructure
Experience scripting in at least 2 languages (Bash + Python/Ruby)
Working knowledge of various tools, open-source technologies, and cloud services
Experience with Docker, AWS (EC2, S3, IAM, EKS, Route53), Ansible, Helm, Terraform
Experience building, maintaining, and deploying Kubernetes environments and applications
Experience with build and release automation and dependency management; implementing CI/CD
Clear fundamentals of DNS, HTTP/HTTPS, microservices, monoliths, etc.







