
- Responsible for the entire infrastructure, including production (both bare metal and AWS).
- Manage and maintain production systems and operations, including sysadmin and database activities.
- Improve tools and processes, automate manual efforts, and maintain the health of the system.
- Champion best practices, CI/CD, and metrics-driven development.
- Optimise the company's computing architecture.
- Conduct systems tests for security, performance, and availability.
- Maintain security of the system.
- Develop and maintain design and troubleshooting documentation.
- 7+ years of experience in DevOps/technical operations.
- Extensive experience with scripting languages such as Shell and Python.
- Experience developing and maintaining CI/CD processes for SaaS applications using tools such as Jenkins (a minimal smoke-test gate is sketched after this list).
- Hands-on experience with configuration management tools such as Puppet, SaltStack, and Ansible.
- Hands-on experience building and managing VMs and containers using tools such as Kubernetes and Docker.
- Hands-on experience designing, building, and maintaining cloud-based applications on AWS, Azure, GCP, etc.
- Knowledge of databases (MySQL, NoSQL).
- Knowledge of security/ethical hacking.
- Experience with Elasticsearch, Kibana, and Logstash.
- Experience with Cassandra, Hadoop, or Spark.
- Experience with MongoDB and Hive.
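To make the CI/CD item above concrete, here is a minimal sketch of the kind of post-deploy smoke test a Jenkins pipeline stage might run; the endpoint URL is a hypothetical placeholder.

```python
#!/usr/bin/env python3
"""Post-deploy smoke test of the kind a Jenkins stage might invoke.
A sketch only: the URL below is a hypothetical placeholder."""
import sys
import urllib.request

SERVICE_URL = "https://staging.example.com/healthz"  # hypothetical endpoint

def healthy(url: str, timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError as exc:  # covers URLError, timeouts, connection errors
        print(f"health check failed: {exc}", file=sys.stderr)
        return False

if __name__ == "__main__":
    # A non-zero exit code fails the pipeline stage and blocks promotion.
    sys.exit(0 if healthy(SERVICE_URL) else 1)
```

In a Jenkinsfile this would typically run as its own stage after deployment, with the exit code deciding whether the build proceeds.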

Similar jobs
Job Title : Senior DevOps Engineer
Experience : 5+ Years
Location : Gurgaon, Sector 39
About the Role :
We are seeking an experienced Senior DevOps Engineer to lead our DevOps practices, manage a small team, and build functional, scalable systems that enhance customer experience. You will be responsible for deployments, automation, troubleshooting, integrations, monitoring, and team mentoring while ensuring secure and efficient operations.
Mandatory Skills :
Linux Administration, Shell Scripting, CI/CD (Jenkins), Git/GitHub, Docker, Kubernetes, AWS, Ansible, Database Administration (MariaDB/MySQL/MongoDB), Apache httpd/Tomcat, HAProxy, Nagios, Keepalived, Monitoring/Logging/Alerting, and On-premise Server Management.
Key Responsibilities :
- Implement and manage integrations as per business and customer needs.
- Deploy product updates, fixes, and enhancements.
- Provide Level 2 technical support and resolve production issues.
- Build tools to reduce errors and improve system performance.
- Develop scripts and automation for CI/CD, monitoring, and visualization.
- Perform root cause analysis of incidents and implement long-term fixes.
- Ensure robust monitoring, logging, and alerting systems are in place.
- Manage on-premise servers and ensure smooth deployments.
- Collaborate with development teams for system integration.
- Mentor and guide a team of 3 to 4 engineers.
Required Qualifications & Experience :
- Bachelor’s degree in Computer Science, Software Engineering, IT, or related field (Master’s preferred).
- 5+ years of experience in DevOps engineering with team management exposure.
- Strong expertise in:
- Linux Administration & Shell Scripting
- CI/CD pipelines (Jenkins or similar)
- Git/GitHub, branching, and code repository standards
- Docker, Kubernetes, AWS, Ansible
- Database administration (MariaDB, MySQL, MongoDB)
- Web servers (Apache httpd, Apache Tomcat)
- Networking & Load Balancing tools (HAProxy, Keepalived)
- Monitoring & alerting tools (Nagios, logging frameworks); a Nagios-style check plugin is sketched after this section
- On-premise server management
- Strong debugging, automation, and system troubleshooting skills.
- Knowledge of security best practices including data encryption.
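As an illustration of the Nagios item above, here is a minimal check plugin following the standard Nagios exit-code convention (0 = OK, 1 = WARNING, 2 = CRITICAL); the monitored path and the 80/90% thresholds are illustrative assumptions.

```python
#!/usr/bin/env python3
"""Nagios-style disk-usage check. Thresholds and path are illustrative."""
import shutil
import sys

WARN, CRIT = 80, 90  # percent-used thresholds

def main(path: str = "/") -> int:
    usage = shutil.disk_usage(path)
    pct = usage.used / usage.total * 100
    if pct >= CRIT:
        print(f"DISK CRITICAL - {pct:.1f}% used on {path}")
        return 2  # Nagios: CRITICAL
    if pct >= WARN:
        print(f"DISK WARNING - {pct:.1f}% used on {path}")
        return 1  # Nagios: WARNING
    print(f"DISK OK - {pct:.1f}% used on {path}")
    return 0  # Nagios: OK

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "/"))
```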
Personal Attributes :
- Excellent problem-solving and analytical skills.
- Strong communication and leadership abilities.
- Detail-oriented with a focus on reliability and performance.
- Ability to mentor juniors and collaborate with cross-functional teams.
- Keen interest in emerging DevOps and cloud trends.
Position: DevOps Engineer / Senior DevOps Engineer
Experience: 3 to 6 Years
Key Skills: AWS, Terraform, Docker, Kubernetes, DevSecOps pipeline
Job Description:
- AWS Infrastructure: Architect, deploy, and manage AWS services like EC2, S3, RDS, Lambda, SageMaker, API Gateway, and VPC.
- Networking: Proficient in subnetting, endpoints, NACL, security groups, VPC flow logs, and routing.
- API Management: Design and manage secure, scalable APIs using AWS API Gateway.
- CI/CD Pipelines: Build and maintain CI/CD pipelines with AWS CodePipeline, CodeBuild, and CodeDeploy.
- Automation & IaC: Use Terraform and CloudFormation for automating infrastructure management.
- Containerization & Kubernetes: Expertise in Docker, Kubernetes, and managing containerized deployments.
- Monitoring & Logging: Implement monitoring with AWS CloudWatch, CloudTrail, and other tools.
- Security: Apply AWS security best practices using IAM, KMS, Secrets Manager, and GuardDuty.
- Cost Management: Monitor and optimize AWS usage and costs (a small month-to-date report is sketched after this list).
- Collaboration: Partner with development, QA, and operations teams to enhance productivity and system reliability.
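As a sketch of the cost-management item above, here is a month-to-date per-service spend report via the Cost Explorer API; it assumes boto3 is installed and the caller has ce:GetCostAndUsage permission (and it will return an empty period if run on the first day of a month).

```python
"""Month-to-date AWS spend per service via Cost Explorer (boto3)."""
import datetime as dt

import boto3

ce = boto3.client("ce")
today = dt.date.today()
start = today.replace(day=1)

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": today.isoformat()},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Largest spenders first.
groups = resp["ResultsByTime"][0]["Groups"]
for g in sorted(
    groups, key=lambda g: float(g["Metrics"]["UnblendedCost"]["Amount"]), reverse=True
):
    amount = float(g["Metrics"]["UnblendedCost"]["Amount"])
    print(f'{g["Keys"][0]:<40} ${amount:,.2f}')
```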
We are seeking an experienced Lead DevOps Engineer with deep expertise in Kubernetes infrastructure design and implementation. This role requires someone who can architect, build, and manage enterprise-grade Kubernetes clusters from the ground up. You’ll lead modernization initiatives, shape infrastructure strategy, and work with cutting-edge cloud-native technologies.
🚀 Key Responsibilities
Infrastructure Design & Implementation
- Architect and design enterprise-grade Kubernetes clusters across AWS, Azure, and GCP.
- Build production-ready Kubernetes infrastructure with HA, scalability, and security best practices.
- Implement Infrastructure as Code with Terraform, Helm, and GitOps workflows.
- Set up monitoring, logging, and observability for Kubernetes workloads (see the cluster-health sketch after this list).
- Design and execute backup and disaster recovery strategies for containerized applications.
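By way of illustration for the observability bullet, here is a quick cluster-health snapshot using the official Kubernetes Python client; it assumes `pip install kubernetes` and a working kubeconfig for the target cluster.

```python
"""Cluster-health snapshot: not-Ready nodes and not-Running pods."""
from kubernetes import client, config

config.load_kube_config()  # inside a pod, use config.load_incluster_config()
v1 = client.CoreV1Api()

# Flag nodes whose Ready condition is anything other than True.
for node in v1.list_node().items:
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"), "Unknown"
    )
    if ready != "True":
        print(f"node {node.metadata.name}: Ready={ready}")

# Surface pods stuck outside Running/Succeeded, across all namespaces.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    if pod.status.phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```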
Leadership & Team Management
- Lead a team of 3–4 DevOps engineers, providing technical mentorship.
- Drive best practices in containerization, orchestration, and cloud-native development.
- Collaborate with development teams to optimize deployment strategies.
- Conduct code reviews and maintain infrastructure quality standards.
- Build knowledge-sharing culture with documentation and training.
Operational Excellence
- Manage and scale CI/CD pipelines integrated with Kubernetes.
- Implement security policies (RBAC, network policies, container scanning).
- Optimize cluster performance and cost-efficiency.
- Automate operations to minimize manual interventions.
- Ensure 99.9% uptime for production workloads (the error-budget arithmetic is sketched after this list).
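For context on the 99.9% target, here is the error budget it implies; a back-of-the-envelope sketch, since a real SLO would also pin down the measurement window and what counts as downtime.

```python
"""Downtime allowed by a 99.9% availability target, per window."""
SLO = 0.999
for window, hours in [("day", 24), ("30-day month", 30 * 24), ("year", 365 * 24)]:
    budget_min = hours * 60 * (1 - SLO)
    print(f"per {window}: ~{budget_min:.1f} minutes of downtime")
# per day: ~1.4 min; per 30-day month: ~43.2 min; per year: ~525.6 min
```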
Strategic Planning
- Define the infrastructure roadmap aligned with business needs.
- Evaluate and adopt new cloud-native technologies.
- Perform capacity planning and cloud cost optimization.
- Drive risk assessment and mitigation strategies.
🛠 Must-Have Technical Skills
Kubernetes Expertise
- 6+ years of hands-on Kubernetes experience in production.
- Deep knowledge of Kubernetes architecture (etcd, API server, scheduler, kubelet).
- Advanced Kubernetes networking (CNI, Ingress, Service mesh).
- Strong grasp of Kubernetes storage (CSI, PVs, StorageClasses).
- Experience with Operators and Custom Resource Definitions (CRDs).
Infrastructure as Code
- Terraform (advanced proficiency).
- Helm (developing and managing complex charts).
- Config management tools (Ansible, Chef, Puppet).
- GitOps workflows (ArgoCD, Flux).
Cloud Platforms
- Hands-on experience with at least 2 of the following:
- AWS: EKS, EC2, VPC, IAM, CloudFormation
- Azure: AKS, VNets, ARM templates
- GCP: GKE, Compute Engine, Deployment Manager
CI/CD & DevOps Tools
- Jenkins, GitLab CI, GitHub Actions, Azure DevOps
- Docker (advanced optimization and security practices)
- Container registries (ECR, ACR, GCR, Docker Hub)
- Strong Git workflows and branching strategies
Monitoring & Observability
- Prometheus & Grafana (metrics and dashboards); a Prometheus API query is sketched after this list
- ELK/EFK stack (centralized logging)
- Jaeger/Zipkin (tracing)
- AlertManager (intelligent alerting)
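To ground the Prometheus bullet, here is a minimal query against the Prometheus HTTP API that lists scrape targets currently down; it assumes `requests` is installed and that the server address (a placeholder here) is reachable.

```python
"""List scrape targets that are down, via Prometheus's HTTP API."""
import requests

PROM_URL = "http://prometheus.example.internal:9090"  # hypothetical address

resp = requests.get(
    f"{PROM_URL}/api/v1/query",
    params={"query": "up == 0"},  # instant vector of unscrapable targets
    timeout=5,
)
resp.raise_for_status()

for sample in resp.json()["data"]["result"]:
    labels = sample["metric"]
    print(f"down: job={labels.get('job')} instance={labels.get('instance')}")
```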
💡 Good-to-Have Skills
- Service Mesh (Istio, Linkerd, Consul)
- Serverless (Knative, OpenFaaS, AWS Lambda)
- Running databases in Kubernetes (Postgres, MongoDB operators)
- ML pipelines (Kubeflow, MLflow)
- Security tools (Aqua, Twistlock, Falco, OPA)
- Compliance (SOC2, PCI-DSS, GDPR)
- Python/Go for automation
- Advanced Shell scripting (Bash/PowerShell)
🎓 Qualifications
- Bachelor’s in Computer Science, Engineering, or related field.
- Certifications (preferred):
- Certified Kubernetes Administrator (CKA)
- Certified Kubernetes Application Developer (CKAD)
- Cloud provider certifications (AWS/Azure/GCP).
Experience
- 6–7 years of DevOps/Infrastructure engineering.
- 4+ years of Kubernetes in production.
- 2+ years in a lead role managing teams.
- Experience with large-scale distributed systems and microservices.
Lead DevSecOps Engineer
Location: Pune, India (In-office) | Experience: 3–5 years | Type: Full-time
Apply here → https://lnk.ink/CLqe2
About FlytBase:
FlytBase is a Physical AI platform powering autonomous drones and robots across industrial sites. Our software enables 24/7 operations in critical infrastructure like solar farms, ports, oil refineries, and more.
We're building intelligent autonomy — not just automation — and security is core to that vision.
What You’ll Own
You’ll be leading and building the backbone of our AI-native drone orchestration platform — used by global industrial giants for autonomous operations.
Expect to:
- Design and manage multi-region, multi-cloud infrastructure (AWS, Kubernetes, Terraform, Docker)
- Own infrastructure provisioning through GitOps, Ansible, Helm, and IaC
- Set up observability stacks (Prometheus, Grafana) and write custom alerting rules
- Build for Zero Trust security — logs, secrets, audits, access policies
- Lead incident response, postmortems, and playbooks to reduce MTTR
- Automate and secure CI/CD pipelines with SAST, DAST, image hardening
- Script your way out of toil using Python, Bash, or LLM-based agents (one such toil-killer is sketched after this list)
- Work alongside dev, platform, and product teams to ship secure, scalable systems
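One example of the "script your way out of toil" idea: a stdlib-only certificate-expiry sweep. The host list and 30-day threshold are illustrative assumptions.

```python
#!/usr/bin/env python3
"""Warn when TLS certificates approach expiry. Stdlib only."""
import datetime as dt
import socket
import ssl

HOSTS = ["example.com"]  # hypothetical endpoints to watch
THRESHOLD = dt.timedelta(days=30)

for host in HOSTS:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = dt.datetime.utcfromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]))
    left = expires - dt.datetime.utcnow()
    status = "RENEW SOON" if left < THRESHOLD else "ok"
    print(f"{host}: expires in {left.days}d ({status})")
```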
What We’re Looking For:
You’ve probably done a lot of this already:
- 3–5+ years in DevOps / DevSecOps for high-availability SaaS or product infra
- Hands-on with Kubernetes, Terraform, Docker, and cloud-native tooling
- Strong in Linux internals, OS hardening, and network security
- Built and owned CI/CD pipelines, IaC, and automated releases
- Written scripts (Python/Bash) that saved your team hours
- Familiar with SOC 2, ISO 27001, threat detection, and compliance work
Bonus if you’ve:
- Played with LLMs or AI agents to streamline ops, or built bots that monitor, patch, or auto-deploy.
What It Means to Be a Flyter
- AI-native instincts: You don’t just use AI — you think in it. Your terminal window has a co-pilot.
- Ownership without oversight: You own outcomes, not tasks. No one micromanages you here.
- Joy in complexity: Security + infra + scale = your happy place.
- Radical candor: You give and receive sharp feedback early — and grow faster because of it.
- Loops over lines: We prioritize continuous feedback, iteration, and learning over one-way execution or rigid, linear planning.
- H3: Happy. Healthy. High-Performing. We believe long-term performance stems from an environment where you feel emotionally fulfilled, physically well, and deeply motivated.
- Systems > Heroics: We value well-designed, repeatable systems over last-minute firefighting or one-off effort.
Perks:
▪ Unlimited leave & flexible hours
▪ Top-tier health coverage
▪ Budget for AI tools, courses
▪ International deployments
▪ ESOPs and high-agency team culture
Apply here → https://lnk.ink/CLqe2
Key Responsibilities
• As a part of the DevOps team, you will be responsible for the configuration, optimization, documentation, and support of the CI/CD components.
• Creating and managing build and release pipelines with Azure DevOps and Jenkins.
• Assist in planning and reviewing application architecture and design to promote an efficient deployment process.
• Troubleshoot server performance issues & handle the continuous integration system.
• Automate infrastructure provisioning using ARM Templates and Terraform (a thin CLI-wrapper sketch follows this list).
• Monitor and support deployments across cloud-based and on-premises infrastructure.
• Diagnose and develop root-cause solutions for failures and performance issues in the production environment.
• Deploy and manage infrastructure for production applications.
• Configure security best practices for applications and infrastructure.
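For the Terraform item above, here is a thin wrapper of the sort a pipeline job might call; a sketch assuming `terraform` is on PATH and the working directory already contains configuration.

```python
"""init/plan/apply wrapper around the Terraform CLI."""
import subprocess
import sys

def tf(*args: str) -> None:
    print("+ terraform", *args)
    subprocess.run(["terraform", *args], check=True)  # raise on non-zero exit

def main(apply: bool) -> None:
    tf("init", "-input=false")
    tf("plan", "-input=false", "-out=tfplan")
    if apply:
        tf("apply", "-input=false", "tfplan")  # apply exactly the saved plan
    else:
        print("plan written to ./tfplan; review before applying")

if __name__ == "__main__":
    main(apply="--apply" in sys.argv)
```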
Essential Requirements
• Good hands-on experience with cloud platforms such as Azure, AWS, and GCP (preferably Azure).
• Strong knowledge of CI/CD principles.
• Strong work experience with CI/CD implementation tools such as Azure DevOps, TeamCity, Octopus Deploy, AWS CodeDeploy, and Jenkins.
• Experience writing automation scripts with PowerShell, Bash, Python, etc.
• Experience with GitHub, JIRA, Confluence, and continuous integration (CI) systems.
• Understanding of secure DevOps practices
Good to Have:
• Knowledge of scripting languages such as PowerShell and Bash.
• Experience with project management and workflow practices and tools such as Agile, Scrum/Kanban, and Jira.
• Experience with build technologies and cloud services (Jenkins, TeamCity, Azure DevOps, Bamboo, AWS CodeDeploy).
• Strong communication skills and the ability to explain protocols and processes to the team and management.
• Must be able to handle multiple tasks and adapt to a constantly changing environment.
• Must have a good understanding of the SDLC.
• Knowledge of Linux, Windows Server, monitoring tools, and shell scripting.
• Self-motivated, with a demonstrated ability to pick up new technologies with minimal supervision.
• Organized and flexible, with the analytical ability to solve problems creatively.
About the Company
Blue Sky Analytics is a climate-tech startup that combines the power of AI and satellite data to help create a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental datasets spanning water, land, heat, and more!
We are looking for a DevOps Engineer who can help us build the infrastructure required to handle huge datasets at scale. Primarily, you will work with AWS services like EC2, Lambda, ECS, containers, etc. As part of our core development crew, you'll figure out how to deploy applications with high availability and fault tolerance, along with a monitoring solution that raises alerts across multiple microservices and pipelines. Come save the planet with us!
Your Role
- Scale applications up and down on command (a minimal scaling sketch follows this list).
- Manage a cluster of microservices talking to each other.
- Build pipelines for huge data ingestion, processing, and dissemination.
- Optimize services for low cost and high efficiency.
- Maintain a highly available, scalable PostgreSQL database cluster.
- Maintain alerting and monitoring using Prometheus, Grafana, and Elasticsearch.
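To make the scaling bullet concrete, here is a minimal "go up and down on command" sketch for an ECS service via boto3; the cluster and service names are hypothetical.

```python
"""Scale an ECS service to a requested task count."""
import sys

import boto3

CLUSTER = "data-pipeline"   # hypothetical cluster name
SERVICE = "tile-processor"  # hypothetical service name

def scale(count: int) -> None:
    ecs = boto3.client("ecs")
    ecs.update_service(cluster=CLUSTER, service=SERVICE, desiredCount=count)
    print(f"{CLUSTER}/{SERVICE} desiredCount -> {count}")

if __name__ == "__main__":
    scale(int(sys.argv[1]))  # e.g. `python scale.py 10` before a big ingest
```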
Requirements
- 1-4 years of work experience.
- Strong emphasis on Infrastructure as Code: CloudFormation, Terraform, Ansible.
- CI/CD concepts and implementation using CodePipeline and GitHub Actions.
- Strong command of AWS services like IAM, EC2, ECS, Lambda, S3, etc.
- Advanced containerization: Docker, Kubernetes, ECS.
- Experience with managed services such as database clusters and distributed services on EC2.
- Self-starters and curious folks who don't need to be micromanaged.
- Passionate about Blue Sky Climate Action and working with data at scale.
Benefits
- Work from anywhere: Work by the beach or from the mountains.
- Open source at heart: We are building a community whose work you can use, contribute to, and collaborate on.
- Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
- Flexible timings: Fit your work around your lifestyle.
- Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
- Work Machine of choice: Buy a device and own it after completing a year at BSA.
- Quarterly Retreats: Yes, there's work, but then there's all the non-work fun too, aka the retreat!
- Yearly vacations: Take time off to rest and get ready for the next big assignment by using your paid leave.
Work Location: Andheri East, Mumbai
Experience: 2-4 Years
About the Role:
At Bizongo, we believe in delivering excellence that drives business efficiency for our customers. As Software Engineers at Bizongo, you will work on developing the next generation of technology that will shape how businesses run their processes and achieve process excellence. We are looking for engineers who can bring fresh ideas, function at scale, and are passionate about technology. We expect our engineers to be multidimensional, to display leadership, and to have a zeal for learning and experimentation as we push business efficiency through our technology. As a DevOps Engineer, you should have hands-on experience with public clouds, Linux, and programming/scripting.
Job Responsibilities:
Gather and analyse cloud infrastructure requirements
Automate obsessively
Support existing infrastructure, analyse problem areas and come up with solutions
Optimise stack performance and costs
Write code for new and existing tools
Must-haves:
Experience with DevOps techniques and philosophies
Passion to work in an exciting fast paced environment
Self-starter who can implement with minimal guidance
Good conceptual understanding of the building blocks of modern web-based infrastructure: DNS, TCP/IP, Networking, HTTP, SSL/TLS
Strong Linux skills
Experience with automation of code builds and deployments
Experience with Nginx configuration for dynamic web applications
Help with cost optimisation of infrastructure
Assist development teams with any infrastructure needs
Strong command line skills to automate routine system administration tasks
An eye for monitoring: the ideal candidate can look at complex infrastructure and figure out what to monitor and how (a tiny endpoint probe is sketched after this list)
Databases: MySQL, PostgreSQL, and cloud-based relational database solutions like Amazon RDS; database replication and scalability
High Availability: Load Balancing (ELB), Reverse Proxies, CDNs etc.
Scripting Languages: Python/Bash/Shell/Perl
Version control with Git. Exposure to various branching workflows and collaborative development
Virtualisation and Docker
AWS core components (or their GCP/Azure equivalents) and their management: EC2, ELB, NAT, VPC, IAM roles and policies, EBS and S3, CloudFormation, ElastiCache, Route53, etc.
Configuration Management: SaltStack/Ansible/Puppet
CI/CD automation experience
Understanding of Agile development practices is a plus
Bachelor’s degree in Information Science / Information Technology, Computer Science, Engineering, Mathematics, Physics, or a related field
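As a starting point for the "eye for monitoring" item, here is a stdlib probe that reports status and latency for a set of endpoints; the URLs and the 500 ms threshold are illustrative, and a real setup would feed results into alerting rather than print them.

```python
#!/usr/bin/env python3
"""Status/latency probe for a handful of HTTP endpoints."""
import time
import urllib.request

ENDPOINTS = {  # hypothetical services behind the load balancer
    "web": "https://example.com/health",
    "api": "https://api.example.com/health",
}
SLOW_MS = 500

for name, url in ENDPOINTS.items():
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            ms = (time.monotonic() - start) * 1000
            flag = " SLOW" if ms > SLOW_MS else ""
            print(f"{name}: {resp.status} in {ms:.0f}ms{flag}")
    except OSError as exc:
        print(f"{name}: DOWN ({exc})")
```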
Why work with us?
Opportunity to work with India's leading B2B e-commerce venture. The company grew its revenue by more than 12x last year to reach a 200 Cr annual revenue run rate in March 2018. We invite you to be part of the upcoming growth story of the B2B sector through Bizongo
Having formed a strong base in the e-commerce, supply chain, and retail industries, Bizongo is now exploring the FMCG, food processing, engineering, consumer durables, auto ancillary, and chemical industries
Design and Development launched at Bizongo recently and is seeing tremendous growth as a steady, additional revenue stream
Opportunity to work with some of the most dynamic individuals in Asia, recognized under Forbes 30 Under 30, and with industry stalwarts from companies like Microsoft, PayPal, Gravitas, Parksons, ITC, Snapdeal, FedEx, Deloitte, and HUL
Working at Bizongo means being part of a dynamic start-up with some of the most enthusiastic, hardworking, and intelligent people, in a fast-paced and electrifying environment
Bizongo was awarded Most Disruptive Procurement Startup of the Year 2017
Because the company is expanding every day and exploring newer avenues in the market, every employee grows with it
The position provides a chance to build on existing talents, learn new skills, and gain valuable experience in the field of e-commerce
About the Company:
Company Website: https://www.bizongo.com/
Any solution worth anything is unfailingly preceded by a clear articulation of a problem worth solving. Even a modest study of the Indian packaging industry reveals the enormous fragmentation, chaos, and rampant unreliability pervading the ecosystem. When businesses are unable to cope even with these basic challenges, how can they even think of materializing an eco-friendly, resource-efficient packaging economy? These are hardcore problems with real-world consequences that our country is hard-pressed to solve.
Bizongo was conceived as an answer to these first-level challenges of disorganization in the industry. We employed technology to build a business model that can streamline the packaging value chain and has enormous potential to scale sustainably. Our potential to fill this vacuum was recognized early on by Accel Partners and IDG Ventures, who jointly led our Series A funding. Most recently, B Capital Group, a global tech fund led by Facebook co-founder Eduardo Saverin, invested in our technological capabilities when it jointly led our Series B funding with IFC.
The International Finance Corporation (IFC), the private-sector investment arm of the World Bank, cited our positive ecosystem impact towards the network of 30,000 SMEs operating in the packaging industry, as one of the core reasons for their investment decision. Beyond these bastions of support, we are extremely grateful to have found validation by various authoritative institutions including Forbes 30 Under 30 Asia. Being the only major B2B player in the country with such an unprecedented model has lent us enormous scope of experimentation in our efforts to break new grounds. Dreaming and learning together thus, we have grown from a team of 3, founded in 2015, to a 250+ strong family with office presence across Mumbai, Gurgaon and Bengaluru. So those who strive for opportunities to rise above their own limitations, who seek to build an ecosystem of positive change and to find remarkable solutions to challenges where none existed before, such creators would find a welcome abode in Bizongo.
• At least 4 years of hands-on experience with cloud infrastructure on GCP
• Hands-on experience with Kubernetes is mandatory
• Exposure to configuration management and orchestration tools at scale (e.g. Terraform, Ansible, Packer)
• Knowledge of and hands-on experience with DevOps tools (e.g. Jenkins, Groovy, and Gradle); a Jenkins job-trigger sketch follows this list
• Knowledge of and hands-on experience with platforms such as GitLab, CircleCI, and Spinnaker
• Familiarity with monitoring and alerting tools (e.g. CloudWatch, ELK stack, Prometheus)
• Proven ability to work independently or as an integral member of a team
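For the Jenkins item above, here is a job-trigger sketch using the third-party python-jenkins package (`pip install python-jenkins`); the controller URL, job name, and credentials are illustrative assumptions.

```python
"""Trigger a parameterized Jenkins job and report its last build."""
import jenkins  # third-party: python-jenkins

server = jenkins.Jenkins(
    "https://jenkins.example.internal",  # hypothetical controller URL
    username="ci-bot",
    password="api-token",  # use an API token rather than a real password
)
server.build_job("deploy-to-staging", {"GIT_REF": "main"})

info = server.get_job_info("deploy-to-staging")
last = info.get("lastBuild")
print("last build:", last["number"] if last else "none yet")
```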
Preferable Skills:
• Familiarity with standard IT security practices such as encryption, credential management, and key management.
• Proven experience with coding languages (Java, Python) to support DevOps operations and cloud transformation.
• Familiarity with web standards (e.g. REST APIs, web security mechanisms).
• Hands-on experience with GCP.
• Experience in performance tuning, service outage management, and troubleshooting.
Attributes:
• Good verbal and written communication skills.
• Exceptional leadership, time management, and organizational skills; the ability to operate independently and make decisions with little direct supervision.
