
We are seeking an experienced Lead DevOps Engineer with deep expertise in Kubernetes infrastructure design and implementation. This role requires someone who can architect, build, and manage enterprise-grade Kubernetes clusters from the ground up. You'll lead modernization initiatives, shape infrastructure strategy, and work with cutting-edge cloud-native technologies.
Key Responsibilities
Infrastructure Design & Implementation
- Architect and design enterprise-grade Kubernetes clusters across AWS, Azure, and GCP.
- Build production-ready Kubernetes infrastructure with HA, scalability, and security best practices.
- Implement Infrastructure as Code with Terraform, Helm, and GitOps workflows.
- Set up monitoring, logging, and observability for Kubernetes workloads.
- Design and execute backup and disaster recovery strategies for containerized applications.
Leadership & Team Management
- Lead a team of 3-4 DevOps engineers, providing technical mentorship.
- Drive best practices in containerization, orchestration, and cloud-native development.
- Collaborate with development teams to optimize deployment strategies.
- Conduct code reviews and maintain infrastructure quality standards.
- Build knowledge-sharing culture with documentation and training.
Operational Excellence
- Manage and scale CI/CD pipelines integrated with Kubernetes.
- Implement security policies (RBAC, network policies, container scanning).
- Optimize cluster performance and cost-efficiency.
- Automate operations to minimize manual interventions.
- Ensure 99.9% uptime for production workloads.
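The 99.9% uptime target above implies a concrete monthly error budget. A minimal sketch of that arithmetic (the function name and the 30-day window are illustrative, not part of the role):

```python
def allowed_downtime_minutes(slo: float, window_days: int) -> float:
    """Minutes of downtime permitted by an availability SLO over a window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo)

# A 99.9% ("three nines") SLO over a 30-day month leaves roughly 43 minutes
# of downtime budget; everything beyond that breaches the target.
budget = allowed_downtime_minutes(0.999, 30)  # 43.2 minutes
```

In practice this budget is what alerting thresholds and maintenance windows are planned against.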
Strategic Planning
- Define the infrastructure roadmap aligned with business needs.
- Evaluate and adopt new cloud-native technologies.
- Perform capacity planning and cloud cost optimization.
- Drive risk assessment and mitigation strategies.
Must-Have Technical Skills
Kubernetes Expertise
- 6+ years of hands-on Kubernetes experience in production.
- Deep knowledge of Kubernetes architecture (etcd, API server, scheduler, kubelet).
- Advanced Kubernetes networking (CNI, Ingress, Service mesh).
- Strong grasp of Kubernetes storage (CSI, PVs, StorageClasses).
- Experience with Operators and Custom Resource Definitions (CRDs).
Infrastructure as Code
- Terraform (advanced proficiency).
- Helm (developing and managing complex charts).
- Configuration management tools (Ansible, Chef, Puppet).
- GitOps workflows (ArgoCD, Flux).
Cloud Platforms
- Hands-on experience with at least two of the following:
  - AWS: EKS, EC2, VPC, IAM, CloudFormation
  - Azure: AKS, VNets, ARM templates
  - GCP: GKE, Compute Engine, Deployment Manager
CI/CD & DevOps Tools
- Jenkins, GitLab CI, GitHub Actions, Azure DevOps
- Docker (advanced optimization and security practices)
- Container registries (ECR, ACR, GCR, Docker Hub)
- Strong Git workflows and branching strategies
Monitoring & Observability
- Prometheus & Grafana (metrics and dashboards)
- ELK/EFK stack (centralized logging)
- Jaeger/Zipkin (tracing)
- AlertManager (intelligent alerting)
Good-to-Have Skills
- Service Mesh (Istio, Linkerd, Consul)
- Serverless (Knative, OpenFaaS, AWS Lambda)
- Running databases in Kubernetes (Postgres, MongoDB operators)
- ML pipelines (Kubeflow, MLflow)
- Security tools (Aqua, Twistlock, Falco, OPA)
- Compliance (SOC2, PCI-DSS, GDPR)
- Python/Go for automation
- Advanced shell scripting (Bash/PowerShell)
Qualifications
- Bachelor's in Computer Science, Engineering, or a related field.
- Certifications (preferred):
  - Certified Kubernetes Administrator (CKA)
  - Certified Kubernetes Application Developer (CKAD)
  - Cloud provider certifications (AWS/Azure/GCP)
Experience
- 6-7 years of DevOps/infrastructure engineering.
- 4+ years of Kubernetes in production.
- 2+ years in a lead role managing teams.
- Experience with large-scale distributed systems and microservices.

About CoffeeBeans
CoffeeBeans Consulting is a technology partner dedicated to driving business transformation. With deep expertise in Cloud, Data, MLOps, AI, infrastructure services, application modernization, Blockchain, and Big Data, we help organizations tackle complex challenges and seize growth opportunities in today's fast-paced digital landscape. We're more than a tech service provider; we're a catalyst for meaningful change.
CoffeeBeans Consulting, founded in 2017, is a high-end technology consulting firm that helps businesses build better products and improve delivery quality through a mix of engineering, product, and process expertise. They work across domains to deliver scalable backend systems, data engineering pipelines, and AI-driven solutions, often using modern stacks like Java, Spring Boot, Python, Spark, Snowflake, Azure, and AWS. With a strong focus on clean architecture, performance optimization, and practical problem-solving, CoffeeBeans partners with clients on both internal and external projects, driving meaningful business outcomes through tech excellence.