
- Automate deployments of infrastructure components and repetitive tasks.
- Drive changes strictly via the infrastructure-as-code methodology.
- Promote the use of source control for all changes including application and system-level changes.
- Design and implement systems that recover automatically after failure events.
- Participate in system sizing and capacity planning of various components.
- Create and maintain technical documents such as installation/upgrade MOPs.
- Coordinate & collaborate with internal teams to facilitate installation & upgrades of systems.
- Support 24x7 availability for corporate sites & tools.
- Participate in rotating on-call schedules.
- Research, evaluate, and select new tools and technologies.
- Cloud computing – AWS, OCI, OpenStack
- Automation/Configuration management tools such as Terraform & Chef
- Atlassian tools administration (JIRA, Confluence, Bamboo, Bitbucket)
- Scripting languages - Ruby, Python, Bash
- Systems administration experience – Linux (Red Hat), Mac, Windows
- SCM systems - Git
- Build tools - Maven, Gradle, Ant, Make
- Networking concepts - TCP/IP, Load balancing, Firewall
- High-Availability, Redundancy & Failover concepts
- SQL scripting & queries - DML, DDL, stored procedures
- Decisiveness and the ability to work under pressure
- Ability to prioritize workload and multi-task
- Excellent written and verbal communication skills
- Database systems – Postgres, Oracle, or other RDBMS
- Mac automation tools - JAMF or other
- Atlassian Datacenter products
- Project management skills
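The high-availability, redundancy, and failover concepts in the list above can be illustrated with a minimal sketch. The endpoint names and the request callable below are hypothetical placeholders, not part of any specific product:

```python
# Minimal failover sketch: try the primary endpoint first, then fall back
# to replicas in priority order. Endpoint names are made-up placeholders.

def call_with_failover(endpoints, request_fn):
    """Try each endpoint in priority order; return the first success."""
    last_error = None
    for endpoint in endpoints:
        try:
            return request_fn(endpoint)
        except ConnectionError as exc:
            last_error = exc  # record the failure and try the next endpoint
    raise RuntimeError(f"all endpoints failed: {last_error}")


# Example: the primary is down, the secondary answers.
def fake_request(endpoint):
    if endpoint == "primary":
        raise ConnectionError("primary unreachable")
    return f"response from {endpoint}"

result = call_with_failover(["primary", "secondary"], fake_request)
print(result)  # response from secondary
```

Real systems layer health checks, timeouts, and circuit breakers on top of this basic retry-then-fail-over shape.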
Qualifications
- 3+ years of hands-on experience in the field or related area
- Requires an MS or BS in Computer Science or an equivalent field

About the Role
We are seeking an accomplished DevOps Lead with 12+ years of experience in cloud infrastructure, automation, blockchain, and CI/CD processes. The DevOps Lead will play a pivotal role in architecting scalable cloud environments, driving automation, ensuring secure deployments, and enabling efficient software delivery pipelines. The role involves working with AWS, Huawei Cloud, Kubernetes, Terraform, blockchain-based infrastructure, and modern DevOps toolchains while providing leadership, technical guidance, and client-facing communication.
Key Responsibilities
Leadership & Team Management
● Lead, mentor, and grow a team of DevOps engineers, setting technical direction and ensuring adherence to best practices.
● Facilitate collaboration across engineering, QA, security, and blockchain development teams.
● Act as the primary technical liaison with clients, managing expectations, requirements, and solution delivery.
Infrastructure Automation & Management
● Architect, implement, and manage infrastructure as code (IaC) using Terraform across multi-cloud environments.
● Standardize environments across AWS, Digital Ocean, Huawei Cloud with a focus on scalability, reliability, and security.
● Manage provisioning, scaling, monitoring, and cost optimization of infrastructure resources.
CI/CD & Automation
● Build, maintain, and optimize CI/CD pipelines supporting multiple applications and microservices.
● Integrate automated testing, static code analysis, and security scans into the pipelines.
● Implement blue-green / canary deployments and ensure zero downtime release strategies.
● Promote DevSecOps by embedding security policies into every phase of the delivery pipeline.
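One way to reason about the canary strategy mentioned above: route a small percentage of traffic to the new version and only ramp up if the observed error rate stays under a threshold. A simplified sketch; the step sizes and 1% threshold are illustrative assumptions, not a standard:

```python
# Simplified canary rollout gate: ramp traffic to the new version in steps,
# rolling back if the canary's error rate exceeds a threshold at any step.
# Step sizes and the 1% threshold are illustrative choices.

def canary_rollout(error_rate_at, steps=(5, 25, 50, 100), max_error_rate=0.01):
    """Return ('promoted', 100) on success or ('rolled_back', pct) on failure.

    error_rate_at(pct) should return the error rate observed while pct%
    of traffic is routed to the canary.
    """
    for pct in steps:
        if error_rate_at(pct) > max_error_rate:
            return ("rolled_back", pct)  # shift traffic back to stable
    return ("promoted", 100)


# Healthy canary: error rate stays at 0.2% through every step.
assert canary_rollout(lambda pct: 0.002) == ("promoted", 100)
# Unhealthy canary: errors spike once it takes 25% of traffic.
assert canary_rollout(lambda pct: 0.05 if pct >= 25 else 0.002) == ("rolled_back", 25)
```

In practice the error-rate probe would query a metrics backend (e.g. Prometheus) and the traffic split would be driven by the load balancer or service mesh.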
Containerization & Orchestration
● Deploy, manage, and monitor applications on Kubernetes clusters (EKS, CCE, or equivalent).
● Utilize Helm charts, Kustomize, and operators for environment consistency.
● Optimize container performance and manage networking, storage, and secrets.
Monitoring, Logging & Incident Response
● Implement and manage monitoring and alerting solutions (Prometheus, Grafana, ELK, CloudWatch, Loki).
● Define SLOs, SLIs, and SLAs for production systems.
● Lead incident response, root cause analysis, and implement preventative measures.
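Defining SLOs, as listed above, usually goes hand in hand with computing the corresponding error budget: the downtime an availability target permits over a window. A quick sketch of the arithmetic:

```python
# Error budget: the downtime a given availability SLO permits per window.
# A 99.9% SLO over 30 days allows (1 - 0.999) * 30 * 24 * 60 minutes down.

def error_budget_minutes(slo, window_days=30):
    """Minutes of allowed downtime for an availability SLO over a window."""
    return (1.0 - slo) * window_days * 24 * 60

# 99.9% over 30 days -> 43.2 minutes of budget.
assert abs(error_budget_minutes(0.999) - 43.2) < 1e-6
# 99.99% over 30 days -> about 4.32 minutes.
assert abs(error_budget_minutes(0.9999) - 4.32) < 1e-6
```

Burn-rate alerting then compares actual downtime consumed against this budget rather than alerting on every individual failure.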
Governance, Security & Compliance
● Implement best practices for secrets management, key rotation, and role-based access control.
● Integrate vulnerability scanning and security audits into pipelines.
Required Skills & Qualifications
● 12+ years of experience in DevOps, with at least 5+ years in a lead capacity.
● Proven expertise with Terraform and IaC across multiple environments.
● Strong hands-on experience with AWS and Huawei Cloud infrastructure services.
● Deep expertise in Kubernetes cluster administration, scaling, monitoring, and networking.
● Advanced experience designing CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI, or similar.
● Solid background in automated deployments, configuration management, and version control (Git, Ansible, Puppet, or Chef).
● Strong scripting and automation skills (Python, Bash, Go, or similar).
● Proficiency with monitoring/observability tools (Prometheus, Grafana, ELK, CloudWatch, Datadog).
● Strong understanding of blockchain infrastructure, node operations, staking setups, and deployment automation.
● Knowledge of container security, network policies, and zero-trust principles.
● Excellent communication, client handling, and stakeholder management skills with proven ability to present complex DevOps concepts to non-technical audiences.
● Ability to design and maintain highly available, scalable, and fault-tolerant systems in production environments.
We are looking for two Senior DevOps Engineers to join our Mumbai-based infrastructure team for a critical on-premises deployment project. This role is focused on transforming manual, legacy deployment practices into structured, secure, and compliant processes within a Windows-first, latency-sensitive environment.
The successful candidate will drive the creation of SOPs, deployment pipelines (without containerization), and a staging environment to support a hybrid stack of ASP.NET MVC (.NET), MS SQL Server (replication mode), and Java microservices with MySQL. This position requires on-site presence in Mumbai due to regulatory and infrastructure constraints and will play a key role in ensuring compliance with SEBI, RBI, PFMI, and IOSCO standards.
The key responsibility is to lead deployment modernization efforts in a secure, on-premises environment based in Mumbai. The role involves working with legacy Windows infrastructure, ASP.NET MVC apps, MS SQL replication, and manual deployment processes. No containerization or CI/CD tools are in place, so we’re looking for someone who can establish automation and structure from the ground up.
Mandatory: On-site availability in Mumbai, strong experience with manual Windows-based deployments, regulatory compliance awareness (SEBI/RBI/PFMI).
Duration: 3-6 months | Immediate start
Responsibilities
Provisioning and de-provisioning AWS accounts for internal customers
Work alongside systems and development teams to support the transition and operation of client websites/applications in and out of AWS.
Deploying, managing, and operating AWS environments
Identifying appropriate use of AWS operational best practices
Estimating AWS costs and identifying operational cost control mechanisms
Keep technical documentation up to date
Proactively keep up to date on AWS services and developments
Create automation, where appropriate, to streamline provisioning and de-provisioning processes
Lead certain data/service migration projects
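Estimating AWS costs, as in the responsibilities above, often starts with a simple instance-hours model before reaching for the pricing calculator. A rough sketch; the hourly rates below are made-up placeholders, not real AWS prices:

```python
# Back-of-envelope monthly cost estimate from instance counts and hourly
# rates. The rates here are made-up placeholders; a real estimate should
# use current AWS pricing for the target region.

HOURS_PER_MONTH = 730  # common approximation (365 * 24 / 12)

def estimate_monthly_cost(fleet, hourly_rates):
    """fleet: {instance_type: count}; hourly_rates: {instance_type: $/hr}."""
    return sum(
        count * hourly_rates[itype] * HOURS_PER_MONTH
        for itype, count in fleet.items()
    )

# Hypothetical fleet and rates:
rates = {"web": 0.10, "db": 0.40}
fleet = {"web": 4, "db": 1}
cost = estimate_monthly_cost(fleet, rates)
print(round(cost, 2))  # 4*0.10*730 + 1*0.40*730 = 584.0
```

Cost-control mechanisms (reserved instances, savings plans, right-sizing) then work by reducing either the rate or the instance-hours in this model.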
Job Requirements
Experience provisioning, operating, and maintaining systems running on AWS
Experience with Azure/AWS.
Capabilities to provide AWS operations and deployment guidance and best practices throughout the lifecycle of a project
Experience with application/data migration to/from AWS
Experience with NGINX and the HTTP protocol.
Experience with configuration management software such as Git
Strong analytical and problem-solving skills
Deployment experience using common AWS technologies such as VPC, regionally distributed EC2 instances, and Docker
Ability to work in a collaborative environment
Detail-oriented, strong work ethic and high standard of excellence
A fast learner and achiever who sets high personal goals
Must be able to work on multiple projects and consistently meet project deadlines
We are looking for a DevOps Engineer to manage the interchange of data between the server and the users. Your primary responsibility will be the development of all server-side logic, definition and maintenance of the central database, and ensuring high performance and responsiveness to requests from the frontend. You will also be responsible for integrating the front-end elements built by your co-workers into the application, so a basic understanding of frontend technologies is necessary as well.
What we are looking for
- Must have strong knowledge of Kubernetes and Helm 3
- Should have previous experience Dockerizing applications
- Should be able to automate manual tasks using Shell or Python
- Should have good working knowledge of AWS and GCP clouds
- Should have previous experience working with Bitbucket, GitHub, or another VCS
- Must be able to write Jenkins pipelines and have working knowledge of GitOps and ArgoCD
- Hands-on experience with proactive monitoring using tools such as New Relic, Prometheus, Grafana, and Fluent Bit
- Should have a good understanding of ELK Stack.
- Exposure to Jira, Confluence, and sprint-based workflows
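As an example of automating a manual task with Shell or Python, as required above, here is a sketch that prunes files older than a retention window, a common housekeeping chore. The 7-day retention period and file names are arbitrary choices:

```python
# Prune files older than a retention period from a directory -- a typical
# manual chore worth scripting. The 7-day retention is an arbitrary choice.
import os
import tempfile
import time
from pathlib import Path

def prune_old_files(directory, max_age_days=7, now=None):
    """Delete regular files older than max_age_days; return deleted names."""
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400
    deleted = []
    for path in Path(directory).iterdir():
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            deleted.append(path.name)
    return sorted(deleted)

# Demo against a temporary directory with one stale and one fresh file.
with tempfile.TemporaryDirectory() as tmp:
    old, fresh = Path(tmp, "old.log"), Path(tmp, "fresh.log")
    old.write_text("x")
    fresh.write_text("y")
    stale = time.time() - 10 * 86400
    os.utime(old, (stale, stale))  # backdate mtime by 10 days
    removed = prune_old_files(tmp)
print(removed)  # ['old.log']
```

Scheduled via cron or a systemd timer, a script like this replaces a recurring manual cleanup.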
What you will do:
- Mentor junior DevOps engineers and raise the team’s bar
- Primary owner of tech best practices, tech processes, DevOps initiatives, and timelines
- Oversight of all server environments, from Dev through Production.
- Responsible for the automation and configuration management
- Provides stable environments for quality delivery
- Assist with day-to-day issue management.
- Take lead in containerising microservices
- Develop deployment strategies that allow DevOps engineers to successfully deploy code in any environment.
- Enables the automation of CI/CD
- Implement dashboards to monitor various systems and applications
- 1-3 years of experience in DevOps
- Experience in setting up front end best practices
- Working in high growth startups
- Ownership and a proactive attitude
- Mentorship & upskilling mindset.
What you’ll get:
- Health benefits
- Innovation-driven culture
- Smart and fun team to work with
- Friends for life
Karkinos Healthcare Pvt. Ltd.
The fundamental principle of Karkinos healthcare is democratization of cancer care in a participatory fashion with existing health providers, researchers and technologists. Our vision is to provide millions of cancer patients with affordable and effective treatments and have India become a leader in oncology research. Karkinos will be with the patient every step of the way, to advise them, connect them to the best specialists, and to coordinate their care.
Karkinos has an eclectic founding team with strong technology, healthcare and finance experience, and a panel of eminent clinical advisors in India and abroad.
Roles and Responsibilities:
- Critical role that involves setting up and owning the dev, staging, and production infrastructure for a platform that uses microservices, data warehouses, and a data lake.
- Demonstrate technical leadership with incident handling and troubleshooting.
- Provide software delivery operations and application release management support, including scripting, automated build and deployment processing and process reengineering.
- Build automated deployments for consistent software releases with zero downtime
- Deploy new modules, upgrades and fixes to the production environment.
- Participate in the development of contingency plans including reliable backup and restore procedures.
- Participate in the development of the end-to-end CI/CD process and follow through with other team members to ensure high-quality, predictable delivery
- Work on implementing DevSecOps and GitOps practices
- Work with the Engineering team to integrate more complex testing into a containerized pipeline to ensure minimal regressions
- Build platform tools that rest of the engineering teams can use.
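Reliable backup and restore procedures, as mentioned above, typically include an integrity check: compare a checksum of the data before backup and after restore. A minimal sketch:

```python
# Verify a backup/restore round trip by comparing SHA-256 digests of the
# source data and the restored copy -- a minimal integrity check that any
# backup procedure should include.
import hashlib

def sha256_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_restore(original: bytes, restored: bytes) -> bool:
    """True only if the restored bytes match the original exactly."""
    return sha256_digest(original) == sha256_digest(restored)

payload = b"production database dump"
assert verify_restore(payload, payload)             # faithful restore
assert not verify_restore(payload, payload + b"!")  # corruption detected
```

A contingency plan would run this check automatically after every restore drill, not only after real incidents.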
Apply only if you have:
- 2+ years of software development/technical support experience.
- 1+ years of software development, operations experience deploying and maintaining multi-tiered infrastructure and applications at scale.
- 2+ years of experience in public cloud services: AWS (VPC, EC2, ECS, Lambda, Redshift, S3, API Gateway) or GCP (Kubernetes Engine, Cloud SQL, Cloud Storage, BigQuery, API Gateway, Container Registry), preferably GCP.
- Experience managing infrastructure for distributed NoSQL systems (Kafka/MongoDB), containers, microservices, and deployment/service orchestration using Kubernetes.
- Experience and a good understanding of Kubernetes, service meshes (Istio preferred), API gateways, network proxies, etc.
- Experience setting up infrastructure for central monitoring, with the ability to debug and trace issues
- Experience and deep understanding of Cloud Networking and Security
- Experience in Continuous Integration and Delivery (Jenkins, Maven, GitHub/GitLab).
- Strong scripting language knowledge, such as Python, Shell.
- Experience in Agile development methodologies and release management techniques.
- Excellent analytical and troubleshooting skills.
- Ability to continuously learn and make decisions with minimal supervision. You understand that making mistakes means that you are learning.
Interested Applicants can share their resume at sajal.somani[AT]karkinos[DOT]in with subject as "DevOps Engineer".
What will you do?
- Set up and manage applications with automation, DevOps, and CI/CD tools.
- Deploy, maintain, and monitor infrastructure and services.
- Automate code and infrastructure deployments.
- Tune, optimize, and keep systems up to date.
- Design and implement deployment strategies.
- Set up infrastructure on cloud platforms such as AWS, Azure, Google Cloud, IBM Cloud, and DigitalOcean as required.

Experience: 4-7 years
- Any scripting language: Python, Scala, Shell, or Bash
- Cloud: AWS
- Database: relational (SQL) and non-relational (NoSQL)
- CI/CD tools and version control
Work with the Engineering group to plan ongoing feature development and product maintenance.
• Familiar with virtualization, containers (Kubernetes), core networking, cloud-native development, Platform as a Service (Cloud Foundry), Infrastructure as a Service, distributed systems, etc.
• Implement tools and processes for deployment, monitoring, alerting, automation, and scalability, ensuring maximum availability of server infrastructure
• Able to manage distributed big-data systems such as Hadoop, Storm, MongoDB, Elasticsearch, and Cassandra
• Troubleshoot deployment servers, software installation, license management, etc.
• Plan, coordinate, and implement network security measures to protect data, software, and hardware.
• Monitor the performance of computer systems and networks, and coordinate computer network access and use.
• Design, configure, and test computer hardware, networking software, and operating system software.
• Recommend changes to improve systems and network configurations, and determine the hardware or software requirements such changes entail.
