
We are seeking a passionate DevOps Engineer to help create the next big thing in data analysis and search solutions.
You will join our Cloud infrastructure team supporting our developers. As a DevOps Engineer, you’ll be automating our environment setup and developing infrastructure as code to create a scalable, observable, fault-tolerant, and secure environment. You’ll incorporate open-source tools, automation, and Cloud Native solutions, and will empower our developers with this knowledge.
We will pair you up with world-class talent in cloud and software engineering and provide a position and environment for continuous learning.

• Bachelor’s or master’s degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. At least 3 years of relevant experience in lieu of the above if from a different stream of education.
• Well-versed in DevOps principles & practices, with hands-on DevOps tool-chain integration experience: Release Orchestration & Automation, Source Code & Build Management, Code Quality & Security Management, Behavior Driven Development, Test Driven Development, Continuous Integration, Continuous Delivery, Continuous Deployment, and Operational Monitoring & Management; extra points if you can demonstrate your knowledge with working examples.
• Demonstrable hands-on experience with DevOps tools and platforms, viz. Slack, Jira, Git, Jenkins, Code Quality & Security Plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, Tekton, StackStorm, Prometheus, Grafana, ELK, PagerDuty, VictorOps, etc.
• Well-versed in Virtualization & Containerization; must demonstrate experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
• Well-versed in AWS, Azure, and/or Google Cloud; must demonstrate experience in at least FIVE (5) services offered under AWS, Azure, and/or Google Cloud in any of these categories: Compute, Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, Security, Identity, & Compliance (or) equivalent demonstrable Cloud Platform experience.
• Well-versed, with demonstrable working experience, in API Management, API Gateway, Service Mesh, Identity & Access Management, and Data Protection & Encryption tools & platforms.
• Hands-on programming experience in core Java, Python, JavaScript, and/or Scala; fresh graduates and lateral movers into IT must be able to code in the languages they have studied.
• Well-versed in Storage, Networks, and Storage Networking basics, which will enable you to work in a Cloud environment.
• Well-versed in Network, Data, and Application Security basics, which will enable you to work in a Cloud as well as a Business Applications / API services environment.
• Extra points if you are certified in AWS, Azure, and/or Google Cloud.
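Several of the requirements above (release orchestration, CI/CD) reduce to ordering pipeline stages by their dependencies. A minimal Python sketch, with hypothetical stage names, of how such an ordering can be computed:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical pipeline: each stage maps to the stages it depends on.
stages = {
    "build":   [],
    "test":    ["build"],
    "scan":    ["build"],          # code quality & security checks
    "publish": ["test", "scan"],   # push the artifact to a registry
    "deploy":  ["publish"],
}

def release_order(graph):
    """Return the stages in an order that respects every dependency."""
    return list(TopologicalSorter(graph).static_order())

print(release_order(stages))
```

Real orchestrators (Jenkins, Tekton, Spinnaker) express the same dependency graph in their own pipeline definitions; this only illustrates the underlying idea.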
Role Overview:
As a DevOps Engineer (L2), you will play a key role in designing, implementing, and optimizing infrastructure. You will take ownership of automating processes, improving system reliability, and supporting the development lifecycle.
Key Responsibilities:
- Design and manage scalable, secure, and highly available cloud infrastructure.
- Lead efforts in implementing and optimizing CI/CD pipelines.
- Automate repetitive tasks and develop robust monitoring solutions.
- Ensure the security and compliance of systems, including IAM, VPCs, and network configurations.
- Troubleshoot complex issues across development, staging, and production environments.
- Mentor and guide L1 engineers on best practices.
- Stay updated on emerging DevOps tools and technologies.
- Manage cloud resources efficiently using Infrastructure as Code (IaC) tools like Terraform and AWS CloudFormation.
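The Terraform/CloudFormation workflow mentioned in the last responsibility is essentially declarative reconciliation: diff the desired state against the actual state and apply only the difference. A toy Python sketch of that planning step (resource names are made up; this is not a real Terraform or AWS API):

```python
def plan(desired: dict, actual: dict) -> dict:
    """Compute a Terraform-style plan: what to create, update, or destroy."""
    return {
        "create":  sorted(desired.keys() - actual.keys()),
        "destroy": sorted(actual.keys() - desired.keys()),
        "update":  sorted(k for k in desired.keys() & actual.keys()
                          if desired[k] != actual[k]),
    }

# Hypothetical resources: one security group to add, one to remove.
desired = {"vpc-main": {"cidr": "10.0.0.0/16"}, "web-sg": {"port": 443}}
actual  = {"vpc-main": {"cidr": "10.0.0.0/16"}, "old-sg": {"port": 80}}
print(plan(desired, actual))
# → {'create': ['web-sg'], 'destroy': ['old-sg'], 'update': []}
```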
Qualifications:
- Bachelor’s degree in Computer Science, IT, or a related field.
- Proven experience with CI/CD pipelines and tools like Jenkins, GitLab, or Azure DevOps.
- Advanced knowledge of cloud platforms (AWS, Azure, or GCP) with hands-on experience in deployments, migrations, and optimizations.
- Strong expertise in containerization (Docker) and orchestration tools (Kubernetes).
- Proficiency in scripting languages like Python, Bash, or PowerShell.
- Deep understanding of system security, networking, and load balancing.
- Strong analytical skills and problem-solving mindset.
- Certifications (e.g., AWS Certified Solutions Architect, Kubernetes Administrator) are a plus.
What We Offer:
- Opportunity to work with a cutting-edge tech stack in a product-first company.
- Collaborative and growth-oriented environment.
- Competitive salary and benefits.
- Freedom to innovate and contribute to impactful projects.
● Support staging, QA, and development cloud infrastructures running in 24×7 environments.
● Most of our deployments are in K8s; you will work with the team to run and manage multiple K8s environments 24/7.
● Implement and oversee all aspects of the cloud environment, including provisioning, scale, monitoring, and security.
● Nurture cloud computing expertise internally and externally to drive cloud adoption.
● Implement systems, solutions, and processes needed to manage cloud cost, monitoring, scalability, and redundancy.
● Ensure all cloud solutions adhere to security and compliance best practices.
● Collaborate with Enterprise Architecture, Data Platform, DevOps, and Integration Teams to ensure cloud adoption follows standard best practices.
Requirements:
● Bachelor’s degree in Computer Science, Computer Engineering, or Information Technology, or equivalent experience.
● Experience with Kubernetes on cloud and deployment technologies such as Helm is a major plus.
● Expert-level hands-on experience with AWS (Azure and GCP experience are a big plus).
● 10 or more years of experience.
● Minimum of 5 years’ experience building and supporting cloud solutions.
Bachelor's degree in Computer Science or a related field, or equivalent work experience
Strong understanding of cloud infrastructure and services, such as AWS, Azure, or Google Cloud Platform
Experience with infrastructure as code tools such as Terraform or CloudFormation
Proficiency in scripting languages such as Python, Bash, or PowerShell
Familiarity with DevOps methodologies and tools such as Git, Jenkins, or Ansible
Strong problem-solving and analytical skills
Excellent communication and collaboration skills
Ability to work independently and as part of a team
Willingness to learn new technologies and tools as required
- Building and setting up new development tools and infrastructure
- Understanding the needs of stakeholders and conveying this to developers
- Working on ways to automate and improve development and release processes
- Testing and examining code written by others and analyzing results
- Ensuring that systems are safe and secure against cybersecurity threats
- Identifying technical problems and developing software updates and fixes
- Working with software developers and software engineers to ensure that development follows established processes and works as intended
- Planning out projects and being involved in project management decisions
- BE / MCA / B.Sc-IT / B.Tech in Computer Science or a related field.
- 4+ years of overall development experience.
- Strong understanding of cloud deployment and setup
- Hands-on experience with tools like Jenkins, Gradle, etc.
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate deployment
- Design procedures for system troubleshooting and maintenance
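Deployment scripts like those described above usually need to tolerate transient failures (a registry timeout, a slow health check). A small, hedged sketch of exponential-backoff retry in Python (the operation and delays are illustrative):

```python
import time

def retry(op, attempts=4, base_delay=0.1, sleep=time.sleep):
    """Run `op`; on failure, retry with exponentially growing delays."""
    for i in range(attempts):
        try:
            return op()
        except Exception:
            if i == attempts - 1:
                raise                      # out of attempts: surface the error
            sleep(base_delay * (2 ** i))   # 0.1s, 0.2s, 0.4s, ...
```

Production scripts would typically also cap the delay, add jitter, and retry only on error types known to be transient.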
Skills and Qualifications:
- Proficient with git and git workflows
- Working knowledge of databases and SQL
- Problem-solving attitude
- Collaborative team spirit
- 3+ years experience leading a team of DevOps engineers
- 8+ years experience managing DevOps for large engineering teams developing cloud-native software
- Strong in networking concepts
- In-depth knowledge of AWS and cloud architectures/services.
- Experience within the container and container orchestration space (Docker, Kubernetes)
- Passion for CI/CD pipelines, using tools such as Jenkins, etc.
- Familiarity with config management tools like Ansible, Terraform, etc.
- Proven record of measuring and improving DevOps metrics
- Familiarity with observability tools and experience setting them up
- Passion for building tools and productizing services that empower development teams.
- Excellent knowledge of Linux command-line tools and ability to write bash scripts.
- Strong in Unix / Linux administration and management.
KEY ROLES/RESPONSIBILITIES:
- Own and manage the entire cloud infrastructure
- Create the entire CI/CD pipeline to build and release
- Explore new technologies and tools and recommend those that best fit the team and organization
- Own and manage the site reliability
- Strong decision-making skills and metric-driven approach
- Mentor and coach other team members
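A metric-driven approach to site reliability often starts with SLOs and error budgets. As an illustrative sketch (the SLO target and request counts are hypothetical), the remaining budget for a request-based SLO can be computed as:

```python
def error_budget(slo: float, total: int, failed: int) -> dict:
    """Availability and remaining error budget for a request-based SLO."""
    allowed = total * (1 - slo)   # failures the SLO permits over the window
    return {
        "availability": 1 - failed / total,
        "budget_remaining": (allowed - failed) / allowed,
    }

# 99.9% SLO over 1,000,000 requests with 400 failed requests:
print(error_budget(0.999, 1_000_000, 400))
```

With 60% of the budget left, a team might keep shipping; a negative value would argue for freezing releases and prioritizing reliability work.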
Karkinos Healthcare Pvt. Ltd.
The fundamental principle of Karkinos healthcare is democratization of cancer care in a participatory fashion with existing health providers, researchers and technologists. Our vision is to provide millions of cancer patients with affordable and effective treatments and have India become a leader in oncology research. Karkinos will be with the patient every step of the way, to advise them, connect them to the best specialists, and to coordinate their care.
Karkinos has an eclectic founding team with strong technology, healthcare and finance experience, and a panel of eminent clinical advisors in India and abroad.
Roles and Responsibilities:
- Critical role that involves setting up and owning the dev, staging, and production infrastructure for a platform built on microservices, data warehouses, and a data lake.
- Demonstrate technical leadership with incident handling and troubleshooting.
- Provide software delivery operations and application release management support, including scripting, automated build and deployment processing and process reengineering.
- Build automated deployments for consistent software releases with zero downtime
- Deploy new modules, upgrades and fixes to the production environment.
- Participate in the development of contingency plans including reliable backup and restore procedures.
- Participate in the development of the end-to-end CI/CD process and follow through with other team members to ensure high quality and predictable delivery
- Work on implementing DevSecOps and GitOps practices
- Work with the Engineering team to integrate more complex testing into a containerized pipeline to ensure minimal regressions
- Build platform tools that the rest of the engineering teams can use.
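Contingency planning with reliable backup and restore procedures usually includes a retention policy. A hedged Python sketch, with made-up retention parameters, of grandfathered backup retention:

```python
from datetime import date, timedelta

def backups_to_delete(backup_dates, keep_daily=7, keep_weekly=4):
    """Keep the newest `keep_daily` backups plus one per ISO week for the
    next `keep_weekly` older weeks; return the backups to delete."""
    ordered = sorted(backup_dates, reverse=True)
    keep = set(ordered[:keep_daily])
    weeks_kept = set()
    for d in ordered[keep_daily:]:
        wk = d.isocalendar()[:2]          # (ISO year, ISO week)
        if wk not in weeks_kept and len(weeks_kept) < keep_weekly:
            weeks_kept.add(wk)
            keep.add(d)                   # newest backup of that week survives
    return [d for d in ordered if d not in keep]
```

For 30 consecutive daily backups this keeps 7 dailies plus 4 weeklies and marks the remaining 19 for deletion; real schemes often add monthly and yearly tiers.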
Apply only if you have:
- 2+ years of software development/technical support experience.
- 1+ years of software development, operations experience deploying and maintaining multi-tiered infrastructure and applications at scale.
- 2+ years of experience in public cloud services: AWS (VPC, EC2, ECS, Lambda, Redshift, S3, API Gateway) or GCP (Kubernetes Engine, Cloud SQL, Cloud Storage, BigQuery, API Gateway, Container Registry) - preferably in GCP.
- Experience managing infra for distributed NoSQL systems (Kafka/MongoDB), containers, microservices, and deployment and service orchestration using Kubernetes.
- Experience with, and a good understanding of, Kubernetes, Service Mesh (Istio preferred), API Gateways, network proxies, etc.
- Experience in setting up infra for central monitoring of infrastructure, with the ability to debug and trace.
- Experience and deep understanding of Cloud Networking and Security.
- Experience in Continuous Integration and Delivery (Jenkins, Maven, GitHub/GitLab).
- Strong knowledge of a scripting language such as Python or Shell.
- Experience in Agile development methodologies and release management techniques.
- Excellent analytical and troubleshooting skills.
- Ability to continuously learn and make decisions with minimal supervision. You understand that making mistakes means that you are learning.
Interested Applicants can share their resume at sajal.somani[AT]karkinos[DOT]in with subject as "DevOps Engineer".
Work Location: Andheri East, Mumbai
Experience: 2-4 Years
About the Role:
At Bizongo, we believe in delivering excellence which drives business efficiency for our customers. As Software Engineers at Bizongo, you will be working on developing the next generation of technology that will impact how businesses take care of their processes and derive process excellence. We are looking for engineers who can bring fresh ideas, function at scale, and are passionate about technology. We expect our engineers to be multidimensional, display leadership, and have a zeal for learning as well as experimentation as we push business efficiency through our technology. As a DevOps Engineer, you should have hands-on experience as a DevOps engineer with strong technical proficiency in public clouds, Linux, and programming/scripting.
Job Responsibilities:
Gather and analyse cloud infrastructure requirements
Automate obsessively
Support existing infrastructure, analyse problem areas and come up with solutions
Optimise stack performance and costs
Write code for new and existing tools
Must-haves:
Experience with DevOps techniques and philosophies
Passion to work in an exciting fast paced environment
Self-starter who can implement with minimal guidance
Good conceptual understanding of the building blocks of modern web-based infrastructure: DNS, TCP/IP, Networking, HTTP, SSL/TLS
Strong Linux skills
Experience with automation of code builds and deployments
Experience in nginx configuration for dynamic web applications
Help with cost optimisations of infrastructure requirements
Assist development teams with any infrastructure needs
Strong command line skills to automate routine system administration tasks
An eye for monitoring. The ideal candidate should be able to look at complex infrastructure and be able to figure out what to monitor and how.
Databases: MySQL, PostgreSQL and cloud-based relational database solutions like Amazon RDS. Database replication and scalability
High Availability: Load Balancing (ELB), Reverse Proxies, CDNs etc.
Scripting Languages: Python/Bash/Shell/Perl
Version control with Git. Exposure to various branching workflows and collaborative development
Virtualisation and Docker
AWS core components (or their GCP/Azure equivalents) and their management: EC2, ELB, NAT, VPC, IAM Roles and policies, EBS and S3, CloudFormation, Elasticache, Route53, etc.
Configuration Management: SaltStack/Ansible/Puppet
CI/CD automation experience
Understanding of Agile development practices is a plus
Bachelor’s degree in Information Science / Information Technology, Computer Science, Engineering, Mathematics, Physics, or a related field
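Figuring out "what to monitor and how" for an nginx-fronted stack often starts with the access log. A small Python sketch (simplified log format, fabricated sample lines) that tallies responses by status class:

```python
import re
from collections import Counter

# Simplified combined-log-format matcher: method, path, then status code.
LINE = re.compile(r'"\w+ (?P<path>\S+) [^"]+" (?P<status>\d{3})')

def status_summary(lines):
    """Count responses by status class (2xx/3xx/4xx/5xx)."""
    classes = Counter()
    for line in lines:
        m = LINE.search(line)
        if m:
            classes[m.group("status")[0] + "xx"] += 1
    return classes

logs = [
    '10.0.0.1 - - [..] "GET /api/v1/orders HTTP/1.1" 200 512',
    '10.0.0.2 - - [..] "GET /api/v1/orders HTTP/1.1" 502 0',
    '10.0.0.3 - - [..] "POST /login HTTP/1.1" 302 0',
]
print(status_summary(logs))
```

A rising 5xx share is usually the first signal worth alerting on; tools like the ELK stack or Grafana do the same aggregation continuously.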
Why work with us?
Opportunity to work with "India’s leading B2B" E-commerce venture. The company grew its revenue by more than 12x last year to reach a 200 Cr annual revenue run-rate scale in March 2018. We invite you to be part of the upcoming growth story of the B2B sector through Bizongo
Having formed a strong base in Ecommerce, Supply Chain, Retail industries; exploring FMCG, Food Processing, Engineering, Consumer Durable, Auto Ancillary and Chemical industries
Design and Development launched for Bizongo recently and seeing tremendous growth as a steady and additional revenue stream
Opportunity to work with most dynamic individuals in Asia recognized under Forbes 30 Under 30 and industry stalwarts from across companies like Microsoft, Paypal, Gravitas, Parksons, ITC, Snapdeal, Fedex, Deloitte and HUL
Working in Bizongo translates into being a part of a dynamic start-up with some of the most enthusiastic, hardworking and intelligent people in a fast paced and electrifying environment
Bizongo has been awarded as the most Disruptive Procurement Startup of the year - 2017
Being a company that is expanding itself every day and working towards exploring newer avenues in the market, every employee grows with the company
The position provides a chance to build on existing talents, learn new skills and gain valuable experience in the field of Ecommerce
About the Company:
Company Website: https://www.bizongo.com/
Any solution worth anything is unfailingly preceded by clear articulation of a problem worth solving. Even a modest study of Indian Packaging industry would lead someone to observe the enormous fragmentation, chaos and rampant unreliability pervading the ecosystem. When businesses are unable to cope even with these basic challenges, how can they even think of materializing an eco-friendly & resource-efficient packaging economy? These are some hardcore problems with real-world consequences which our country is hard-pressed to solve.
Bizongo was conceived as an answer to these first-level challenges of disorganization in the industry. We employed technology to build a business model that can streamline the packaging value chain & has enormous potential to scale sustainably. Our potential to fill this vacuum was recognized early on by Accel Partners and IDG Ventures, who jointly led our Series A funding. Most recently, B Capital Group, a global tech fund led by Facebook co-founder Mr. Eduardo Saverin, invested in our technological capabilities when it jointly led our Series B funding with IFC.
The International Finance Corporation (IFC), the private-sector investment arm of the World Bank, cited our positive ecosystem impact towards the network of 30,000 SMEs operating in the packaging industry, as one of the core reasons for their investment decision. Beyond these bastions of support, we are extremely grateful to have found validation by various authoritative institutions including Forbes 30 Under 30 Asia. Being the only major B2B player in the country with such an unprecedented model has lent us enormous scope of experimentation in our efforts to break new grounds. Dreaming and learning together thus, we have grown from a team of 3, founded in 2015, to a 250+ strong family with office presence across Mumbai, Gurgaon and Bengaluru. So those who strive for opportunities to rise above their own limitations, who seek to build an ecosystem of positive change and to find remarkable solutions to challenges where none existed before, such creators would find a welcome abode in Bizongo.

● Develop and deliver automation software required for building & improving the functionality, reliability, availability, and manageability of applications and cloud platforms
● Champion and drive the adoption of Infrastructure as Code (IaC) practices and mindset
● Design, architect, and build self-service, self-healing, synthetic monitoring and alerting platform and tools
● Automate the development and test automation processes through CI/CD pipeline (Git, Jenkins, SonarQube, Artifactory, Docker containers)
● Build container hosting-platform using Kubernetes
● Introduce new cloud technologies, tools & processes to keep innovating in the commerce area to drive greater business value.
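A synthetic-monitoring and alerting platform like the one described above ultimately evaluates probe results against a threshold. A toy sketch (window size and threshold are arbitrary choices, not a real alerting API):

```python
from collections import deque

class ProbeAlerter:
    """Fire an alert when the failure rate over a sliding window of
    synthetic-probe results crosses a threshold."""

    def __init__(self, window=10, threshold=0.3):
        self.results = deque(maxlen=window)   # most recent probe outcomes
        self.threshold = threshold

    def record(self, success: bool) -> bool:
        """Record one probe result; return True if an alert should fire."""
        self.results.append(success)
        failures = self.results.count(False)
        return failures / len(self.results) >= self.threshold
```

Production systems (Prometheus alerting rules, for example) add hysteresis and "for" durations so a single flapping probe does not page anyone.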
Skills Required:
● Excellent written and verbal communication skills and a good listener.
● Proficiency in deploying and maintaining Cloud based infrastructure services (AWS, GCP, Azure – good hands-on experience in at least one of them)
● Well versed with service-oriented architecture, cloud-based web services architecture, design patterns and frameworks.
● Good knowledge of cloud-related services like compute, storage, network, messaging (e.g. SNS, SQS), and automation (e.g. CFT/Terraform).
● Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
● Experience in systems management/automation tools (Puppet/Chef/Ansible, Terraform)
● Strong Linux System Admin Experience with excellent troubleshooting and problem solving skills
● Hands-on experience with languages (Bash/Python/Core Java/Scala)
● Experience with CI/CD pipeline (Jenkins, Git, Maven etc)
● Experience integrating solutions in a multi-region environment
● Self-motivated, quick to learn, and able to deliver results with minimal supervision.
● Experience with Agile/Scrum/DevOps software development methodologies.
Nice to Have:
● Experience in setting up the Elasticsearch, Logstash, Kibana (ELK) stack.
● Having worked with large scale data.
● Experience with Monitoring tools such as Splunk, Nagios, Grafana, DataDog etc.
● Previous experience working with distributed architectures like Hadoop, MapReduce, etc.

