
We are looking for an experienced DevOps engineer who will help our team establish a DevOps practice. You will work closely with the technical lead to identify and adopt DevOps practices across the company, and help us build scalable, efficient cloud infrastructure. You will implement monitoring for automated system health checks, build our CI pipeline, and train and guide the team in DevOps practices.
Responsibilities
- Deployment, automation, management, and maintenance of production systems.
- Ensuring availability, performance, security, and scalability of production systems.
- Evaluation of new technology alternatives and vendor products.
- System troubleshooting and problem resolution across various application domains and platforms.
- Providing recommendations for architecture and process improvements.
- Definition and deployment of systems for metrics, logging, and monitoring on the AWS platform.
- Manage the establishment and configuration of SaaS infrastructure in an agile way by storing infrastructure as code and employing automated configuration-management tools, with the goal of being able to re-provision environments at any point in time.
- Be accountable for proper backup and disaster recovery procedures.
- Drive operational cost reductions through service optimizations and demand-based auto-scaling.
- Have on-call responsibilities.
- Perform root cause analysis of production errors.
- Use open-source technologies and tools to accomplish specific use cases encountered within the project.
- Use coding languages or scripting methodologies to solve problems with custom workflows.
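The metrics and monitoring responsibilities above can be illustrated with a small sketch: a function that classifies a service's health from a single metrics sample. The metric names and thresholds here are illustrative assumptions, not part of the posting or any particular monitoring stack.

```python
# Hypothetical health classifier: metric names and SLO thresholds are
# illustrative, not taken from any specific monitoring tool.
def evaluate_health(metrics, max_latency_ms=500, max_error_rate=0.01):
    """Classify one metrics sample as healthy, degraded, or unhealthy."""
    if not metrics.get("up", False):
        return "unhealthy"          # the service is not responding at all
    breaches = 0
    if metrics.get("p95_latency_ms", 0) > max_latency_ms:
        breaches += 1               # latency threshold breached
    if metrics.get("error_rate", 0.0) > max_error_rate:
        breaches += 1               # error-rate threshold breached
    return "degraded" if breaches else "healthy"
```

In practice this kind of rule would live in an alerting system (e.g. CloudWatch alarms or Prometheus alert rules) rather than application code; the sketch only shows the decision logic.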
Requirements
- Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
- Prior experience as a software developer in a couple of high-level programming languages.
- Extensive experience with a JavaScript-based framework, since we will be deploying services to Node.js on AWS Lambda (Serverless).
- Strong Linux system administration background.
- Ability to present and communicate the architecture in a visual form.
- Strong knowledge of AWS (e.g. IAM, EC2, VPC, ELB, ALB, Auto Scaling, Lambda, NAT gateway, DynamoDB)
- Experience maintaining and deploying highly available, fault-tolerant systems at scale (~1 lakh, i.e. 100,000, users a day)
- A drive towards automating repetitive tasks (e.g. scripting via Bash, Python, Ruby, etc)
- Expertise with Git
- Experience implementing CI/CD (e.g. Jenkins, TravisCI)
- Strong experience with relational and NoSQL databases such as MySQL, Elasticsearch, Redis, and/or MongoDB.
- Stellar troubleshooting skills with the ability to spot issues before they become problems.
- Current with industry trends, IT ops, and industry best practices, and able to identify the ones we should implement.
- Time- and project-management skills, with the capability to prioritize and multitask as needed.
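The "drive towards automating repetitive tasks" listed above is the kind of thing usually expressed as a small script. A toy sketch, here as a pure function so it is easy to test: choose which backup snapshots to prune under a retention policy. The snapshot names and policy values are invented for illustration.

```python
# Hypothetical backup-pruning helper: the retention policy values and
# snapshot names are made up for illustration.
def select_for_pruning(snapshots, retention_days=30, keep_min=3):
    """Pick snapshots to delete: older than the retention window,
    but always keep at least `keep_min` of the newest ones.
    `snapshots` is a list of (name, age_in_days) pairs."""
    ordered = sorted(snapshots, key=lambda s: s[1])  # newest (smallest age) first
    candidates = ordered[keep_min:]                  # never touch the newest keep_min
    return [name for name, age in candidates if age > retention_days]
```

A real script would wrap this in calls to the storage API (e.g. listing EBS snapshots) and actual deletion; separating the policy decision from the side effects keeps it testable.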

Similar jobs
Requirements
- 3+ years work experience writing clean production code
- Well versed in maintaining infrastructure as code (Terraform, CloudFormation, etc.); high proficiency with Terraform/Terragrunt is critical
- Experience setting up CI/CD pipelines from scratch
- Experience with AWS (EC2, ECS, RDS, ElastiCache, etc.), AWS Lambda, Kubernetes, Docker, and service meshes
- Experience with ETL pipelines and big data infrastructure
- Understanding of common security issues
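Setting up a CI/CD pipeline "from scratch", as the requirements above put it, ultimately boils down to running ordered stages and stopping at the first failure. A minimal sketch of that control flow; the stage names and step callables are illustrative, not any real CI system's API:

```python
# Minimal CI-pipeline control flow: run stages in order and stop at
# the first failing stage. Stage names are illustrative.
def run_pipeline(stages):
    """stages: list of (name, step) pairs, where step() returns True on success.
    Returns (passed, name_of_failed_stage_or_None)."""
    for name, step in stages:
        if not step():
            return False, name   # fail fast: later stages never run
    return True, None
```

Real systems (Jenkins, GitLab CI, GitHub Actions) add parallelism, artifacts, and retries on top, but the fail-fast staged model is the same.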
Roles / Responsibilities:
- Write Terraform modules for deploying different components of the infrastructure in AWS, such as Kubernetes, RDS, Prometheus, Grafana, and static websites
- Configure networking, autoscaling, continuous deployment, security, and multiple environments
- Make sure the infrastructure is SOC 2, ISO 27001, and HIPAA compliant
- Automate all the steps to provide a seamless experience to developers.
- Configure, optimize, document, and support the infrastructure components of software products (which are hosted in colocated facilities and cloud services such as AWS)
- Design and build tools and frameworks that support deployment and management of platforms
- Design, build, and deliver cloud computing solutions, hosted services, and underlying software infrastructures
- Build core functionality of our cloud-based platform product, deliver secure, reliable services and construct third party integrations
- Assist in coaching application developers on proper DevOps techniques for building scalable applications in the microservices paradigm
- Foster collaboration with software product development and architecture teams to ensure releases are delivered with repeatable and auditable processes
- Support and troubleshoot scalability, high availability, performance, monitoring, backup, and restores of different environments
- Work independently across multiple platforms and applications to understand dependencies
- Evaluate new tools, technologies, and processes to improve speed, efficiency, and scalability of continuous integration environments
- Design and architect solutions for existing client-facing applications as they are moved into cloud environments such as AWS
Competencies
- Full understanding of scripting and automated process management in languages such as Shell, Ruby, and/or Python
- Working knowledge of SCM tools such as Git, GitHub, Bitbucket, etc.
- Working knowledge of Amazon Web Services and related APIs
- Ability to deliver and manage web or cloud-based services
- General familiarity with monitoring tools
- General familiarity with configuration/provisioning tools such as Terraform
Experience
- Experience working within an Agile type environment
- 4+ years of experience with cloud-based provisioning (Azure, AWS, Google), monitoring, troubleshooting, and related DevOps technologies
- 4+ years of experience with containerization/orchestration technologies like Rancher, Docker, and Kubernetes
Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt a DevOps culture by focusing on a long-term DevOps roadmap: we identify the technical and cultural issues in the journey to successfully implementing DevOps practices, and work with the respective teams to fix them and increase overall productivity. We also run training sessions for developers on the importance of DevOps. Our services include DevOps, DevSecOps, FinOps, cost optimization, CI/CD, observability, cloud security, containerization, cloud migration, site reliability, performance optimization, SIEM and SecOps, serverless automation, Well-Architected Reviews, MLOps, and governance, risk & compliance. We assess a company's technology architecture, security, governance, compliance, and DevOps maturity, and help optimize cloud costs, streamline technology architecture, and set up processes that improve the availability and reliability of websites and applications. We set up tools for monitoring, logging, and observability, and focus on bringing a DevOps culture to the organization to improve its efficiency and delivery.
Job Description
Our Mission
Our mission is to help customers achieve their business objectives by providing innovative, best-in-class consulting, IT solutions and services and to make it a joy for all stakeholders to work with us. We function as a full stakeholder in business, offering a consulting-led approach with an integrated portfolio of technology-led solutions that encompass the entire Enterprise value chain.
Our Customer-centric Engagement Model defines how we engage with you, offering specialized services and solutions that meet the distinct needs of your business.
Our Culture
Culture forms the core of our foundation, and our effort towards creating an engaging workplace is what has shaped Infra360 Solutions Pvt Ltd.
Our Tech-Stack:
- Azure DevOps, Azure Kubernetes Service, Docker, Active Directory (Microsoft Entra)
- Azure IAM and managed identities, Virtual Network, VM Scale Sets, App Service, Cosmos DB
- Azure MySQL; scripting (PowerShell, Python, Bash)
- Azure security, security documentation, security compliance
- AKS, Blob Storage, Azure Functions, Virtual Machines, Azure SQL
- AWS - IAM, EC2, EKS, Lambda, ECS, Route53, CloudFormation, CloudFront, S3
- GCP - GKE, Compute Engine, App Engine, SCC
- Kubernetes, Linux, Docker & Microservices Architecture
- Terraform & Terragrunt
- Jenkins & ArgoCD
- Ansible, Vault, Vagrant, SaltStack
- CloudFront, Apache, Nginx, Varnish, Akamai
- MySQL, Aurora, Postgres, AWS Redshift, MongoDB
- ElasticSearch, Redis, Aerospike, Memcache, Solr
- ELK, Fluentd, Elastic APM & the Prometheus/Grafana stack
- Java (Spring/Hibernate/JPA/REST), Nodejs, Ruby, Rails, Erlang, Python
What does this role hold for you?
- Infrastructure as code (IaC)
- CI/CD and configuration management
- Managing Azure Active Directory (Entra)
- Keeping the cost of the infrastructure to a minimum
- Doing RCA of production issues and providing resolution
- Setting up failover, DR, backups, logging, monitoring, and alerting
- Containerizing different applications on the Kubernetes platform
- Capacity planning for the infrastructure of different environments
- Ensuring zero outages of critical services
- Database administration of SQL and NoSQL databases
- Setting up the right set of security measures
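"Keeping the cost of the infrastructure to a minimum", from the list above, often starts with right-sizing. A toy sketch of picking the cheapest instance type that fits a workload; the catalog names and prices are invented, not real Azure or AWS pricing:

```python
# Hypothetical right-sizing helper. The catalog entries (names, sizes,
# prices) are made up for illustration, not real cloud pricing.
CATALOG = [
    # (name, vcpus, mem_gib, hourly_usd)
    ("small", 2, 4, 0.05),
    ("medium", 4, 8, 0.10),
    ("large", 8, 16, 0.20),
]

def cheapest_fit(vcpus_needed, mem_needed_gib, catalog=CATALOG):
    """Return the name of the cheapest instance type that satisfies the
    workload's CPU and memory needs, or None if nothing fits."""
    fits = [c for c in catalog if c[1] >= vcpus_needed and c[2] >= mem_needed_gib]
    return min(fits, key=lambda c: c[3])[0] if fits else None
```

Real cost work also weighs reserved capacity, spot instances, and observed (not requested) utilization, but the fit-then-minimize idea is the core of it.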
Requirements
Apply if you have…
- A graduate/postgraduate degree in Computer Science or related fields
- 2-4 years of strong DevOps experience in Azure and Linux environments
- Strong interest in working in our tech stack
- Excellent communication skills
- Ability to work with minimal supervision as a self-starter
- Hands-on experience with at least one scripting language: Bash, Python, Go, etc.
- Experience with version control systems like Git
- Understanding of Azure cloud computing services and cloud computing delivery models (IaaS, PaaS, and SaaS)
- Strong scripting or programming skills for automating tasks (PowerShell/Bash)
- Knowledge and experience with CI/CD tools: Azure DevOps, Jenkins, Gitlab etc.
- Knowledge of and experience with at least one IaC tool (ARM Templates/Terraform)
- Strong experience managing production systems day in and day out
- Experience in finding issues in different layers of architecture in a production environment and fixing them
- Experience in automation tools like Ansible/SaltStack and Jenkins
- Experience in Docker/Kubernetes platform and managing OpenStack (desirable)
- Experience with Hashicorp tools i.e. Vault, Vagrant, Terraform, Consul, VirtualBox etc. (desirable)
- Experience in Monitoring tools like Prometheus/Grafana/Elastic APM.
- Experience in logging tools like ELK/Loki.
- Experience in using Microsoft Azure Cloud services
If you are passionate about infrastructure, and cloud technologies, and want to contribute to innovative projects, we encourage you to apply. Infra360 offers a dynamic work environment and opportunities for professional growth.
Interview Process
Application Screening => Test/Assessment => 2 Rounds of Tech Interview => CEO Round => Final Discussion
Acceldata is creating the data observability space. We make it possible for data-driven enterprises to effectively monitor, discover, and validate data platforms at petabyte scale. Our customers are Fortune 500 companies, including Asia's largest telecom company, a unicorn fintech startup in India, and many more. We are lean, hungry, customer-obsessed, and growing fast. Our Solutions team values productivity, integrity, and pragmatism. We provide a flexible, remote-friendly work environment.
We are building software that can provide insights into companies' data operations and allows them to focus on delivering data reliably with speed and effectiveness. Join us in building an industry-leading data observability platform that focuses on ensuring data reliability from every spectrum (compute, data and pipeline) of a cloud or on-premise data platform.
Position Summary-
This role will support customer implementations of a data quality and reliability product. The candidate is expected to install the product in the client environment, manage proofs of concept with prospects, become a product expert, and troubleshoot post-installation production issues. The role involves significant interaction with the client's data engineering team, so good communication skills are expected.
Required experience
- 6-7 years of experience providing engineering support to the data domain: pipelines and data engineers.
- Experience troubleshooting data issues, analyzing end-to-end data pipelines, and working with users to resolve issues
- Experience setting up enterprise security solutions including setting up active directories, firewalls, SSL certificates, Kerberos KDC servers, etc.
- Basic understanding of SQL
- Experience working with technologies like S3; Kubernetes experience preferred.
- Databricks/Hadoop/Kafka experience preferred but not required
Essential Skills:
- Docker
- Jenkins
- Python dependency management using conda and pip
- Base Linux System Commands, Scripting
- Docker Container Build & Testing
- Common knowledge of minimizing container size and layers
- Inspecting containers for unused or underutilized components
- Supporting multiple Linux OSes for virtual systems
- Experience as a user of Jupyter/JupyterLab to test and fix usability issues in workbenches
- Templating out various configurations for different use cases (we use Python Jinja2 but are open to other languages / libraries)
- Jenkins Pipeline
- Understanding of the GitHub API to trigger builds, tags, and releases
- Artifactory Experience
- Nice to have: Kubernetes, ArgoCD, other deployment automation tool sets (DevOps)
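The templating skill listed above (the team uses Python Jinja2 but is open to alternatives) can be sketched with the stdlib `string.Template` to keep the example dependency-free. The template text, placeholder names, and environment values are all illustrative assumptions:

```python
from string import Template

# Dependency-free stand-in for Jinja2-style config templating.
# Placeholder names and values below are illustrative, not real configs.
def render_config(template_text, values):
    """Substitute $placeholders in a config template with per-environment values."""
    return Template(template_text).substitute(values)

TEMPLATE = "image: $registry/$app:$tag\nreplicas: $replicas"
dev_values = {"registry": "registry.local", "app": "workbench",
              "tag": "dev", "replicas": "1"}
```

For real use, Jinja2 adds loops, conditionals, and filters that `string.Template` lacks; the pattern of one template plus a per-environment value dict is the same either way.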
DevOps Engineer, Plum & Empuls
As a DevOps Engineer, you are responsible for setting up and maintaining Git repositories and DevOps tools such as Jenkins, UCD, Docker, Kubernetes, JFrog Artifactory, cloud monitoring tools, and cloud security.
Key Responsibilities
- Set up, configure, and maintain Git repos, Jenkins, UCD, etc. for multiple cloud hosting environments.
- Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
- Build Docker images and maintain Kubernetes clusters.
- Develop and maintain the automation scripts using Ansible or other available tools.
- Maintain and monitor cloud Kubernetes clusters and patch them when necessary.
- Work with cloud security tools to keep applications secure.
- Participate in software development lifecycle, specifically infra design, execution, and debugging required to achieve successful implementation of integrated solutions within the portfolio.
Required Technical and Professional Expertise
- Minimum 4-6 years of experience in the IT industry.
- Expertise in implementing and managing DevOps CI/CD pipelines.
- Experience with DevOps automation tools, and well versed in DevOps frameworks and Agile.
- Working knowledge of scripting using shell or Python, and of tools such as Terraform and Ansible, Puppet, or Chef.
- Experience with, and a good understanding of, any major cloud: AWS, Azure, or Google Cloud.
- Knowledge of Docker and Kubernetes is required.
- Proficient troubleshooting skills, with proven ability to resolve complex technical issues.
- Experience working with ticketing tools.
- Knowledge of middleware technologies or databases is desirable.
- Experience with Jira is a plus.
We are
A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, San Francisco, and Dublin. Empuls works with over 100 global clients. We help our clients engage and motivate their employees, sales teams, channel partners, or consumers for better business results.
Way forward
We look forward to connecting with you. Since you may need time to review this opportunity, we will wait around 3-5 days before screening the collected applications and lining up discussions with the hiring manager. We will aim to keep a reasonable time window for closing this requirement, and candidates will be kept informed and updated on feedback and application status.

Skills required:
Strong knowledge and experience of cloud infrastructure (AWS, Azure or GCP), systems, network design, and cloud migration projects.
Strong knowledge and understanding of CI/CD processes tools (Jenkins/Azure DevOps) is a must.
Strong knowledge and understanding of Docker & Kubernetes is a must.
Strong knowledge of Python, along with one more language (Shell, Groovy, or Java).
Strong prior experience using automation tools like Ansible, Terraform.
Architect systems, infrastructure & platforms using Cloud Services.
Strong communication skills. Should have demonstrated the ability to collaborate across teams and organizations.
Benefits of working with OpsTree Solutions:
Opportunity to work on the latest cutting-edge tools and technologies in DevOps
Knowledge focused work culture
Collaboration with very enthusiastic DevOps experts
High growth trajectory
Opportunity to work with big shots in the IT industry
Karkinos Healthcare Pvt. Ltd.
The fundamental principle of Karkinos healthcare is democratization of cancer care in a participatory fashion with existing health providers, researchers and technologists. Our vision is to provide millions of cancer patients with affordable and effective treatments and have India become a leader in oncology research. Karkinos will be with the patient every step of the way, to advise them, connect them to the best specialists, and to coordinate their care.
Karkinos has an eclectic founding team with strong technology, healthcare and finance experience, and a panel of eminent clinical advisors in India and abroad.
Roles and Responsibilities:
- A critical role that involves setting up and owning the dev, staging, and production infrastructure for a platform that uses microservices, data warehouses, and a data lake.
- Demonstrate technical leadership with incident handling and troubleshooting.
- Provide software delivery operations and application release management support, including scripting, automated build and deployment processing and process reengineering.
- Build automated deployments for consistent software releases with zero downtime
- Deploy new modules, upgrades and fixes to the production environment.
- Participate in the development of contingency plans including reliable backup and restore procedures.
- Participate in the development of the end-to-end CI/CD process and follow through with other team members to ensure high-quality and predictable delivery
- Work on implementing DevSecOps and GitOps practices
- Work with the Engineering team to integrate more complex testing into a containerized pipeline to ensure minimal regressions
- Build platform tools that the rest of the engineering teams can use.
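The automated, zero-downtime deployment responsibilities above are usually realized as a rolling update: replace instances in small batches, gating each batch on a health check, so the old version keeps serving if anything fails. A purely illustrative sketch; the instance names, batch size, and health check are made up:

```python
# Illustrative rolling-deploy control flow (zero-downtime idea):
# deploy in small batches and abort if a batch fails its health check,
# leaving the remaining instances on the old version.
def rolling_deploy(instances, batch_size, health_check):
    """Returns (instances_touched, succeeded)."""
    deployed = []
    for i in range(0, len(instances), batch_size):
        batch = instances[i:i + batch_size]
        deployed.extend(batch)                       # this batch gets the new version
        if not all(health_check(inst) for inst in batch):
            return deployed, False                   # abort: stop rolling forward
    return deployed, True
```

Kubernetes Deployments implement exactly this strategy natively (`maxUnavailable`/`maxSurge` plus readiness probes); the sketch just exposes the underlying loop.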
Apply only if you have:
- 2+ years of software development/technical support experience.
- 1+ years of software development, operations experience deploying and maintaining multi-tiered infrastructure and applications at scale.
- 2+ years of experience with public cloud services: AWS (VPC, EC2, ECS, Lambda, Redshift, S3, API Gateway) or GCP (Kubernetes Engine, Cloud SQL, Cloud Storage, BigQuery, API Gateway, Container Registry) - preferably GCP.
- Experience managing infrastructure for distributed NoSQL systems (Kafka/MongoDB), containers, and microservices, with deployment and service orchestration using Kubernetes.
- Experience with, and a good understanding of, Kubernetes, service meshes (Istio preferred), API gateways, network proxies, etc.
- Experience setting up infrastructure for central monitoring, with the ability to debug and trace issues
- Experience with, and a deep understanding of, cloud networking and security
- Experience in continuous integration and delivery (Jenkins/Maven, GitHub/GitLab).
- Strong scripting language knowledge, such as Python, Shell.
- Experience in Agile development methodologies and release management techniques.
- Excellent analytical and troubleshooting skills.
- Ability to continuously learn and make decisions with minimal supervision. You understand that making mistakes means that you are learning.
Interested Applicants can share their resume at sajal.somani[AT]karkinos[DOT]in with subject as "DevOps Engineer".
Work Location: Andheri East, Mumbai
Experience: 2-4 Years
About the Role:
At Bizongo, we believe in delivering excellence that drives business efficiency for our customers. As a Software Engineer at Bizongo, you will be working on the next generation of technology that will shape how businesses manage their processes and achieve process excellence. We are looking for engineers who can bring fresh ideas, function at scale, and are passionate about technology. We expect our engineers to be multidimensional, display leadership, and have a zeal for learning and experimentation as we push business efficiency through our technology. As a DevOps Engineer, you should have hands-on DevOps experience with strong technical proficiency in public clouds, Linux, and programming/scripting.
Job Responsibilities:
Gather and analyse cloud infrastructure requirements
Automate obsessively
Support existing infrastructure, analyse problem areas and come up with solutions
Optimise stack performance and costs
Write code for new and existing tools
Must-haves:
Experience with DevOps techniques and philosophies
Passion to work in an exciting fast paced environment
Self-starter who can implement with minimal guidance
Good conceptual understanding of the building blocks of modern web-based infrastructure: DNS, TCP/IP, Networking, HTTP, SSL/TLS
Strong Linux skills
Experience with automation of code builds and deployments
Experience with nginx configuration for dynamic web applications
Help with cost optimisations of infrastructure requirements
Assist development teams with any infrastructure needs
Strong command line skills to automate routine system administration tasks
An eye for monitoring. The ideal candidate should be able to look at complex infrastructure and be able to figure out what to monitor and how.
Databases: MySQL, PostgreSQL and cloud-based relational database solutions like Amazon RDS. Database replication and scalability
High Availability: Load Balancing (ELB), Reverse Proxies, CDNs etc.
Scripting Languages: Python/Bash/Shell/Perl
Version control with Git. Exposure to various branching workflows and collaborative development
Virtualisation and Docker
AWS core components (or their GCP/Azure equivalents) and their management: EC2, ELB, NAT, VPC, IAM Roles and policies, EBS and S3, CloudFormation, Elasticache, Route53, etc.
Configuration Management: SaltStack/Ansible/Puppet
CI/CD automation experience
Understanding of Agile development practices is a plus
Bachelor’s degree in Information Science / Information Technology, Computer Science, Engineering, Mathematics, Physics, or a related field
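Of the high-availability building blocks listed above, load balancing is the easiest to show in miniature: the round-robin rotation that a basic reverse proxy or ELB performs across a backend pool. The backend names here are made up for illustration:

```python
from itertools import cycle

# Miniature round-robin balancer: the rotation idea behind a basic
# reverse proxy / ELB. Backend names are made up.
class RoundRobin:
    def __init__(self, backends):
        self._backends = cycle(backends)   # endless rotation over the pool

    def next_backend(self):
        """Return the backend that should serve the next request."""
        return next(self._backends)
```

Production balancers layer health checks, weighting, and connection counting on top of this; round robin is just the default scheduling policy.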
Why work with us?
Opportunity to work with "India’s leading B2B" e-commerce venture. The company grew its revenue by more than 12x last year to reach a ₹200 Cr annual revenue run-rate scale in March 2018. We invite you to be part of the upcoming growth story of the B2B sector through Bizongo
Having formed a strong base in the e-commerce, supply chain, and retail industries, we are now exploring the FMCG, food processing, engineering, consumer durables, auto ancillary, and chemical industries
Design and Development services, launched recently for Bizongo, are seeing tremendous growth as a steady, additional revenue stream
Opportunity to work with most dynamic individuals in Asia recognized under Forbes 30 Under 30 and industry stalwarts from across companies like Microsoft, Paypal, Gravitas, Parksons, ITC, Snapdeal, Fedex, Deloitte and HUL
Working in Bizongo translates into being a part of a dynamic start-up with some of the most enthusiastic, hardworking and intelligent people in a fast paced and electrifying environment
Bizongo has been awarded as the most Disruptive Procurement Startup of the year - 2017
Being a company that is expanding itself every day and working towards exploring newer avenues in the market, every employee grows with the company
The position provides a chance to build on existing talents, learn new skills and gain valuable experience in the field of Ecommerce
About the Company:
Company Website: https://www.bizongo.com/
Any solution worth anything is unfailingly preceded by clear articulation of a problem worth solving. Even a modest study of Indian Packaging industry would lead someone to observe the enormous fragmentation, chaos and rampant unreliability pervading the ecosystem. When businesses are unable to cope even with these basic challenges, how can they even think of materializing an eco-friendly & resource-efficient packaging economy? These are some hardcore problems with real-world consequences which our country is hard-pressed to solve.
Bizongo was conceived as an answer to these first-level challenges of disorganization in the industry. We employed technology to build a business model that can streamline the packaging value chain and has enormous potential to scale sustainably. Our potential to fill this vacuum was recognized early on by Accel Partners and IDG Ventures, who jointly led our Series A funding. Most recently, B Capital Group, a global tech fund led by Facebook co-founder Eduardo Saverin, invested in our technological capabilities when it jointly led our Series B funding with IFC.
The International Finance Corporation (IFC), the private-sector investment arm of the World Bank, cited our positive ecosystem impact towards the network of 30,000 SMEs operating in the packaging industry, as one of the core reasons for their investment decision. Beyond these bastions of support, we are extremely grateful to have found validation by various authoritative institutions including Forbes 30 Under 30 Asia. Being the only major B2B player in the country with such an unprecedented model has lent us enormous scope of experimentation in our efforts to break new grounds. Dreaming and learning together thus, we have grown from a team of 3, founded in 2015, to a 250+ strong family with office presence across Mumbai, Gurgaon and Bengaluru. So those who strive for opportunities to rise above their own limitations, who seek to build an ecosystem of positive change and to find remarkable solutions to challenges where none existed before, such creators would find a welcome abode in Bizongo.
- Solve complex Cloud Infrastructure problems.
- Drive DevOps culture in the organization by working with engineering and product teams.
- Be a trusted technical advisor to developers and help them architect scalable, robust, and highly-available systems.
- Frequently collaborate with developers to help them learn how to run and maintain systems in production.
- Drive a culture of CI/CD. Find bottlenecks in the software delivery pipeline. Fix bottlenecks with developers to help them deliver working software faster. Develop and maintain infrastructure solutions for automation, alerting, monitoring, and agility.
- Evaluate cutting edge technologies and build PoCs, feasibility reports, and implementation strategies.
- Work with engineering teams to identify and remove infrastructure bottlenecks enabling them to move fast. (In simple words you'll be a bridge between tech, operations & product)
Skills required:
Must have:
- Deep understanding of open source DevOps tools.
- Scripting experience in one or more among Python, Shell, Go, etc.
- Strong experience with AWS (EC2, S3, VPC, Security, Lambda, CloudFormation, SQS, etc.)
- Knowledge of distributed system deployment.
- Deployed and Orchestrated applications with Kubernetes.
- Implemented CI/CD for multiple applications.
- Set up monitoring and alert systems for services using the ELK stack or similar.
- Knowledge of Ansible, Jenkins, Nginx.
- Worked with Queue based systems.
- Implemented batch jobs and automated recurring tasks.
- Implemented caching infrastructure and policies.
- Implemented central logging.
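"Implemented caching infrastructure and policies", from the list above, usually means an expiry policy of some kind. A minimal TTL-cache sketch to illustrate the policy; real caches such as Redis or Memcached handle expiry server-side, and the clock injection here exists only to make the behaviour testable:

```python
import time

# Minimal TTL cache sketch: expiry is checked lazily on read.
# The injectable clock is for testing; real code uses time.monotonic.
class TTLCache:
    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._store = {}   # key -> (value, stored_at)

    def set(self, key, value):
        self._store[key] = (value, self.clock())

    def get(self, key, default=None):
        item = self._store.get(key)
        if item is None:
            return default
        value, stored_at = item
        if self.clock() - stored_at > self.ttl:
            del self._store[key]   # expired: evict and miss
            return default
        return value
```

Expiry-on-read keeps the sketch short; a production cache also needs a background sweep or size-bound eviction (LRU) so stale entries do not accumulate unread.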
Good to have:
- Experience dealing with PII and information security.
- Experience conducting internal Audits and assisting External Audits.
- Experience implementing solutions on-premise.
- Experience with blockchain.
- Experience with Private Cloud setup.
Required Experience:
- B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience.
- You need to have 2-4 years of DevOps & Automation experience.
- Need to have a deep understanding of AWS.
- Need to be an expert with Git or similar version control systems.
- Deep understanding of at least one open-source distributed system (Kafka, Redis, etc.)
- Ownership attitude is a must.
We offer a suite of memberships and subscriptions to spice up your lifestyle. We believe in practicing an ultimate work-life balance and satisfaction. Working hard doesn’t mean clocking in extra hours; it means having a zeal to contribute the best of your talents. Our people culture helps us inculcate measures and benefits which help you feel confident and happy each and every day. Whether you’d like to skill up, go off the grid, attend your favourite events, or be an epitome of fitness, we have you covered round and about.
- Health Memberships
- Sports Subscriptions
- Entertainment Subscriptions
- Key Conferences and Event Passes
- Learning Stipend
- Team Lunches and Parties
- Travel Reimbursements
- ESOPs
That’s what we think would bloom up your personal life, as a gesture of thanks for helping us with your talents.
Join us to be a part of our exciting journey to build one Digital Identity Platform!

