
Kutumb is the first and largest communities platform for Bharat. We are growing on an exponential trajectory: more than 1 Crore users use Kutumb to connect with their community. We are backed by world-class VCs and angel investors, and we are looking for exceptional Infrastructure Engineers to join our Engineering team.
More on this here: https://kutumbapp.com/why-join-us.html
We’re excited if you have:
- Recent experience designing and building unified observability platforms that let companies use the sometimes-overwhelming amount of available data (metrics, logs, and traces) to quickly determine whether their application or service is operating as desired
- Expertise in deploying and using open-source observability tools in large-scale environments, including Prometheus, Grafana, ELK (Elasticsearch + Logstash + Kibana), Jaeger, Kiali, and/or Loki
- Familiarity with open standards like OpenTelemetry, OpenTracing, and OpenMetrics
- Familiarity with Kubernetes and Istio as the architecture on which the observability platform runs, and how they integrate and scale. Additionally, the ability to contribute improvements back to the joint platform for the benefit of all teams
- Demonstrated customer engagement and collaboration skills to curate custom dashboards and views, and identify and deploy new tools, to meet their requirements
- The drive and self-motivation to understand the intricate details of a complex infrastructure environment
- Experience using CI/CD tools to automatically perform canary analysis and roll out changes after they pass automated gates (think Argo and Keptn)
- Hands-on experience working with AWS
- Bonus points for knowledge of ETL pipelines and Big data architecture
- Great problem-solving skills and pride in your work
- An enjoyment of building scalable and resilient systems, with a focus on systems that are robust by design and suitably monitored
- A knack for abstracting all of the above into as simple an interface as possible (like Knative) so developers don't need to know about it unless they choose to open the escape hatch
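The canary-analysis item above can be sketched as a simple promotion gate. This is a hedged illustration of the kind of check a tool like Argo Rollouts or Keptn evaluates before promoting a release; the function name, thresholds, and metric shape are illustrative assumptions, not any specific tool's API.

```python
# Hypothetical canary gate: compare the canary's error rate against the
# baseline's and only allow promotion when the canary is not meaningfully
# worse. Thresholds here are illustrative defaults.

def canary_gate(baseline_errors: int, baseline_total: int,
                canary_errors: int, canary_total: int,
                max_relative_increase: float = 1.5) -> bool:
    """Return True if the canary's error rate is acceptable vs. baseline."""
    if canary_total == 0:
        return False  # no traffic reached the canary; don't promote blindly
    baseline_rate = baseline_errors / baseline_total if baseline_total else 0.0
    canary_rate = canary_errors / canary_total
    # A zero-error baseline only tolerates a zero-error canary.
    if baseline_rate == 0.0:
        return canary_rate == 0.0
    # Otherwise allow promotion while the canary error rate stays within
    # the configured relative factor of the baseline.
    return canary_rate <= baseline_rate * max_relative_increase
```

In a real pipeline the error counts would come from Prometheus queries over the baseline and canary pods, and the gate's verdict would drive the rollout controller's promote/abort decision.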
What you’ll be doing:
- Design and build automation around the chosen tools to make onboarding new services easy for developers (dashboards, alerts, traces, etc)
- Demonstrate great communication skills in working with technical and non-technical audiences
- Contribute new open-source tools and/or improvements to existing open-source tools back to the CNCF ecosystem
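The dashboard-onboarding responsibility above can be sketched as manifest generation: given a service name, emit a standard set of panels so every new service gets the same baseline views. The panel schema below is a trimmed-down, Grafana-style assumption, not the full dashboard model, and the metric names are illustrative.

```python
# Hedged sketch of dashboard-onboarding automation: given a service name,
# emit a simplified Grafana-style dashboard dict with standard panels.
# Metric names (http_requests_total, etc.) are assumed conventions.

def dashboard_for(service: str) -> dict:
    def panel(title: str, expr: str) -> dict:
        return {"title": title, "type": "timeseries",
                "targets": [{"expr": expr}]}
    return {
        "title": f"{service} overview",
        "panels": [
            panel("Request rate",
                  f'rate(http_requests_total{{service="{service}"}}[5m])'),
            panel("Error rate",
                  f'rate(http_requests_total{{service="{service}",status=~"5.."}}[5m])'),
            panel("p99 latency",
                  f'histogram_quantile(0.99, rate(http_request_duration_seconds_bucket{{service="{service}"}}[5m]))'),
        ],
    }
```

A real implementation would serialize this to JSON and push it through Grafana's provisioning or HTTP API so onboarding a service is a one-line change.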
Tools we use:
Kops, Argo, Prometheus/Loki/Grafana, Kubernetes, AWS, MySQL/PostgreSQL, Apache Druid, Cassandra, Fluentd, Redis, OpenVPN, MongoDB, ELK
What we offer:
- High pace of learning
- Opportunity to build the product from scratch
- High autonomy and ownership
- A great and ambitious team to work with
- Opportunity to work on something that really matters
- Top of the class market salary and meaningful ESOP ownership

Job Summary:
We are looking for a proactive and skilled Senior DevOps Engineer to join our team and play a key role in building, managing, and scaling infrastructure for high-performance systems. The ideal candidate will have hands-on experience with Kubernetes, Docker, Python scripting, cloud platforms, and DevOps practices around CI/CD, monitoring, and incident response.
Key Responsibilities:
- Design, build, and maintain scalable, reliable, and secure infrastructure on cloud platforms (AWS, GCP, or Azure).
- Implement Infrastructure as Code (IaC) using tools like Terraform, CloudFormation, or similar.
- Manage Kubernetes clusters; configure namespaces, services, deployments, and autoscaling.
CI/CD & Release Management:
- Build and optimize CI/CD pipelines for automated testing, building, and deployment of services.
- Collaborate with developers to ensure smooth and frequent deployments to production.
- Manage versioning and rollback strategies for critical deployments.
Containerization & Orchestration using Kubernetes:
- Containerize applications using Docker, and manage them using Kubernetes.
- Write automation scripts using Python or Shell for infrastructure tasks, monitoring, and deployment flows.
- Develop utilities and tools to enhance operational efficiency and reliability.
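The scripting item above can be illustrated with a small operational check of the kind often wired into cron or a monitoring agent. This is a sketch under assumptions: the paths and the 80% threshold are placeholders, and a production version would emit to an alerting channel rather than return a list.

```python
import shutil

# Illustrative infrastructure-automation script: flag filesystems whose
# usage exceeds a threshold. Threshold and paths are assumed examples.

def disk_usage_alerts(paths, threshold_pct: float = 80.0):
    """Return (path, used_pct) pairs for filesystems above the threshold."""
    alerts = []
    for path in paths:
        usage = shutil.disk_usage(path)  # total/used/free in bytes
        used_pct = 100.0 * (usage.total - usage.free) / usage.total
        if used_pct >= threshold_pct:
            alerts.append((path, round(used_pct, 1)))
    return alerts
```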
Monitoring & Incident Management:
- Analyze system performance and implement infrastructure scaling strategies based on load and usage trends.
- Optimize application and system performance through proactive monitoring and configuration tuning.
Desired Skills and Experience:
- Experience required: 8+ years.
- Hands-on experience with cloud services such as AWS and EKS.
- Ability to design sound cloud solutions.
- Strong Linux troubleshooting, shell scripting, Kubernetes, Docker, Ansible, and Jenkins skills.
- Design and implement the CI/CD pipeline following the best industry practices using open-source tools.
- Use knowledge and research to constantly modernize our applications and infrastructure stacks.
- Be a team player and strong problem-solver to work with a diverse team.
- Good communication skills.
Required Skills:
- Experience in systems administration, SRE or DevOps focused role
- Experience in handling production support (on-call)
- Good understanding of the Linux operating system and networking concepts.
- Demonstrated competency with the following AWS services: ECS, EC2, EBS, EKS, S3, RDS, ELB, IAM, Lambda.
- Experience with Docker containers and containerization concepts
- Experience with managing and scaling Kubernetes clusters in a production environment
- Experience building scalable infrastructure in AWS with Terraform.
- Strong knowledge of protocols such as HTTP/HTTPS, SMTP, DNS, and LDAP
- Experience monitoring production systems
- Expertise in applying automation/DevOps principles, experience with operational tools, and the ability to apply best practices for infrastructure and software deployment (e.g., Ansible).
- HAProxy, Nginx, SSH, MySQL configuration and operation experience
- Ability to work seamlessly with software developers, QA, project managers, and business development
- Ability to produce and maintain written documentation
About Company:
The company is a global leader in secure payments and trusted transactions. They are at the forefront of the digital revolution that is shaping new ways of paying, living, doing business and building relationships that pass on trust along the entire payments value chain, enabling sustainable economic growth. Their innovative solutions, rooted in a rock-solid technological base, are environmentally friendly, widely accessible and support social transformation.
- Role Overview
- A senior engineer with a strong background and experience in cloud-related technologies and architectures, able to design target cloud architectures to transform existing architectures together with the in-house team, and to actively configure and build cloud architectures hands-on while guiding others.
- Key Knowledge
- 3-5+ years of experience in AWS/GCP or Azure technologies
- Likely certified on one or more of the major cloud platforms
- Strong hands-on experience with technologies such as Terraform, K8s, Docker, and container orchestration.
- Ability to guide and lead internal agile teams on cloud technology
- Background from the financial services industry or similar critical operational experience
What the role needs
● Review of current DevOps infrastructure & redefine code merging strategy as per product roll out objectives
● Define deploy-frequency strategy based on the product roadmap document and ongoing product-market-fit tweaks and changes
● Architect benchmark Docker configurations based on the planned stack
● Establish uniformity of environments from developer machines to the multiple production environments
● Plan & execute test automation infrastructure
● Setup automated stress testing environment
● Plan and execute logging & stack trace tools
● Review DevOps orchestration tools & choices
● Coordinate with external data centers and AWS in the event of provisioning, outages, or maintenance.
Requirements
● Extensive experience with AWS cloud infrastructure deployment and monitoring
● Advanced knowledge of programming languages such as Python and Go, and experience writing code and scripts
● Experience with Infrastructure as Code & DevOps management tools (Terraform, Packer) for DevOps asset management, monitoring, infrastructure cost estimation, and infrastructure version management
● Configure and manage data sources like MySQL, MongoDB, Elasticsearch, Redis, Cassandra, Hadoop, etc
● Experience with network, infrastructure and OWASP security standards
● Experience with web server configurations (Nginx, HAProxy), SSL configuration with AWS, and understanding & management of subdomain-based product rollouts for clients.
● Experience with deployment and monitoring of event streaming & distributing technologies and tools - Kafka, RabbitMQ, NATS.io, socket.io
● Understanding & experience of Disaster Recovery Plan execution
● Working with other senior team members to devise and execute strategies for data backup and storage
● Be aware of current CVEs, potential attack vectors, and vulnerabilities, and apply patches as soon as possible
● Handle incident responses, troubleshooting and fixes for various services
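The incident-response and event-streaming items above often come down to handling transient downstream failures. Below is a hedged sketch of retry with exponential backoff and jitter, a common building block when automating around flaky services; the attempt count and delay values are illustrative defaults, not a prescription.

```python
import random
import time

# Retry helper with exponential backoff plus jitter. The sleep function is
# injectable so the behavior can be tested without real delays.

def retry(op, attempts: int = 5, base_delay: float = 0.5,
          max_delay: float = 8.0, sleep=time.sleep):
    """Call op(); on exception, back off exponentially and try again."""
    for attempt in range(attempts):
        try:
            return op()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the failure
            delay = min(max_delay, base_delay * (2 ** attempt))
            # Jitter spreads retries out and avoids thundering-herd spikes.
            sleep(delay * random.uniform(0.5, 1.0))
```

A producer publishing to Kafka or RabbitMQ, for example, might wrap its send call in `retry(...)` so brief broker unavailability does not page anyone.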
Designation: DevOps Engineer
Urgently required (notice period of maximum 15 days).
Location: Mumbai
Experience: 3-5 years
Package Offered: Rs. 4,00,000 to Rs. 7,00,000 per annum
DevOps Engineer Job Description:
Responsibilities
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer & client experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
Requirements
- Work experience as a DevOps Engineer or similar software engineering role
- Work experience on AWS
- Good knowledge of Ruby or Python
- Working knowledge of databases and SQL
- Problem-solving attitude
- Team spirit
- BSc in Computer Science, Engineering or relevant field
- Work towards improving the following four verticals for the company's workflows and products: scalability, availability, security, and cost.
- Help in provisioning, managing, and optimizing cloud infrastructure in AWS (IAM, EC2, RDS, CloudFront, S3, ECS, Lambda, ELK, etc.)
- Work with the development teams to design scalable, robust systems using cloud architecture for both 0-to-1 and 1-to-100 products.
- Drive technical initiatives and architectural service improvements.
- Be able to predict problems and implement solutions that detect and prevent outages.
- Mentor/manage a team of engineers.
- Design solutions with failure scenarios in mind to ensure reliability.
- Document rigorously to keep track of all changes/upgrades to the infrastructure and to share knowledge with the rest of the team
- Identify vulnerabilities during development with actionable information to empower developers to remediate vulnerabilities
- Automate the build and testing processes to consistently integrate code
- Manage changes to documents, software, images, large web sites, and other collections of code, configuration, and metadata among disparate teams
We are front-runners of the technological revolution with an inexhaustible passion for technology! DevOn is the technical organization that originated from Prowareness. We are at the forefront of leading DevOps transformations and setting up high-performance distributed DevOps teams with leading companies worldwide. DevOn helps market leaders take the next step in software delivery. We are a dynamic team in which personal growth is central!
About You
You have 6+ years of experience in AWS infrastructure automation. This is a fantastic opportunity to work in a fast-paced operations environment and to develop your career in cloud technologies, particularly Amazon Web Services.
You will build and monitor CI/CD pipelines in the AWS cloud for a highly scalable backend application built on the Java platform. We need someone who can troubleshoot, diagnose, and rectify system service issues.
You're cloud native, with Terraform as your key orchestration tool for Infrastructure as Code.
You're comfortable driving. You prefer to own your work streams and enjoy working autonomously towards your goals.
You provide incredible support to the team. You sweat the small stuff but keep the big picture in mind, and you know that pair programming can give better results.
About the role:
This is a key role within our DevOps team and will involve working as part of a collaborative agile team in a shared services DevOps organization to support and deliver innovative technology solutions that directly align with the delivery of business value and enhanced customer experience. The primary objective is to provide support to Amazon Web Services hosted environment, ensure continuous availability, working closely with development teams to ensure best value for money, and effective estate management.
- Setup CI/CD Pipeline from scratch along with integration of appropriate quality gates.
- Expert-level knowledge of the AWS cloud. Provision and configure infrastructure as code using Terraform.
- Build and configure Kubernetes-based infrastructure, networking policies, LBs, and cluster security. Define autoscaling and cost strategies.
- Automate the build of containerized systems with CI/CD tooling, Helm charts, and more
- Manage deployments and rollbacks of applications
- Implement monitoring and metrics with CloudWatch and New Relic
- Troubleshoot and optimize containerized workload deployments for clients
- Automate operational tasks, and assist in the transition to service ownership models.
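The Kubernetes build-and-configure items above can be illustrated with manifest templating, the kind of rendering a Helm chart or CI pipeline performs per service. This is a minimal sketch: the image, replica count, and resource limits are placeholder assumptions, and a real Deployment would also carry probes, labels, and security context.

```python
# Hedged sketch of templating a minimal Kubernetes Deployment manifest
# as a Python dict (serialize with json/yaml before applying).

def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # Selector labels must match the pod template's labels,
            # or the Deployment will not adopt its pods.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{
                    "name": name,
                    "image": image,
                    # Placeholder limits; tune per workload.
                    "resources": {"limits": {"cpu": "500m",
                                             "memory": "256Mi"}},
                }]},
            },
        },
    }
```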
Location: Pune
Experience: 1.5 to 3 years
Payroll: Direct with Client
Salary Range: 3 to 5 Lacs (depending on existing salary)
Role and Responsibility
• Good understanding of and experience with AWS CloudWatch for EC2, other Amazon Web Services resources, and other sources.
• Collect and store logs
• Monitor and analyze logs
• Configure alarms
• Configure dashboards
• Prepare and follow SOPs and documentation.
• Good understanding of AWS in a DevOps context.
• Experience with AWS services ( EC2, ECS, CloudWatch, VPC, Networking )
• Experience with a variety of infrastructure, application, and log monitoring tools such as Prometheus and Grafana
• Familiarity with Docker, Linux, and Linux security
• Knowledge and experience with container-based architectures like Docker
• Experience troubleshooting AWS services.
• Experience in configuring services in AWS like EC2, S3, ECS
• Experience with Linux system administration and engineering skills on Cloud infrastructure
• Knowledge of Load Balancers, Firewalls, and network switching components
• Knowledge of Internet-based technologies - TCP/IP, DNS, HTTP, SMTP & Networking concepts
• Knowledge of security best practices
• Comfortable supporting production environments 24x7
• Strong communication skills
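The log-collection and alarm tasks above can be sketched as a small analysis step: bucket ERROR lines per minute so a threshold alarm (in CloudWatch or elsewhere) could fire on a spike. The log format assumed here (ISO timestamp followed by a level) is an illustrative example, not a fixed standard.

```python
import re
from collections import Counter

# Hedged log-analysis sketch: count ERROR lines per minute.
# Assumes lines like "2024-05-01T10:15:01Z ERROR message ...".
LOG_LINE = re.compile(r"^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}):\d{2}\S*\s+(\w+)\s")

def errors_per_minute(lines):
    """Map 'YYYY-MM-DDTHH:MM' -> number of ERROR lines in that minute."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group(2) == "ERROR":
            counts[m.group(1)] += 1  # truncate timestamp to the minute
    return counts
```

An alarm rule would then compare each minute's count against a threshold, which is essentially what a CloudWatch metric filter plus alarm automates.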
MTX Group Inc. is seeking a motivated Lead DevOps Engineer to join our team. MTX Group Inc. is a global implementation partner enabling organizations to become fit enterprises. MTX provides expertise across various platforms and technologies, including Google Cloud, Salesforce, artificial intelligence/machine learning, data integration, data governance, data quality, analytics, visualization, and mobile technology. MTX's very own Artificial Intelligence platform, Maverick, enables clients to accelerate processes and critical decisions by leveraging a Cognitive Decision Engine, a collection of purpose-built Artificial Neural Networks designed to leverage the power of Machine Learning. The Maverick Platform includes Smart Asset Detection and Monitoring, Chatbot Services, and Document Verification, to name a few.
Responsibilities:
- Be responsible for software releases, configuration, monitoring and support of production system components and infrastructure.
- Troubleshoot technical or functional issues in a complex environment to provide timely resolution, with various applications and platforms that are global.
- Bring experience on Google Cloud Platform.
- Write scripts and automation tools in languages such as Bash/Python/Ruby/Golang.
- Configure and manage data sources like PostgreSQL, MySQL, Mongo, Elasticsearch, Redis, Cassandra, Hadoop, etc
- Build automation and tooling around Google Cloud Platform using technologies such as Anthos, Kubernetes, Terraform, Google Deployment Manager, Helm, Cloud Build etc.
- Bring a passion to stay on top of DevOps trends, experiment with and learn new CI/CD technologies.
- Work with users to understand and gather their needs for our catalogue, then participate in the required development
- Manage several streams of work concurrently
- Understand how various systems work
- Understand how IT operations are managed
What you will bring:
- 5 years of work experience as a DevOps Engineer.
- Must possess ample knowledge and experience in system automation, deployment, and implementation.
- Must possess experience in using Linux, Jenkins, and ample experience in configuring and automating the monitoring tools.
- Experience with the software development process and with tools and languages like SaaS, Python, Java, MongoDB, shell scripting, MySQL, and Git.
- Knowledge in handling distributed data systems. Examples: Elasticsearch, Cassandra, Hadoop, and others.
What we offer:
- Group Medical Insurance (Family Floater Plan - Self + Spouse + 2 Dependent Children)
- Sum Insured: INR 5,00,000/-
- Maternity cover up to two children
- Inclusive of COVID-19 Coverage
- Cashless & Reimbursement facility
- Access to free online doctor consultation
- Personal Accident Policy (Disability Insurance) -
- Sum Insured: INR. 25,00,000/- Per Employee
- Accidental Death and Permanent Total Disability is covered up to 100% of Sum Insured
- Permanent Partial Disability is covered as per the scale of benefits decided by the Insurer
- Temporary Total Disability is covered
- An option of Paytm Food Wallet (up to Rs. 2500) as a tax saver benefit
- Monthly Internet Reimbursement of up to Rs. 1,000
- Opportunity to pursue Executive Programs/ courses at top universities globally
- Professional Development opportunities through various MTX sponsored certifications on multiple technology stacks including Salesforce, Google Cloud, Amazon & others