
About Kiru:
Kiru is a forward-thinking payments startup on a mission to revolutionise the digital payments landscape in Africa and beyond. Our innovative solutions will reshape how people transact, making payments safer, faster, and more accessible. Join us on our journey to redefine the future of payments.
Position Overview:
We are searching for a highly skilled and motivated DevOps Engineer to join our dynamic team in Pune, India. As a DevOps Engineer at Kiru, you will play a critical role in ensuring our payment infrastructure's reliability, scalability, and security.
Key Responsibilities:
- Utilize your expertise in technology infrastructure configuration to manage and automate infrastructure effectively.
- Collaborate with cross-functional teams, including Software Developers and technology management, to design and implement robust and efficient DevOps solutions.
- Configure and maintain a secure backend environment focusing on network isolation and VPN access.
- Implement and manage monitoring solutions like Zipkin, Jaeger, New Relic, or DataDog, and visualisation and alerting solutions like Prometheus and Grafana.
- Work closely with developers to instrument code for visualisation and alerts, ensuring system performance and stability.
- Contribute to the continuous improvement of development and deployment pipelines.
- Collaborate on the selection and implementation of appropriate DevOps tools and technologies.
- Troubleshoot and resolve infrastructure and deployment issues promptly to minimize downtime.
- Stay up-to-date with emerging DevOps trends and best practices.
- Create and maintain comprehensive documentation related to DevOps processes and configurations.
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
- Proven experience as a DevOps Engineer or in a similar role.
- Experience configuring infrastructure on Microsoft Azure.
- Experience with Kubernetes as a container orchestration technology.
- Experience with Terraform and Azure ARM or Bicep templates for infrastructure provisioning and management.
- Experience configuring and maintaining secure backend environments, including network isolation and VPN access.
- Proficiency in setting up and managing monitoring and visualization tools such as Zipkin, Jaeger, New Relic, DataDog, Prometheus, and Grafana.
- Ability to collaborate effectively with developers to instrument code for visualization and alerts.
- Strong problem-solving and troubleshooting skills.
- Excellent communication and teamwork skills.
- A proactive and self-motivated approach to work.
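The instrumentation-for-alerts responsibility above can be made concrete with a small sketch. This is a minimal, self-contained illustration of the kind of alerting rule a DevOps engineer might wire behind Prometheus/Grafana; the metric values and the 250 ms p95 threshold are illustrative assumptions, not Kiru's actual configuration.

```python
# Minimal sketch: decide whether a p95 latency SLO breach should alert.
LATENCY_P95_THRESHOLD_MS = 250.0  # assumed SLO threshold, illustrative only

def p95(samples):
    """Return the 95th-percentile value of a list of latency samples."""
    ordered = sorted(samples)
    index = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[index]

def should_alert(samples, threshold_ms=LATENCY_P95_THRESHOLD_MS):
    """Fire an alert when p95 latency breaches the threshold."""
    return p95(samples) > threshold_ms

# A healthy window stays quiet; a degraded one (two slow outliers) alerts.
healthy = [100, 120, 110, 130, 115, 105, 125, 118, 122, 108]
degraded = healthy + [900, 950]
print(should_alert(healthy), should_alert(degraded))  # False True
```

In practice the percentile would come from a metrics backend (e.g. a Prometheus histogram) rather than an in-process list, but the gating logic is the same.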
Desired Skills:
- Experience with Azure Kubernetes Services and managing identities across Azure services.
- Previous experience in a financial or payment systems environment.
Life at Kiru:
At Kiru, we believe that success is achieved through collaboration. We recognise that every team member has a vital role to play, and it's the partnerships we build within our organisation that drive our customers' success and our growth as a business.
We are more than just a team; we are a close-knit partnership. By bringing together diverse talents and fostering powerful collaborations, we innovate, share knowledge, and continually learn from one another. We take pride in our daily achievements but never stop challenging ourselves and supporting each other. Together, we reach new heights and envision a brighter future.
Regardless of your career journey, we provide the guidance and resources you need to thrive. You will have everything required to excel through training programs, mentorship, and ongoing support. At Kiru, your success is our success, and that success matters because we are the essential partners for the world's most critical businesses. These companies manufacture, transport, and supply the world's essential goods.
Equal Opportunities and Accommodations Statement:
Kiru is committed to fostering a workplace and global community where inclusion is celebrated and where you can bring your authentic self, because that's who we're interested in. If you are interested in this role but don't meet every qualification in the job description, don't hesitate to apply. We are an equal opportunity employer.


- Experience with Infrastructure-as-Code (IaC) tools like Terraform and CloudFormation.
- Proficiency in cloud-native technologies and architectures (Docker/Kubernetes) and CI/CD pipelines.
- Good experience in JavaScript.
- Expertise in Linux/Windows environments.
- Good experience in scripting languages like PowerShell, Bash, and Python.
- Proficiency with revision control tools like Git and DevOps best practices.
- Minimum of 3 years of DevOps experience on the AWS platform.
- Strong AWS knowledge and experience
- Experience using CI/CD automation tools (Git, Jenkins) and configuration deployment tools (Puppet/Chef/Ansible)
- Experience with IaC tools such as Terraform
- Excellent experience operating a container orchestration cluster (Kubernetes, Docker)
- Significant experience with Linux operating system environments
- Experience with infrastructure scripting solutions such as Python/shell scripting
- Must have experience designing infrastructure automation frameworks
- Good experience setting up monitoring tools and dashboards (Grafana, Kafka)
- Excellent problem-solving, log analysis, and troubleshooting skills
- Experience setting up centralized logging for systems (EKS, EC2) and applications
- Process-oriented with great documentation skills
- Ability to work effectively within a team and with minimal supervision
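The centralized-logging requirement above usually starts with emitting structured logs. Below is a minimal sketch of a JSON log formatter using only Python's standard library; the logger name `payments.api` and the field names are illustrative assumptions, not a prescribed schema.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object per line — the shape most
    log shippers feeding a centralized stack (e.g. ELK/EFK) expect."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("payments.api")  # hypothetical service name
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order processed")  # emits one JSON line to stderr
```

A shipper such as Fluent Bit or Filebeat would then forward these lines to Elasticsearch, where each field becomes queryable in a dashboard.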
Job Role - DevOps Infra Lead Engineer
About LenDenClub
LenDenClub is a leading peer-to-peer lending platform that provides an alternate investment opportunity to investors or lenders looking for high returns, connecting them with creditworthy borrowers looking for short-term personal loans. With a total of 8 million users and 2 million+ investors on board, LenDenClub has become a go-to platform to earn returns in the range of 10%-12%. LenDenClub offers investors a convenient medium to browse thousands of borrower profiles to achieve better returns than traditional asset classes. Moreover, LenDenClub is safeguarded against market volatility and inflation. LenDenClub provides a great way to diversify one's investment portfolio.
LenDenClub has raised US $10 million in a Series A round from an association of investors. With the new round of funding, LenDenClub was valued at more than US $51 million in the last round and has grown multifold since then.
Why work at LenDenClub
LenDenClub is a certified great place to work. The certification comes from the Great Place to Work Institute, Inc., a globally renowned firm dedicated to evaluating companies for their employee satisfaction on the grounds of high trust and high-performance culture at workplaces.
As a LenDenite, you will be a part of an enthusiastic and passionate group of individuals who own and love what they do. At LenDenClub we believe in creating leaders and with you coming on board you get to work with complete freedom to chase your ultimate career goal without any inhibitions.
Website - https://www.lendenclub.com
Location - Mumbai (Goregaon)
Responsibilities of a DevOps Infra Lead Engineer:
● Responsible for creating software deployment strategies that are essential for the successful deployment of software in the work environment. Identify and implement data storage methods like clustering to improve the performance of the team.
● Responsible for devising solutions for managing a vast number of documents in real time that enable quick search and analysis. Identify issues in production systems and implement monitoring solutions to overcome those issues.
● Stay abreast of industry trends and best practices. Conduct research, tests, and execute new techniques which could be reused and applied to the software development project.
● Accountable for designing, building, and optimizing automation systems that help to execute business web and data infrastructure platforms.
● Creating technology infrastructure, automation tools, and maintaining configuration management.
● To cater to the engineering department’s quality and standards, implement lifecycle infrastructure solutions and documentation operations.
● Implementation and maintaining of CI/CD pipelines.
● Containerisation of applications
● Construct and improve security on the infrastructure
● Infrastructure as Code
● Maintaining environments
● NAT and ACLs
● Setup of ECS and ELB for high availability (HA)
● WAF, firewall, and DMZ configuration
● Deployment strategies for high uptime
● Set up monitoring and policies for infrastructure and applications
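The "deployment strategies for high uptime" responsibility can be illustrated with a tiny simulation of a rolling deployment guarded by a health check. This is a hedged sketch, not LenDenClub's actual tooling; the instance names, versions, and health-check callback are all hypothetical.

```python
# Minimal sketch: rolling deployment with a health gate and rollback.
def rolling_deploy(instances, new_version, health_check):
    """Upgrade instances one at a time; abort and roll back the whole
    fleet if any instance fails its post-upgrade health check."""
    original = dict(instances)  # snapshot for rollback
    for name in instances:
        instances[name] = new_version
        if not health_check(name, new_version):
            instances.update(original)  # restore every instance
            return False
    return True

fleet = {"web-1": "v1", "web-2": "v1", "web-3": "v1"}
always_healthy = lambda name, version: True
print(rolling_deploy(fleet, "v2", always_healthy))  # True
print(fleet)  # every instance now on v2
```

Real deployments (e.g. via ECS or Kubernetes rolling updates) add surge capacity and readiness probes, but the one-at-a-time-with-rollback shape is the same.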
Required Skills
● Communication Skills
● Interpersonal Skills
● Infrastructure
● Aware of technologies like Python, MySQL, MongoDB, and so on.
● Sound knowledge of cloud infrastructure.
● Knowledge of fundamental Unix/Linux commands and monitoring, editing, and command-line tools is essential.
● Versed in scripting languages such as Ruby and Shell
● Google Cloud Platforms, Hadoop, NoSQL databases, and big data clusters.
● Knowledge of open source technologies
• Cloud environments: AWS / Azure / GCP
• Must have strong work experience (2+ years) developing IaC (e.g. Terraform)
• Must have strong work experience in Ansible development and deployment.
• Bachelor's degree with a background in math will be a PLUS.
• Must have 8+ years of experience with a mix of Linux and Windows systems in a medium to large business environment.
• Must have command-level fluency and shell scripting experience in a mix of Linux and Windows environments.
• Must enjoy the experience of working in small, fast-paced teams
• Identify opportunities for improvement in existing processes and automate them using Ansible flows.
• Fine-tune performance and operational issues that arise with automation flows.
• Experience administering container management systems like Kubernetes would be plus.
• Certification with Red Hat or any other Linux variant will be a BIG PLUS.
• Fluent in the use of Microsoft Office Applications (Outlook / Word / Excel).
• Possess a strong aptitude towards automating and timely completion of standard/routine tasks.
• Experience with automation and configuration control systems like Puppet or Chef is a plus.
• Experience with Docker, Kubernetes (or container orchestration equivalent) is nice to have
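Tools like Ansible, Puppet, and Chef (mentioned throughout the requirements above) are built around idempotency: running the same play twice must not change anything the second time. A minimal sketch of that idea in plain Python, using a hypothetical config file and line (the `sshd_config` path and setting are illustrative, not a recommendation):

```python
import os
import tempfile

def ensure_line(path, line):
    """Append `line` to the file only if it is missing; return True when
    the file changed (mirroring Ansible's 'changed' result)."""
    try:
        with open(path) as f:
            lines = f.read().splitlines()
    except FileNotFoundError:
        lines = []
    if line in lines:
        return False  # already converged: a second run is a no-op
    with open(path, "a") as f:
        f.write(line + "\n")
    return True

# First run changes the file; the second run reports no change.
path = os.path.join(tempfile.mkdtemp(), "sshd_config")
print(ensure_line(path, "PermitRootLogin no"))   # True
print(ensure_line(path, "PermitRootLogin no"))   # False
```

Ansible's `lineinfile` module implements exactly this contract (plus backups, regex matching, and validation), which is why repeated playbook runs are safe.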
About Hive
Hive is the leading provider of cloud-based AI solutions for content understanding,
trusted by the world’s largest, fastest growing, and most innovative organizations. The
company empowers developers with a portfolio of best-in-class, pre-trained AI models, serving billions of customer API requests every month. Hive also offers turnkey software applications powered by proprietary AI models and datasets, enabling breakthrough use cases across industries. Together, Hive’s solutions are transforming content moderation, brand protection, sponsorship measurement, context-based ad targeting, and more.
Hive has raised over $120M in capital from leading investors, including General Catalyst, 8VC, Glynn Capital, Bain & Company, Visa Ventures, and others. We have over 250 employees globally in our San Francisco, Seattle, and Delhi offices. Please reach out if you are interested in joining the future of AI!
About Role
Our unique machine learning needs led us to open our own data centers, with an emphasis on distributed high performance computing integrating GPUs. Even with these data centers, we maintain a hybrid infrastructure with public clouds when they are the right fit. As we continue to commercialize our machine learning models, we also need to grow our DevOps and Site Reliability team to maintain the reliability of our enterprise SaaS offering for our customers. Our ideal candidate is someone who is able to thrive in an unstructured environment and takes automation seriously. You believe there is no task that can't be automated and no server scale too large. You take pride in optimizing performance at scale in every part of the stack and never manually performing the same task twice.
Responsibilities
● Create tools and processes for deploying and managing hardware for Private Cloud Infrastructure.
● Improve workflows of developer, data, and machine learning teams
● Manage integration and deployment tooling
● Create and maintain monitoring and alerting tools and dashboards for various services, and audit infrastructure
● Manage a diverse array of technology platforms, following best practices and
procedures
● Participate in on-call rotation and root cause analysis
Requirements
● Minimum 5-10 years of previous experience working directly with Software Engineering teams as a developer, DevOps Engineer, or Site Reliability Engineer.
● Experience with infrastructure as a service, distributed systems, and software design at a high-level.
● Comfortable working on Linux infrastructures (Debian) via the CLI
● Able to learn quickly in a fast-paced environment
● Able to debug, optimize, and automate routine tasks
● Able to multitask, prioritize, and manage time efficiently independently
● Can communicate effectively across teams and management levels
● Degree in computer science, or similar, is an added plus!
Technology Stack
● Operating Systems - Linux/Debian Family/Ubuntu
● Configuration Management - Chef
● Containerization - Docker
● Container Orchestrators - Mesosphere/Kubernetes
● Scripting Languages - Python/Ruby/Node/Bash
● CI/CD Tools - Jenkins
● Network hardware - Arista/Cisco/Fortinet
● Hardware - HP/SuperMicro
● Storage - Ceph, S3
● Database - Scylla, Postgres, Pivotal GreenPlum
● Message Brokers: RabbitMQ
● Logging/Search - ELK Stack
● AWS: VPC/EC2/IAM/S3
● Networking: TCP / IP, ICMP, SSH, DNS, HTTP, SSL / TLS, Storage systems,
RAID, distributed file systems, NFS / iSCSI / CIFS
Who we are
We are a group of ambitious individuals who are passionate about creating a revolutionary AI company. At Hive, you will have a steep learning curve and an opportunity to contribute to one of the fastest growing AI start-ups in San Francisco. The work you do here will have a noticeable and direct impact on the
development of the company.
Thank you for your interest in Hive, and we hope to meet you soon!
Job Description
- Implement IAM policies and configure VPCs to create a scalable and secure network for the application workloads
- Will be client point of contact for High Priority technical issues and new requirements
- Should act as Tech Lead and guide the junior members of team and mentor them
- Work with client application developers to build, deploy and run both monolithic and microservices based applications on AWS Cloud
- Analyze workload requirements and work with IT stakeholders to define proper sizing for cloud workloads on AWS
- Build, Deploy and Manage production workloads including applications on EC2 instance, APIs on Lambda Functions and more
- Work with IT stakeholders to monitor system performance and proactively improve the environment for scale and security
Qualifications
- Preferably at least 5 years of IT experience implementing enterprise applications
- Should be AWS Solution Architect Associate Certified
- Must have at least 3+ years of working as a Cloud Engineer focused on AWS services such as EC2, CloudFront, VPC, CloudWatch, RDS, DynamoDB, Systems Manager, Route53, WAF, API Gateway, Elastic beanstalk, ECS, ECR, Lambda, SQS, SNS, S3 bucket, Elastic Search, DocumentDB IAM, etc.
- Must have a strong understanding of EC2 instances, types and deploying applications to the cloud
- Must have a strong understanding of IAM policies, VPC creation, and other security/networking principles
- Must have thorough experience migrating on-premises workloads to AWS
- Should be comfortable using AWS and other migration tools
- Should have experience working on AWS performance, cost, and security optimisation
- Should have experience implementing automated patching and hardening of systems
- Should be involved in P1 tickets and guide the team wherever needed
- Creating Backups and Managing Disaster Recovery
- Experience in using Infra as a code automation using scripts & tools like CloudFormation and Terraform
- Any exposure towards creating CI/CD pipelines on AWS using CodeBuild, CodeDeploy, etc. is an advantage
- Experience with Docker, Bitbucket, ELK and deploying applications on AWS
- Good understanding of Containerisation technologies like Docker, Kubernetes etc.
- Should have experience using and configuring cloud monitoring tools and ITSM ticketing tools
- Good exposure to logging & monitoring tools like Dynatrace, Prometheus, Grafana, ELK/EFK
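The IaC qualification above (CloudFormation/Terraform) can be sketched briefly. One common pattern is generating CloudFormation templates programmatically; the snippet below builds a minimal template for a versioned S3 bucket using only the standard library. The logical ID `AppBucket` and the bucket name are hypothetical, and a production template would add encryption, access blocks, and policies.

```python
import json

def s3_bucket_template(bucket_name):
    """Build a minimal CloudFormation template (as a dict) declaring one
    versioned S3 bucket. Illustrative only — not production-hardened."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppBucket": {  # hypothetical logical ID
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": bucket_name,
                    "VersioningConfiguration": {"Status": "Enabled"},
                },
            }
        },
    }

# Emit the template as JSON, ready for `aws cloudformation deploy`.
print(json.dumps(s3_bucket_template("example-app-logs"), indent=2))
```

Terraform expresses the same declaration in HCL (`resource "aws_s3_bucket" ...`); either way, the point of IaC is that the desired state lives in reviewable, versioned text rather than console clicks.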
We are looking for a DevOps Engineer (individual contributor) to maintain and build upon our next-generation infrastructure. We aim to ensure that our systems are secure, reliable and high-performing by constantly striving to achieve best-in-class infrastructure and security by:
- Leveraging a variety of tools to ensure all configuration is codified (using tools like Terraform and Flux) and applied in a secure, repeatable way (via CI)
- Routinely identifying new technologies and processes that enable us to streamline our operations and improve overall security
- Holistically monitoring our overall DevOps setup and health to ensure our roadmap constantly delivers high-impact improvements
- Eliminating toil by automating as many operational aspects of our day-to-day work as possible using internally created, third party and/or open-source tools
- Maintaining a culture of empowerment and self-service by minimizing friction for developers to understand and use our infrastructure through a combination of innovative tools, excellent documentation and teamwork
Tech stack: Microservices primarily written in JavaScript, Kotlin, Scala, and Python. The majority of our infrastructure sits within EKS on AWS, using Istio. We use Terraform and Helm/Flux when working with AWS and EKS (k8s). Deployments are managed with a combination of Jenkins and Flux. We rely heavily on Kafka, Cassandra, Mongo and Postgres and are increasingly leveraging AWS-managed services (e.g. RDS, lambda).
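The "all configuration is codified and applied in a repeatable way" goal above implies being able to detect drift between the declared (in-Git) state and what is actually running. A minimal sketch of one way to do that, hashing a canonicalized config; the replica counts and image tags are made-up values, not this team's real configuration:

```python
import hashlib
import json

def config_digest(config):
    """Hash a config object deterministically (keys sorted) so that two
    semantically identical configs always produce the same digest."""
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

declared = {"replicas": 3, "image": "api:1.4.2"}   # hypothetical Git state
running = {"image": "api:1.4.2", "replicas": 3}    # same state, other order
drifted = {"replicas": 5, "image": "api:1.4.2"}    # someone scaled by hand

print(config_digest(declared) == config_digest(running))  # True: no drift
print(config_digest(declared) == config_digest(drifted))  # False: drift
```

GitOps controllers like Flux perform this comparison continuously against the cluster, reverting or flagging anything that diverges from the repository.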
Requirements:
● Should have at least 2 years of DevOps experience
● Should have experience with Kubernetes
● Should have experience with Terraform/Helm
● Should have experience in building scalable server-side systems
● Should have experience in cloud infrastructure and designing databases
● Having experience with NodeJS/TypeScript/AWS is a bonus
● Having experience with WebRTC is a bonus
Roles and Responsibilities
- 5 - 8 years of experience in Infrastructure setup on Cloud, Build/Release Engineering, Continuous Integration and Delivery, Configuration/Change Management.
- Good experience with Linux/Unix administration and moderate to significant experience administering relational databases such as PostgreSQL, etc.
- Experience with Docker and related tools (Cassandra, Rancher, Kubernetes etc.)
- Experience of working in Config management tools (Ansible, Chef, Puppet, Terraform etc.) is a plus.
- Experience with cloud technologies like Azure
- Experience with monitoring and alerting (TICK, ELK, Nagios, PagerDuty)
- Experience with distributed systems and related technologies (NSQ, RabbitMQ, SQS, etc.) is a plus
- Experience with scaling data store technologies (PostgreSQL, Scylla, Redis) is a plus
- Experience with SSH Certificate Authorities and Identity Management (Netflix BLESS) is a plus
- Experience with multi-domain SSL certs and provisioning (Let's Encrypt) is a plus
- Experience with chaos engineering or similar methodologies is a plus
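Chaos methodologies (the last bullet above) boil down to deliberately injecting faults to verify that systems degrade gracefully. A minimal, self-contained sketch of fault injection as a decorator; the failure rate, error type, and `fetch_balance` function are all illustrative assumptions:

```python
import random

def chaos(failure_rate, rng=random.random):
    """Decorator that raises randomly at the given rate, simulating the
    fault injection used in chaos-engineering experiments."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if rng() < failure_rate:
                raise RuntimeError("injected fault")
            return fn(*args, **kwargs)
        return inner
    return wrap

@chaos(failure_rate=0.0)  # 0% faults here, so the call behaves normally
def fetch_balance(account):
    return {"account": account, "balance": 100}

print(fetch_balance("acct-1"))
```

Production tools (e.g. Chaos Monkey) inject faults at the infrastructure level rather than in-process, but the experiment design is the same: raise the failure rate and confirm alerts fire and fallbacks engage.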

Radical is a platform connecting data, medicine and people -- through machine learning, and usable, performant products. Software has never been the strong suit of the medical industry -- and we are changing that. We believe that the same sophistication and performance that powers our daily needs through millions of consumer applications -- be it your grocery, your food delivery or your movie tickets -- when applied to healthcare, has a massive potential to transform the industry, and positively impact lives of patients and doctors. Radical works with some of the largest hospitals and public health programmes in India, and has a growing footprint both inside the country and abroad.
As a DevOps Engineer at Radical, you will:
Work closely with all stakeholders in the healthcare ecosystem - patients, doctors, paramedics and administrators - to conceptualise and bring to life the ideal set of products that add value to their time
Work alongside Software Developers and ML Engineers to solve problems and assist in architecture design
Work on systems which have an extraordinary emphasis on capturing data that can help build better workflows, algorithms and tools
Work on high performance systems that deal with several million transactions, multi-modal data and large datasets, with a close attention to detail
We’re looking for someone who has:
Familiarity and experience with writing working, well-documented and well-tested scripts, Dockerfiles, Puppet/Ansible/Chef/Terraform scripts.
Proficiency with scripting languages like Python and Bash.
Knowledge of systems deployment and maintenance, including setting up CI/CD and working alongside Software Developers, monitoring logs, dashboards, etc.
Experience integrating with a wide variety of external tools and services
Experience navigating AWS and leveraging appropriate services and technologies rather than DIY solutions (such as hosting an application directly on EC2 versus containerisation or Elastic Beanstalk)
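Integrating with a wide variety of external tools and services (mentioned above) usually means handling transient failures. A minimal retry-with-exponential-backoff sketch in plain Python; the attempt count, delays, and the `flaky` service stub are illustrative assumptions kept tiny so the example runs instantly:

```python
import time

def retry(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying with exponential backoff on failure — a common
    pattern for external APIs that fail transiently."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the real error
            time.sleep(base_delay * (2 ** attempt))

# A stub service that fails twice, then succeeds.
calls = {"count": 0}
def flaky():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry(flaky))  # succeeds on the third attempt
```

Real integrations typically add jitter to the delay and retry only on retryable errors (timeouts, 5xx), not on client mistakes (4xx).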
It’s not essential, but great if you have:
An established track record of deploying and maintaining systems.
Experience with microservices and decomposition of monolithic architectures
Proficiency in automated tests.
Proficiency with the Linux ecosystem
Experience in deploying systems to production on cloud platforms such as AWS
The position is open now, and we are onboarding immediately.
Please write to us with an updated resume, and one thing you would like us to see as part of your application. This one thing can be anything that you think makes you stand apart among candidates.
Radical is based out of Delhi NCR, India, and we look forward to working with you!
We're looking for people who may not know all the answers, but are obsessive about finding them, and take pride in the code that they write. We are more interested in the ability to learn fast, think rigorously and for people who aren’t afraid to challenge assumptions, and take large bets -- only to work hard and prove themselves correct. You're encouraged to apply even if your experience doesn't precisely match the job description. Join us.

