At Karza Technologies, we take pride in building one of the most comprehensive digital onboarding & due-diligence platforms, profiling millions of entities and trillions of associations amongst them using data collated from more than 700 publicly available government sources. Operating primarily in the B2B Fintech Enterprise space, we are headquartered in Lower Parel, Mumbai, with a 100+ strong workforce. We are truly furthering the cause of Digital India by providing the entire BFSI ecosystem with tech products and services that aid in onboarding customers, automating processes, and mitigating risks seamlessly, in real-time and at a fraction of the current cost.
A few recognitions:
- Recognized in LinkedIn's Top 25 Startups in India to work with, 2019
- Winner of HDFC Bank's Digital Innovation Summit 2020
- Super Winners (won every category) at Technoviti 2020 by Banking Frontiers
- Winner of Amazon AI Award 2019 for Fintech
- Winner of FinTech Spot Pitches at Fintegrate Zone 2018 held at BSE
- Winner of FinShare 2018 challenge held by ShareKhan
- The only startup in the Yes Bank Global Fintech Accelerator to win the account during the cohort
- 2nd place in the Citi India FinTech Challenge 2018 by Citibank
- Top 3 in Viacom18's Startup Engagement Programme VStEP
What your average day would look like:
- Deploy and maintain mission-critical information extraction, analysis, and management systems
- Manage low cost, scalable streaming data pipelines
- Provide direct and responsive support for urgent production issues
- Contribute ideas towards secure and reliable Cloud architecture
- Use open source technologies and tools to accomplish specific use cases encountered within the project
- Use coding languages or scripting methodologies to solve automation problems
- Collaborate with others on the project to brainstorm about the best way to tackle a complex infrastructure, security, or deployment problem
- Identify processes and practices to streamline development & deployment, minimizing downtime and turnaround time
What you need to work with us:
- Proficiency in at least one general-purpose programming language such as Python or Java
- Experience in managing IaaS and PaaS components on popular public cloud service providers like AWS, Azure, GCP, etc.
- Proficiency in Unix operating systems and comfort with networking concepts
- Experience with developing/deploying scalable systems
- Experience with distributed databases & message queues (like Cassandra, Elasticsearch, MongoDB, Kafka, etc.); a minimal consumer sketch follows this list
- Experience in managing Hadoop clusters
- Understanding of containers, having managed them in production using container orchestration services
- Solid understanding of data structures and algorithms
- Applied exposure to continuous delivery pipelines (CI/CD)
- Keen interest and proven track record in automation and cost optimization
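Karza's actual stack details aren't given in the posting, so purely as an illustration of the streaming-pipeline work described above, here is a minimal consumer sketch using the kafka-python client. The topic name, broker address, and payload shape are hypothetical placeholders.

```python
# A minimal streaming-consumer sketch using kafka-python.
# The topic name, broker address, and payload shape are placeholders,
# not details of Karza's actual pipeline.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "entity-updates",                    # hypothetical topic
    bootstrap_servers="localhost:9092",  # placeholder broker
    group_id="profiling-workers",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    record = message.value
    # Downstream processing would go here, e.g. enriching an entity
    # profile or writing associations to a distributed store.
    print(message.topic, message.partition, message.offset, record)
```

A real pipeline would add error handling, offset-commit strategy, and backpressure controls; the sketch only shows the consume loop.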
Experience:
- 1-4 years of relevant experience
- BE in Computer Science / Information Technology

About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Job Summary
We are looking for a skilled and motivated DevOps Engineer with 3+ years of hands-on experience in AWS cloud infrastructure, CI/CD automation, and Kubernetes-based deployments. The ideal candidate will have strong expertise in Infrastructure as Code, containerization, monitoring, and automation, and will play a key role in ensuring high availability, scalability, and security of production systems.
Key Responsibilities
- Design, deploy, manage, and maintain AWS cloud infrastructure, including EC2, RDS, OpenSearch, VPC, S3, ALB, API Gateway, Lambda, SNS, and SQS
- Build, manage, and operate Kubernetes (EKS) clusters and containerized workloads
- Containerize applications using Docker and manage deployments with Helm charts
- Develop and maintain CI/CD pipelines using Jenkins for automated build and deployment processes
- Provision and manage infrastructure using Terraform (Infrastructure as Code)
- Implement and manage monitoring, logging, and alerting solutions using Prometheus and Grafana
- Write and maintain Python scripts for automation, monitoring, and operational tasks (a minimal sketch follows this list)
- Ensure high availability, scalability, performance, and cost optimization of cloud resources
- Implement and follow security best practices across AWS and Kubernetes environments
- Troubleshoot production issues, perform root cause analysis, and support incident resolution
- Collaborate closely with development and QA teams to streamline deployment and release processes
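As an illustration of the Python automation scripts this role mentions, here is a minimal sketch using boto3 (assumed installed, with AWS credentials configured). The "Owner" tag convention is a hypothetical example, not MyOperator's actual policy.

```python
# Minimal operational-automation sketch with boto3: report running EC2
# instances that are missing an "Owner" tag. The tag convention is a
# hypothetical example; AWS credentials are assumed to be configured.
import boto3

ec2 = boto3.client("ec2")

paginator = ec2.get_paginator("describe_instances")
pages = paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)

untagged = []
for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if "Owner" not in tags:
                untagged.append(instance["InstanceId"])

# In a real setup this might publish to SNS or open a ticket.
for instance_id in untagged:
    print(f"Missing Owner tag: {instance_id}")
```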
Required Skills & Qualifications
- 3+ years of hands-on experience as a DevOps Engineer or Cloud Engineer
- Strong experience with AWS services, including:
  - EC2, RDS, OpenSearch, VPC, S3
  - Application Load Balancer (ALB), API Gateway, Lambda
  - SNS and SQS
- Hands-on experience with AWS EKS (Kubernetes)
- Strong knowledge of Docker and Helm charts
- Experience with Terraform for infrastructure provisioning and management
- Solid experience building and managing CI/CD pipelines using Jenkins
- Practical experience with Prometheus and Grafana for monitoring and alerting
- Proficiency in Python scripting for automation and operational tasks
- Good understanding of Linux systems, networking concepts, and cloud security
- Strong problem-solving and troubleshooting skills
Good to Have (Preferred Skills)
- Exposure to GitOps practices
- Experience managing multi-environment setups (Dev, QA, UAT, Production)
- Knowledge of cloud cost optimization techniques
- Understanding of Kubernetes security best practices
- Experience with log aggregation tools (e.g., ELK/OpenSearch stack)
Language Preference
- Fluency in English is mandatory.
- Fluency in Hindi is preferred.
Job Role: DevOps Engineer (Python + DevOps)
Experience: 4 to 10 Years
Location: Hyderabad
Work Mode: Hybrid
Mandatory Skills: Python, Ansible, Docker, Kubernetes, CI/CD, Cloud (AWS/Azure/GCP)
Job Description:
We are looking for a skilled DevOps Engineer with expertise in Python, Ansible, Docker, and Kubernetes.
The ideal candidate will have hands-on experience automating deployments, managing containerized applications, and ensuring infrastructure reliability.
Key Responsibilities:
- Design and manage containerization and orchestration using Docker & Kubernetes.
- Automate deployments and infrastructure tasks using Ansible & Python.
- Build and maintain CI/CD pipelines for streamlined software delivery.
- Collaborate with development teams to integrate DevOps best practices.
- Monitor, troubleshoot, and optimize system performance (a minimal monitoring sketch follows this list).
- Enforce security best practices in containerized environments.
- Provide operational support and contribute to continuous improvements.
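To illustrate the monitoring side of the role, here is a minimal sketch using the official Kubernetes Python client; it assumes the `kubernetes` package is installed and a kubeconfig is available, and is only one of many ways such a check could be written.

```python
# Minimal monitoring sketch with the official Kubernetes Python client:
# list pods that are not in the Running or Succeeded phase. Assumes the
# `kubernetes` package is installed and a kubeconfig is available.
from kubernetes import client, config

config.load_kube_config()  # inside a cluster, use config.load_incluster_config()
v1 = client.CoreV1Api()

healthy_phases = {"Running", "Succeeded"}
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in healthy_phases:
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
```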
Required Qualifications:
- Bachelor’s in Computer Science/IT or related field.
- 4+ years of DevOps experience.
- Proficiency in Python and Ansible.
- Expertise in Docker and Kubernetes.
- Hands-on experience with CI/CD tools and pipelines.
- Experience with at least one cloud provider (AWS, Azure, or GCP).
- Strong analytical, communication, and collaboration skills.
Preferred Qualifications:
- Experience with Infrastructure-as-Code tools like Terraform.
- Familiarity with monitoring/logging tools like Prometheus, Grafana, or ELK.
- Understanding of Agile/Scrum practices.
Responsibilities:
- Managing cloud-based serverless infrastructure on AWS and GCP (Firebase) with IaC (Terraform, CloudFormation, etc.)
- Deploying and maintaining products, services, and network components with a focus on security, reliability, and zero downtime
- Automating and streamlining existing processes to aid the development team
- Working with the development team to create ephemeral environments, simplifying the development lifecycle
- Driving forward our blockchain infrastructure by creating and managing validators for a wide variety of new and existing blockchains
Requirements:
- 1-3+ years in an SRE / DevOps / DevSecOps or Infrastructure Engineering role
- Strong working knowledge of Amazon Web Services (AWS), GCP, or a similar cloud ecosystem
- Experience working with declarative Infrastructure-as-Code frameworks (Terraform, CloudFormation); a small Python-based sketch follows this list
- Experience with containerization technologies and tools (Docker, Kubernetes), CI/CD pipelines, and Linux/Unix administration
- Bonus points if you know more about crypto, staking, DeFi, proof-of-stake, validators, and delegations
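The role asks for declarative IaC (Terraform, CloudFormation). As one Python-flavoured illustration, the sketch below uses the troposphere library to generate a CloudFormation template; the resource and logical names are placeholders, not this team's actual stack.

```python
# A small sketch of generating a CloudFormation template from Python
# using the troposphere library (one Python-friendly route to the
# declarative IaC the posting asks for). All names are placeholders.
from troposphere import Output, Ref, Template
from troposphere.s3 import Bucket

template = Template()
template.set_description("Example stack: a single S3 bucket for logs.")

logs_bucket = template.add_resource(
    Bucket("LogsBucket")  # logical ID; physical name left to CloudFormation
)

template.add_output(Output("LogsBucketName", Value=Ref(logs_bucket)))

# Feed this JSON to `aws cloudformation deploy` or a CI pipeline step.
print(template.to_json())
```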
Benefits:
- Competitive CTC on par with the market, along with ESOPs/Tokens

- Bachelor's degree in information security, computer science, or a related field
- Strong DevOps experience of at least 4+ years
- Strong experience in Unix/Linux/Python scripting
- Strong networking knowledge; vSphere networking stack knowledge desired
- Experience with Docker and Kubernetes
- Experience with cloud technologies (AWS/Azure)
- Exposure to continuous delivery tools such as Jenkins or Spinnaker
- Exposure to configuration management systems such as Ansible
- Knowledge of resource monitoring systems
- Ability to scope and estimate
- Strong verbal and written communication skills
- Advanced knowledge of Docker and Kubernetes
- Exposure to Blockchain-as-a-Service (BaaS) platforms such as Chainstack, IBM Blockchain Platform, Oracle Blockchain Cloud, Rubix, VMware, etc.
- Capable of provisioning and maintaining local enterprise blockchain platforms for development and QA (Hyperledger Fabric/BaaS/Corda/ETH)
About Navis
- Work towards improving four verticals - scalability, availability, security, and cost - for the company's workflows and products.
- Help provision, manage, and optimize cloud infrastructure in AWS (IAM, EC2, RDS, CloudFront, S3, ECS, Lambda, ELK, etc.).
- Work with the development teams to design scalable, robust systems using cloud architecture for both 0-to-1 and 1-to-100 products.
- Drive technical initiatives and architectural service improvements.
- Predict problems and implement solutions that detect and prevent outages (a minimal health-check sketch follows this list).
- Mentor/manage a team of engineers.
- Design solutions with failure scenarios in mind to ensure reliability.
- Document rigorously to keep track of all changes/upgrades to the infrastructure, and share knowledge with the rest of the team.
- Identify vulnerabilities during development, with actionable information that empowers developers to remediate them.
- Automate the build and testing processes to integrate code consistently.
- Manage changes to documents, software, images, large web sites, and other collections of code, configuration, and metadata among disparate teams.
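As a loose illustration of "detect and prevent outages", here is a minimal standard-library health-check sketch; the endpoint URL, polling interval, and alerting behaviour are placeholders, not Navis's actual tooling.

```python
# Minimal outage-detection sketch using only the standard library:
# poll an HTTP health endpoint and log failures. The URL, interval,
# and alerting behaviour are placeholders.
import logging
import time
import urllib.error
import urllib.request

HEALTH_URL = "https://example.com/healthz"  # placeholder endpoint
INTERVAL_SECONDS = 30

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def check_once(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            return 200 <= response.status < 300
    except (urllib.error.URLError, TimeoutError):
        return False

while True:
    if check_once(HEALTH_URL):
        logging.info("healthy: %s", HEALTH_URL)
    else:
        # A real setup would page on-call (PagerDuty, SNS) rather than just log.
        logging.error("UNHEALTHY: %s", HEALTH_URL)
    time.sleep(INTERVAL_SECONDS)
```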
Searce is a niche Cloud Consulting business with a futuristic tech DNA. We do new-age tech to realise the “Next” in the “Now” for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning, and advanced cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.
What we believe
1. Best practices are overrated
○ Implementing best practices can only make one ‘average’.
2. Honesty and Transparency
○ We believe in naked truth. We do what we tell and tell what we do.
3. Client Partnership
○ Client - Vendor relationship: No. We partner with clients instead.
○ And our sales team comprises 100% of our clients.
How we work
It’s all about being Happier first. And the rest follows. Searce's work culture is defined by HAPPIER.
1. Humble: Happy people don’t carry ego around. We listen to understand; not to
respond.
2. Adaptable: We are comfortable with uncertainty. And we accept changes well. As
that’s what life's about.
3. Positive: We are super positive about work & life in general. We love to forget and
forgive. We don’t hold grudges. We don’t have time or adequate space for it.
4. Passionate: We are as passionate about the great street-food vendor across the
street as about Tesla’s new model and so on. Passion is what drives us to work and
makes us deliver the quality we deliver.
5. Innovative: Innovate or Die. We love to challenge the status quo.
6. Experimental: We encourage curiosity & making mistakes.
7. Responsible: Driven. Self motivated. Self governing teams. We own it.
Are you the one? Quick self-discovery test:
1. Love for cloud: When was the last time your dinner entailed an act on “How would ‘Jerry Seinfeld’ pitch Cloud platform & products to this prospect”, while your friend did the ‘Sheldon’ version of the same thing?
2. Passion for sales: When was the last time you stopped at a remote gas station while on vacation, and ended up helping the owner SaaS-ify his 7 gas stations across other geographies?
3. Compassion for customers: You listen more than you speak. When you do speak, people feel the need to listen.
4. Humor for life: When was the last time you told a concerned CEO, “If Elon Musk can attempt to take humanity to Mars, why can’t we take your business to run on the cloud?”
Your bucket of undertakings:
This position is responsible for consulting with clients and proposing architectural solutions, helping move and improve infrastructure from on-premise to the cloud, or optimizing cloud spend when moving from one public cloud to another.
1. Be the first one to experiment with new-age cloud offerings, help define best practices as a thought leader for cloud, automation & DevOps, and be a solution visionary and technology expert across multiple channels.
2. Continually augment skills and learn new tech as the technology and client needs
evolve
3. Use your experience in Google Cloud Platform, AWS or Microsoft Azure to build
hybrid-cloud solutions for customers.
4. Provide leadership to project teams, and facilitate the definition of project
deliverables around core Cloud based technology and methods.
5. Define tracking mechanisms and ensure IT standards and methodology are met;
deliver quality results.
6. Participate in technical reviews of requirements, designs, code and other artifacts
7. Identify and keep abreast of new technical concepts in Google Cloud Platform
8. Security, Risk and Compliance - Advise customers on best practices around access
management, network setup, regulatory compliance and related areas.
Accomplishment Set
● Passionate, persuasive, articulate Cloud professional capable of quickly establishing
interest and credibility
● Good business judgment, a comfortable, open communication style, and a
willingness and ability to work with customers and teams.
● Strong service attitude and a commitment to quality.
● Highly organised and efficient.
● Confident working with others to inspire a high-quality standard.
Education, Experience, etc.
1. Is education overrated? Yes, we believe so. However, there is no way to locate you otherwise, so unfortunately we might have to look for a Bachelor's or Master's degree in engineering from a reputed institute, or you should have been programming since the age of 12. The latter is better, and we will find you faster if you mention it in some manner. It's not just degrees; we are not too thrilled by tech certifications either ... :)
2. To reiterate: a passion for all things tech, an insatiable desire to learn the latest new-age cloud tech, a highly analytical aptitude, and a strong ‘desire to deliver’ outlive those fancy degrees!
3. 1 - 5 years of experience with at least 2 - 3 years of hands-on experience in Cloud
Computing (AWS/GCP/Azure) and IT operational experience in a global enterprise
environment.
4. Good analytical, communication, problem solving, and learning skills.
5. Knowledge of programming against cloud platforms such as Google Cloud Platform.
Experience: 12 - 20 years
Responsibilities:
- The Cloud Solution Architect/Engineer specializing in migrations is a cloud role in the project delivery cycle, with hands-on experience migrating customers to the cloud.
- Demonstrated experience in cloud infrastructure project deals, with hands-on migration to public clouds such as Azure.
- Strong background in Linux/Unix and/or Windows administration.
- Ability to use a wide variety of open source technologies.
- Work closely with architects and customer technical teams in migrating applications to the Azure cloud, in an architect role.
- Mentor and monitor junior developers and track their work.
- Design as per best practices and industry-standard coding practices.
- Ensure services are built for performance, scalability, fault tolerance, and security, with reusable patterns.
- Recommend best practices and standards for Azure migrations.
- Define coding best practices for high performance and guide the team in adopting them.
Skills:
Mandatory:
- Experience with cloud migration technologies such as Azure Migrate.
- Azure trained/certified architect - Associate or Professional level.
- Understanding of hybrid cloud solutions and experience integrating public cloud into traditional hosting/delivery models.
- Strong understanding of cloud migration techniques and workflows (on-premise to cloud platforms).
- Configuration, migration, and deployment experience in Azure apps technologies.
- High availability and disaster recovery implementations.
- Experience architecting and deploying multi-tiered applications.
- Experience building and deploying multi-tier, scalable, and highly available applications using Java, Microsoft, and database technologies.
- Experience in performance tuning, including load balancing, web servers, content delivery networks, and caching (content and API).
- Experience in large-scale data center migration.
- Experience implementing architectural governance and proactively managing issues and risks throughout the delivery lifecycle.
- Good familiarity with the disciplines of enterprise software development, such as configuration & release management, source code & version control, and operational considerations such as monitoring and instrumentation.
- Experience in consulting or service provider roles (internal or external).
- Experience using database technologies like Oracle and MySQL; an understanding of NoSQL is preferred.
- Experience in designing or implementing data warehouse solutions is highly preferred.
- Experience in automation/configuration management using Puppet, Chef, Ansible, SaltStack, BOSH, Terraform, or an equivalent.
- Experience with source code management tools such as GitHub, GitLab, Bitbucket, or equivalent.
- Experience with SQL and NoSQL databases such as MySQL.
- Solid understanding of networking and core Internet protocols such as TCP/IP, DNS, SMTP, HTTP, and routing in distributed networks.
- A working understanding of code and scripting languages such as PHP, Python, Perl, and/or Ruby (a minimal Python example follows this list).
- A working understanding of CI/CD tools such as Jenkins or equivalent.
- A working understanding of scheduling and orchestration with tools such as Kubernetes, Mesos, Swarm, or equivalent.
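As a small illustration of the scripting this role expects, the sketch below uses the Azure SDK for Python (azure-identity and azure-mgmt-compute assumed installed) to inventory VMs ahead of a migration; the subscription ID is a placeholder.

```python
# Minimal pre-migration inventory sketch with the Azure SDK for Python
# (azure-identity + azure-mgmt-compute assumed installed). The
# subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, subscription_id)

# Listing VMs with size and location is a common first step when
# scoping a lift-and-shift with a tool like Azure Migrate.
for vm in compute.virtual_machines.list_all():
    print(vm.name, vm.location, vm.hardware_profile.vm_size)
```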
Radical is a platform connecting data, medicine and people -- through machine learning, and usable, performant products. Software has never been the strong suit of the medical industry -- and we are changing that. We believe that the same sophistication and performance that powers our daily needs through millions of consumer applications -- be it your grocery, your food delivery or your movie tickets -- when applied to healthcare, has a massive potential to transform the industry, and positively impact lives of patients and doctors. Radical works with some of the largest hospitals and public health programmes in India, and has a growing footprint both inside the country and abroad.
As a DevOps Engineer at Radical, you will:
Work closely with all stakeholders in the healthcare ecosystem - patients, doctors, paramedics and administrators - to conceptualise and bring to life the ideal set of products that add value to their time
Work alongside Software Developers and ML Engineers to solve problems and assist in architecture design
Work on systems which have an extraordinary emphasis on capturing data that can help build better workflows, algorithms and tools
Work on high-performance systems that deal with several million transactions, multi-modal data, and large datasets, with close attention to detail
We’re looking for someone who has:
Familiarity and experience with writing working, well-documented, and well-tested scripts, Dockerfiles, and Puppet/Ansible/Chef/Terraform scripts
Proficiency with scripting languages like Python and Bash
Knowledge of systems deployment and maintenance, including setting up CI/CD, working alongside Software Developers, and monitoring logs, dashboards, etc. (a minimal container health sketch follows this list)
Experience integrating with a wide variety of external tools and services
Experience navigating AWS and leveraging appropriate services and technologies rather than DIY solutions (such as hosting an application directly on EC2 versus containerisation or Elastic Beanstalk)
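As an illustration of the scripting-plus-containers skills listed above, here is a minimal container health sketch using the Docker SDK for Python (`pip install docker`); it assumes a local Docker daemon and is not Radical's actual tooling.

```python
# Minimal container health sketch using the Docker SDK for Python.
# Assumes a local Docker daemon; reports any container that is not
# in the "running" state.
import docker

client = docker.from_env()

for container in client.containers.list(all=True):
    if container.status != "running":
        print(f"{container.name}: {container.status}")
```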
It’s not essential, but great if you have:
An established track record of deploying and maintaining systems.
Experience with microservices and decomposition of monolithic architectures
Proficiency in automated tests.
Proficiency with the linux ecosystem
Experience in deploying systems to production on cloud platforms such as AWS
The position is open now, and we are onboarding immediately.
Please write to us with an updated resume, and one thing you would like us to see as part of your application. This one thing can be anything that you think makes you stand apart among candidates.
Radical is based out of Delhi NCR, India, and we look forward to working with you!
We're looking for people who may not know all the answers, but are obsessive about finding them, and take pride in the code that they write. We are more interested in the ability to learn fast and think rigorously, and in people who aren't afraid to challenge assumptions and take large bets -- only to work hard and prove themselves correct. You're encouraged to apply even if your experience doesn't precisely match the job description. Join us.