
Position: DevOps Lead
Job Description
● Research, evangelize, and implement best practices and tools for GitOps, DevOps, continuous integration, build automation, deployment automation, configuration management, and infrastructure as code.
● Develop software solutions to support DevOps tooling, including investigating bug fixes, feature enhancements, and software/tool updates
● Participate in the full systems life cycle with solution design, development, implementation, and product support using Scrum and/or other Agile practices
● Evaluate, implement, and streamline DevOps practices.
● Design and drive the implementation of fully automated CI/CD pipelines.
● Design and create Cloud services and architecture for highly available and scalable environments.
● Lead the monitoring, debugging, and enhancement of pipelines for optimal operation and performance.
● Supervise, examine, and handle technical operations.
Qualifications
● 5 years of experience in managing application development, software delivery lifecycle, and/or infrastructure development and/or administration
● Experience with source code repository management tools, code merge and quality checks, continuous integration, and automated deployment & management using tools like Bitbucket, Git, Ansible, Terraform, Artifactory, ServiceNow, SonarQube, Selenium.
● Minimum of 4 years of experience with approaches and tooling for automated build, delivery, and release of software
● Experience and/or knowledge of CI/CD tools: Jenkins, Bitbucket Pipelines, GitLab CI, GoCD.
● Experience with Linux systems (CentOS, RHEL, Ubuntu, SELinux, etc.) and Linux administration.
● Minimum of 4 years of experience managing medium/large teams, including progress monitoring and reporting
● Experience and/or knowledge of Docker, Cloud, and Orchestration: GCP, AWS, Kubernetes.
● Experience and/or knowledge of system monitoring, logging, high availability, redundancy, autoscaling, and failover.
● Experience automating manual and/or repetitive processes.
● Experience and/or knowledge of networking and load balancing: Nginx, firewalls, IP networking

About Indventur Partner
Job Title : DevOps Engineer
Experience : 3+ Years
Location : Indiranagar, Bengaluru (Work From Office – 5 Days)
Employment Type : Full-Time
Work Timings : 11:00 AM to 7:00 PM IST
Notice Period : Immediate Joiners Preferred
Role Overview :
We are seeking a skilled DevOps Engineer with 3+ years of experience in building and managing scalable cloud-native infrastructure.
The ideal candidate will have strong expertise in Kubernetes and Helm, along with hands-on experience in deploying and maintaining production-grade systems on cloud platforms.
This role offers an opportunity to work in a high-growth startup environment, contributing to both existing systems and new infrastructure development.
Key Responsibilities :
- Design, deploy, and manage scalable infrastructure using Kubernetes.
- Build and maintain CI/CD pipelines for efficient and automated deployments.
- Manage and optimize cloud environments (preferably GCP).
- Implement Infrastructure as Code using Helm/Terraform.
- Monitor system performance and ensure high availability and reliability.
- Handle bug fixes, system improvements, and performance optimization.
- Collaborate with engineering teams to design scalable microservices architecture.
- Implement logging, monitoring, and alerting solutions.
- Ensure security best practices including IAM, secrets management, and network policies.
Mandatory Skills :
- Strong hands-on experience with Kubernetes.
- Expertise in Helm Charts.
- Experience with Google Cloud Platform (GCP).
- Hands-on experience with ArgoCD or similar CI/CD tools.
- Knowledge of CI/CD tools like Jenkins, GitHub Actions, GitLab CI.
- Experience in database hosting and scaling.
Nice to Have :
- Exposure to other cloud platforms (AWS/Azure).
- Experience with modern DevOps and automation tools.
- Ability to quickly learn and adapt to new technologies.
Team & Work Scope :
- No dedicated DevOps team currently – high ownership role.
- Work on both existing systems (maintenance & improvements) and new system builds (greenfield projects).
- Opportunity to shape DevOps practices and infrastructure from scratch.
Preferred Candidate Profile :
- 3+ years of relevant DevOps experience.
- Strong problem-solving and debugging skills.
- Experience working in fast-paced startup environments.
- Understanding of scalability, security, and performance optimization.
- Good communication and collaboration skills.
Hiring Process :
- Profile Screening
- GT Assessment
- Technical Interview – Round 1
- Technical Interview – Round 2
- Final Round (if required with US team)
Springer Capital is a cross-border asset management firm focused on real estate investment banking in China and the USA. We are offering a remote internship for individuals passionate about automation, cloud infrastructure, and CI/CD pipelines. Start and end dates are flexible, and applicants may be asked to complete a short technical quiz or assignment as part of the application process.
Responsibilities:
▪ Assist in building and maintaining CI/CD pipelines to automate development workflows
▪ Monitor and improve system performance, reliability, and scalability
▪ Manage cloud-based infrastructure (e.g., AWS, Azure, or GCP)
▪ Support containerization and orchestration using Docker and Kubernetes
▪ Implement infrastructure as code using tools like Terraform or CloudFormation
▪ Collaborate with software engineering and data teams to streamline deployments
▪ Troubleshoot system and deployment issues across development and production environments
Location: Bangalore, India
Experience: 3 Years
Company: Tradelab Technologies
About Tradelab Technologies:
Tradelab Technologies is a leading fintech solutions provider building high-performance trading platforms, brokerage infrastructure, and financial technology products. Our systems handle real-time market data, order management, and analytics for clients across the trading ecosystem.
Role Overview:
We are looking for a skilled DevOps Engineer to manage, optimize, and scale our trading infrastructure. The ideal candidate should have strong experience with CI/CD pipelines, cloud infrastructure, containerization, and system automation, with an emphasis on reliability and performance in production environments.
Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines for automated deployment and monitoring.
- Manage and scale cloud infrastructure (AWS, GCP, or Azure) for high-availability trading systems.
- Work closely with development and QA teams to ensure smooth integration and release processes.
- Automate provisioning, configuration, and monitoring using tools like Ansible, Terraform, or similar.
- Implement logging, alerting, and monitoring systems for proactive issue detection.
- Ensure system reliability, security, and performance in production environments.
- Manage version control and containerized environments (Git, Docker, Kubernetes).
- Troubleshoot infrastructure issues and optimize deployment performance.
Required Skills & Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or equivalent.
- Minimum 3 years of experience in DevOps, SRE, or Infrastructure Engineering roles.
- Strong hands-on experience with AWS / GCP / Azure.
- Proficiency in CI/CD tools like Jenkins, GitLab CI, or GitHub Actions.
- Expertise in Docker, Kubernetes, and container orchestration.
- Experience with infrastructure-as-code tools like Terraform, Ansible, or CloudFormation.
- Proficient with Linux administration, shell scripting, and Python or Go for automation.
- Knowledge of monitoring tools like Prometheus, Grafana, ELK Stack, or Datadog.
- Familiarity with networking, security, and load balancing concepts.
Nice-to-Have Skills:
- Experience working with trading or low-latency systems.
- Knowledge of message queues (Kafka, RabbitMQ).
- Exposure to microservices architecture and API management.
- Experience with incident management and disaster recovery planning.
Why Join Tradelab Technologies:
- Be part of a fast-paced fintech environment working on scalable trading infrastructure.
- Collaborate with talented teams solving real-world financial technology challenges.
- Competitive pay, flexible work culture, and opportunities for growth.
What does a successful Senior DevOps Engineer do at Fiserv?
This role focuses on contributing to and enhancing our DevOps environment within the Issuer Solutions group, where our cross-functional Scrum teams deliver solutions built on cutting-edge mobile technology and products. You will be expected to support the wider business unit, leading DevOps practices and initiatives.
What will you do:
• Build, manage, and deploy CI/CD pipelines.
• Work with Helm charts, Rundeck, and OpenShift.
• Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines.
• Implement various development, testing, and automation tools and IT infrastructure.
• Optimize and automate release/development cycles and processes.
• Be part of and help promote our DevOps culture.
• Identify and implement continuous improvements to the development practice
What you must have:
• 3+ years of experience in DevOps, with hands-on experience in the following:
- Writing automation scripts for deployments and housekeeping using shell scripts (Bash) and Ansible playbooks
- Building Docker images and running/managing Docker instances
- Building Jenkins pipelines using Groovy scripts
- Working knowledge of Kubernetes, including application deployments, managing application configurations, and persistent volumes
• A good understanding of infrastructure as code
• Ability to write and update documentation
• A logical, process-oriented approach to problems and troubleshooting
• Ability to collaborate with multiple development teams
What we prefer you to have:
• 8+ years of development experience
• Jenkins administration experience
• Hands-on experience in building and deploying helm charts
Process Skills:
• Should have worked on Agile projects
We are looking for a DevOps Engineer to manage the interchange of data between the server and the users. Your primary responsibility will be the development of all server-side logic, the definition and maintenance of the central database, and ensuring high performance and responsiveness to requests from the frontend. You will also be responsible for integrating the front-end elements built by your co-workers into the application; therefore, a basic understanding of frontend technologies is necessary as well.
What we are looking for
- Must have strong knowledge of Kubernetes and Helm 3
- Should have previous experience Dockerizing applications.
- Should be able to automate manual tasks using Shell or Python
- Should have good working knowledge of the AWS and GCP clouds
- Should have previous experience working with Bitbucket, GitHub, or any other VCS.
- Must be able to write Jenkins pipelines and have working knowledge of GitOps and ArgoCD.
- Hands-on experience in proactive monitoring using tools like New Relic, Prometheus, Grafana, Fluent Bit, etc.
- Should have a good understanding of the ELK Stack.
- Exposure to Jira, Confluence, and sprints.
What you will do:
- Mentor junior DevOps engineers and raise the team’s bar
- Primary owner of tech best practices, tech processes, DevOps initiatives, and timelines
- Oversight of all server environments, from Dev through Production.
- Responsible for automation and configuration management
- Provides stable environments for quality delivery
- Assist with day-to-day issue management.
- Take lead in containerising microservices
- Develop deployment strategies that allow DevOps engineers to successfully deploy code in any environment.
- Enable the automation of CI/CD
- Implement dashboards to monitor various systems and applications
- 1-3 years of experience in DevOps
- Experience in setting up front-end best practices
- Working in high-growth startups
- Ownership and a proactive attitude
- Mentorship & upskilling mindset
What you’ll get:
- Health benefits
- Innovation-driven culture
- Smart and fun team to work with
- Friends for life
About Us
At Digilytics™, we build and deliver easy to use AI products to the secured lending and consumer industry sectors. In an ever-crowded world of clever technology solutions looking for a problem to solve, our solutions start with a keen understanding of what creates and what destroys value in our clients’ business.
Founded by Arindom Basu (Founding member of Infosys Consulting), the leadership of Digilytics™ is deeply rooted in leveraging disruptive technology to drive profitable business growth. With over 50 years of combined experience in technology-enabled change, the Digilytics™ leadership is focused on building a values-first firm that will stand the test of time.
We are currently focused on developing a product, Revel FS, to revolutionise loan origination for mortgages and secured lending. We are also developing a second product, Revel CI, focused on improving trade (secondary) sales to consumer industry clients like auto and FMCG players.
The leadership strongly believes in the ethos of enabling intelligence across the organization. Digilytics AI is headquartered in London, with a presence across India.
Website: www.digilytics.ai
- Know about our products:
- Digilytics RevEL: https://www.digilytics.ai/RevEL/Digilytics
- Digilytics RevUP: https://www.digilytics.ai/RevUP/
- What's it like working at Digilytics: https://www.digilytics.ai/about-us.html
- Digilytics featured in Forbes: https://bit.ly/3zDQc4z
Responsibilities
- Experience with Azure services (virtual machines, containers, databases, security/firewalls, Function Apps, etc.)
- Hands-on experience with Kubernetes/Docker/Helm.
- Deployment of Java builds; administration/configuration of Nginx/reverse proxy, load balancers, MS SQL, GitHub, and disaster recovery.
- Linux: must have basic knowledge (user creation/deletion, ACLs, LVM, etc.)
- CI/CD: Azure DevOps or any other automation tool like Terraform, Jenkins, etc.
- Experience with SharePoint and O365 administration
- Azure/Kubernetes certification will be preferred.
- Microsoft Partnership experience is good to have.
- Excellent understanding of required technologies
- Good interpersonal skills and the ability to communicate ideas clearly at all levels
- Ability to work in unfamiliar business areas and to use your skills to create solutions
- Ability to both work in and lead a team and to deliver and accept peer review
- Flexible approach to working environment and hours to meet the needs of the business and clients
Must Haves:
- Hands-on experience with Kubernetes/Docker/Helm.
- Experience with Azure/AWS or any other cloud provider.
- Knowledge of Linux and CI/CD tools.
Experience & Education:
- A start-up mindset, with proven experience working in both smaller and larger organizations and multicultural exposure
- 4-9 years of experience working closely with the relevant technologies and developing world-class software and solutions
- Domain and industry experience serving customers in one or more of these industries: Financial Services, Professional Services, or other Retail Consumer Services
- A bachelor's degree, or equivalent, in Software Engineering or Computer Science
Roles and Responsibilities
● Managing the availability, performance, and capacity of infrastructure and applications.
● Building and implementing observability for application health/performance/capacity.
● Optimizing On-call rotations and processes.
● Documenting “tribal” knowledge.
● Managing Infra-platforms like
- Mesos/Kubernetes
- CICD
- Observability (Prometheus/New Relic/ELK)
- Cloud platforms (AWS/Azure)
- Databases
- Data Platforms Infrastructure
● Helping onboard new services through the production readiness review process.
● Providing reports on service SLOs, error budgets, alerts, and operational overhead.
● Working with Dev and Product teams to define SLO/Error Budgets/Alerts.
● Working with the Dev team to have an in-depth understanding of the application architecture and its bottlenecks.
● Identifying observability gaps in product services and infrastructure, and working with stakeholders to fix them.
● Managing outages, doing detailed RCAs with developers, and identifying ways to avoid recurrence.
● Managing/Automating upgrades of the infrastructure services.
● Automating away toil.
Experience & Skills
● 3+ years of experience as an SRE/DevOps/Infrastructure Engineer on large-scale microservices and infrastructure.
● A collaborative spirit with the ability to work across disciplines to influence, learn, and deliver.
● A deep understanding of computer science, software development, and networking principles.
● Demonstrated experience with languages such as Python, Java, Go, etc.
● Extensive experience with Linux administration and a good understanding of the various Linux kernel subsystems (memory, storage, network, etc.).
● Extensive experience with DNS, TCP/IP, UDP, gRPC, routing, and load balancing.
● Expertise in GitOps and Infrastructure as Code tools such as Terraform, and configuration management tools such as Chef, Puppet, SaltStack, Ansible.
● Expertise of Amazon Web Services (AWS) and/or other relevant Cloud Infrastructure solutions like Microsoft Azure or Google Cloud.
● Experience in building CI/CD solutions with tools such as Jenkins, GitLab, Spinnaker, Argo etc.
● Experience in managing and deploying containerized environments using Docker and Mesos/Kubernetes is a plus.
● Experience with multiple datastores is a plus (MySQL, PostgreSQL, Aerospike, Couchbase, Scylla, Cassandra, Elasticsearch).
● Experience with data platform tech stacks like Hadoop, Hive, Presto, etc. is a plus
At Karza Technologies, we take pride in building one of the most comprehensive digital onboarding & due-diligence platforms, profiling millions of entities and trillions of associations amongst them using data collated from more than 700 publicly available government sources. Operating primarily in the B2B Fintech Enterprise space, we are headquartered in Lower Parel, Mumbai, with a 100+ strong workforce. We are truly furthering the cause of Digital India by providing the entire BFSI ecosystem with tech products and services that aid in onboarding customers, automating processes, and mitigating risks seamlessly, in real time, and at a fraction of the current cost.
A few recognitions:
- Recognized among the Top 25 startups in India to work with in 2019 by LinkedIn
- Winner of HDFC Bank's Digital Innovation Summit 2020
- Super Winners (Won every category) at Tecnoviti 2020 by Banking Frontiers
- Winner of Amazon AI Award 2019 for Fintech
- Winner of FinTech Spot Pitches at Fintegrate Zone 2018 held at BSE
- Winner of FinShare 2018 challenge held by ShareKhan
- Only startup in the Yes Bank Global Fintech Accelerator to win the account during the cohort
- 2nd place Citi India FinTech Challenge 2018 by Citibank
- Top 3 in Viacom18's Startup Engagement Programme VStEP
What your average day would look like:
- Deploy and maintain mission-critical information extraction, analysis, and management systems
- Manage low cost, scalable streaming data pipelines
- Provide direct and responsive support for urgent production issues
- Contribute ideas towards secure and reliable Cloud architecture
- Use open source technologies and tools to accomplish specific use cases encountered within the project
- Use coding languages or scripting methodologies to solve automation problems
- Collaborate with others on the project to brainstorm about the best way to tackle a complex infrastructure, security, or deployment problem
- Identify processes and practices to streamline development & deployment, minimizing downtime and turnaround time
What you need to work with us:
- Proficiency in at least one of the general-purpose programming languages like Python, Java, etc.
- Experience in managing IaaS and PaaS components on popular public cloud service providers like AWS, Azure, GCP, etc.
- Proficiency in Unix operating systems and comfort with networking concepts
- Experience with developing/deploying a scalable system
- Experience with distributed databases & message queues (like Cassandra, Elasticsearch, MongoDB, Kafka, etc.)
- Experience in managing Hadoop clusters
- Understanding of containers, having managed them in production using container orchestration services.
- Solid understanding of data structures and algorithms.
- Applied exposure to continuous delivery pipelines (CI/CD).
- Keen interest and proven track record in automation and cost optimization.
Experience:
- 1-4 years of relevant experience
- BE in Computer Science / Information Technology











