
Company Description
Rootflo is an innovative AI-driven company transforming finance back offices through advanced revenue intelligence and workflow automation solutions. Specializing in empowering financial institutions, Rootflo partners with leading banks, lending platforms, and insurance companies to revolutionize their operations. Leveraging the power of artificial intelligence, Rootflo provides efficient and intelligent systems that drive business outcomes. The company is dedicated to delivering impactful advancements for modern financial processes.
About the Role
We are looking for a motivated and technically sound Junior Cloud / Infrastructure Engineer with approximately 1 year of hands-on experience in designing, deploying, and managing cloud infrastructure on AWS, GCP, or Azure using Kubernetes. The candidate will work closely with senior engineers to support and maintain scalable, secure, and highly available cloud environments while gaining exposure across the full cloud stack.
Key Responsibilities
Cloud Infrastructure
- Assist in provisioning and managing cloud resources (VMs, storage, networking, databases) on AWS / GCP / Azure, including Kubernetes-based workloads
- Support infrastructure-as-code (IaC) implementation using Terraform
- Monitor cloud resource usage, costs, and performance; raise alerts for anomalies
- Set up log aggregation, monitoring, and alerting
- Implement cost optimizations and efficient resource utilization
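The monitoring bullets above can be illustrated with a small sketch: flag any day whose spend jumps well above its trailing average. This is a hypothetical Python example (the window and threshold are arbitrary assumptions), not a specific cloud billing API:

```python
def cost_anomalies(daily_costs, window=7, threshold=1.5):
    """Return indices of days whose spend exceeds `threshold` times
    the trailing `window`-day average -- a crude anomaly alert."""
    alerts = []
    for i in range(window, len(daily_costs)):
        baseline = sum(daily_costs[i - window:i]) / window
        if daily_costs[i] > threshold * baseline:
            alerts.append(i)
    return alerts

# A steady $100/day bill with one $300 spike on day 9:
spend = [100.0] * 9 + [300.0] + [100.0] * 4
print(cost_anomalies(spend))  # the spike at index 9 is flagged
```

A real setup would feed this from the provider's billing export and route alerts to Slack or PagerDuty rather than printing them.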
Networking & Security
- Configure VPCs, subnets, security groups, IAM roles, and access policies
- Assist in implementing firewalls, SSL/TLS certificates, and VPN connectivity
- Support compliance and security best practices (CIS benchmarks, least privilege)
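As a concrete flavor of the least-privilege bullet, the sketch below lints IAM-style policy documents for wildcard grants. It is an illustrative Python example, not an AWS tool; the policy shape mirrors the common JSON layout:

```python
def overly_permissive(policy):
    """Flag statements that grant wildcard actions or resources --
    a minimal least-privilege lint for IAM-style policy documents."""
    findings = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or any(a.endswith(":*") for a in actions):
            findings.append((i, "wildcard action"))
        if "*" in resources:
            findings.append((i, "wildcard resource"))
    return findings

policy = {"Statement": [
    {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::logs/*"},
    {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
]}
print(overly_permissive(policy))  # → [(1, 'wildcard action'), (1, 'wildcard resource')]
```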
CI/CD & DevOps Support
- Work with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI, or similar)
- Assist in containerization and orchestration using Docker and Kubernetes (EKS / GKE / AKS)
- Support deployment automation and version control workflows
Monitoring & Incident Response
- Use monitoring tools such as CloudWatch, Stackdriver, Azure Monitor, Grafana, or Datadog
- Respond to infrastructure alerts and assist in root cause analysis (RCA)
- Participate in on-call rotations and incident triage as required
Collaboration
- Work closely with development, QA, and product teams to support release cycles
- Maintain documentation for infrastructure, runbooks, and architecture diagrams
- Participate in code reviews for infrastructure scripts and configurations
Required Skills & Qualifications
Cloud Platforms (Any One Primary)
- AWS: EC2, S3, RDS, Lambda, VPC, IAM, ECS/EKS, CloudWatch, Route 53
- GCP: Compute Engine, GCS, Cloud SQL, GKE, Cloud Functions, VPC, IAM, Stackdriver
- Azure: VMs, Azure Storage, Azure SQL, AKS, Azure Functions, VNet, AAD, Azure Monitor
Infrastructure & Automation
- Hands-on with Terraform and/or CloudFormation / ARM Templates / GCP Deployment Manager
- Proficiency in Linux/Unix system administration
- Scripting skills in Bash, Python, or PowerShell
Containers & DevOps
- Working knowledge of Docker (build, push, run, Compose)
- Basic understanding of Kubernetes concepts (pods, deployments, services, ingress)
- Familiarity with at least one CI/CD tool
Networking Fundamentals
- Understanding of DNS, HTTP/HTTPS, TCP/IP, load balancers, CDNs
- Experience with cloud-native networking (VPCs, peering, NAT gateways)
Soft Skills
- Strong problem-solving ability and attention to detail
- Good written and verbal communication
- Eagerness to learn and adapt in a fast-paced environment
Good to Have
- Cloud certification: AWS Solutions Architect Associate / GCP ACE / Azure AZ-900 or AZ-104
- Experience with GitOps tools like ArgoCD or Flux
- Exposure to service mesh (Istio, Linkerd)
- Knowledge of Ansible or Chef/Puppet for configuration management
- Familiarity with observability tools: Prometheus, Grafana, ELK Stack
- Experience with cost optimization and FinOps practices

Springer Capital is a cross-border asset management firm focused on real estate investment banking in China and the USA. We are offering a remote internship for individuals passionate about automation, cloud infrastructure, and CI/CD pipelines. Start and end dates are flexible, and applicants may be asked to complete a short technical quiz or assignment as part of the application process.
Responsibilities:
▪ Assist in building and maintaining CI/CD pipelines to automate development workflows
▪ Monitor and improve system performance, reliability, and scalability
▪ Manage cloud-based infrastructure (e.g., AWS, Azure, or GCP)
▪ Support containerization and orchestration using Docker and Kubernetes
▪ Implement infrastructure as code using tools like Terraform or CloudFormation
▪ Collaborate with software engineering and data teams to streamline deployments
▪ Troubleshoot system and deployment issues across development and production environments
Location: Bangalore
Experience: 2–5 years
Type: Full-time | On-site
Open Roles: 1
Start: Immediate
Why this role exists
Most engineering teams choose between speed and stability.
We need both.
Today:
- Deployments carry risk
- Cloud costs are higher than they should be
- Compliance is reactive, not built-in
This role exists to build a platform where:
- We can deploy fast without breaking production
- We can scale without runaway cost
- We can pass enterprise InfoSec reviews without firefighting
What you’ll do
You will not just manage infrastructure.
You will build the platform that engineering runs on.
1. Drive cloud cost efficiency
- Reduce Azure compute spend by 40%
- Implement:
  - Reserved Instances / savings plans
  - Right-sizing of workloads
  - Scheduling for non-critical workloads
- Continuously monitor and optimize cost vs performance
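A back-of-the-envelope sketch of how the levers above combine toward the 40% target. The discount rate and workload split are assumptions for illustration, not published pricing:

```python
def blended_savings(on_demand_monthly, reserved_fraction, ri_discount):
    """Estimate the new monthly bill and % saved when a fraction of a
    steady-state workload moves to reserved capacity at a given discount.
    Discount rates here are assumptions, not real pricing."""
    reserved = on_demand_monthly * reserved_fraction * (1 - ri_discount)
    on_demand = on_demand_monthly * (1 - reserved_fraction)
    new_bill = reserved + on_demand
    saved_pct = 100 * (1 - new_bill / on_demand_monthly)
    return new_bill, saved_pct

# $10,000/month, 70% of it steady enough to reserve at an assumed 40% discount:
bill, pct = blended_savings(10_000, 0.70, 0.40)
print(f"${bill:,.0f}/month, {pct:.0f}% saved")
```

Under these assumptions, reservations alone save 28%, which is why right-sizing and scheduling of non-critical workloads are needed to close the gap to 40%.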
2. Build zero-downtime deployment systems
- Ship a deployment pipeline that supports:
  - 5+ production deployments per week
  - Zero customer-visible downtime
- Implement:
  - Blue-green / canary deployments
  - Automated health checks
  - Safe rollout strategies
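The canary strategy above can be sketched as a loop: shift traffic in steps and roll back on the first failed health check. `health_check` is a hypothetical stand-in for real probes or metrics queries:

```python
def canary_rollout(health_check, steps=(5, 25, 50, 100)):
    """Shift traffic to the new version in increasing percentages,
    verifying health at each step; roll back on the first failure.
    `health_check(pct)` stands in for real probes/error-rate checks."""
    for pct in steps:
        if not health_check(pct):
            return ("rolled_back", pct)   # traffic returns to the old version
    return ("promoted", 100)

# New version is healthy until it carries half the traffic:
print(canary_rollout(lambda pct: pct < 50))  # → ('rolled_back', 50)
print(canary_rollout(lambda pct: True))      # → ('promoted', 100)
```

In practice this loop lives inside a deployment controller (e.g. a pipeline stage or a progressive-delivery tool), with the health check backed by real latency and error-rate metrics.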
3. Enable fast and safe releases
- Reduce time-to-launch significantly
- Ensure:
  - High reliability in every release
  - Ability to roll back instantly if something breaks
- Create systems where:
  - Scaling up is seamless when things go right
  - Failures are contained when they don't
4. Build disaster recovery and compliance readiness
- Create DR/BCP systems that pass enterprise audits from:
  - HDFC Life, SBI Life
- Ensure:
  - Backup and recovery processes are defined and tested
  - Failover strategies are documented and executable
- Build compliance as part of the system, not an afterthought
5. Embed security into the pipeline
- Integrate:
  - SAST (Static Application Security Testing)
  - DAST (Dynamic Application Security Testing)
  - SCA (Software Composition Analysis)
  - Secret scanning
  - Container scanning
  - IaC scanning
- Ensure vulnerabilities are caught before deployment
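A toy illustration of the secret-scanning stage: a few regex rules run over text before it ships. Real scanners (gitleaks, trufflehog, and the like) carry far larger rule sets plus entropy heuristics; these patterns are simplified examples:

```python
import re

# Illustrative patterns only -- real secret scanners ship much broader rules.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
}

def scan_for_secrets(text):
    """Return the names of secret patterns found in a text blob."""
    return sorted(name for name, pat in SECRET_PATTERNS.items() if pat.search(text))

snippet = 'db_password = "hunter2"\nAWS_KEY = "AKIAIOSFODNN7EXAMPLE"\n'
print(scan_for_secrets(snippet))  # → ['aws_access_key_id', 'generic_password']
```

A pipeline would run a check like this (or, better, a dedicated scanner) as a pre-merge gate and fail the build on any hit.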
6. Enforce policy-as-code
- Implement:
  - OPA / Gatekeeper
  - Azure Policy
- Prevent non-compliant infrastructure from being deployed
- Ensure consistency across environments
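Policy-as-code engines such as OPA/Gatekeeper or Azure Policy evaluate declarative rules against resource manifests before anything is deployed. The Python sketch below mimics one such rule (a required-tags check) purely for illustration; the tag set is an assumed convention, and a real setup would express this in Rego or an Azure Policy definition:

```python
# Assumed org convention: every resource must carry these tags.
REQUIRED_TAGS = {"owner", "environment", "cost-center"}

def policy_violations(resources):
    """Return (name, missing_tags) for each resource failing the tag rule."""
    violations = []
    for res in resources:
        missing = REQUIRED_TAGS - set(res.get("tags", {}))
        if missing:
            violations.append((res["name"], sorted(missing)))
    return violations

resources = [
    {"name": "vm-web-01", "tags": {"owner": "platform", "environment": "prod",
                                   "cost-center": "eng"}},
    {"name": "vm-batch-02", "tags": {"owner": "data"}},
]
print(policy_violations(resources))  # → [('vm-batch-02', ['cost-center', 'environment'])]
```

The point of running such rules in the pipeline is that `vm-batch-02` is rejected before it exists, rather than flagged in a quarterly audit.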
7. Build a scalable platform layer
- Create systems that:
  - Support increasing deployment frequency
  - Maintain reliability under scale
- Work closely with backend and SRE teams to:
  - Improve system stability
  - Reduce operational overhead
What success looks like
- Cloud costs reduce by ≥ 40%
- Deployments are:
  - Frequent
  - Safe
  - Invisible to customers
- Rollbacks are instant and reliable
- DR/BCP passes enterprise audits on the first attempt
- Security is embedded in the pipeline, not patched later
- Engineering teams ship faster with confidence
Who you are
- You have 2–5 years of experience in DevOps / Platform Engineering
- You have worked with:
  - Cloud platforms (Azure preferred)
  - CI/CD systems
  - Infrastructure as Code
- You think in:
  - Systems
  - Trade-offs (speed vs reliability vs cost)
- You are comfortable owning:
  - Production infrastructure
  - Deployment systems
What will make you stand out
- Experience with:
  - High-frequency deployment systems
  - Cost optimization at scale
  - Security-first pipelines
- Strong understanding of:
  - Kubernetes / container orchestration
  - Monitoring and observability
  - Distributed system reliability
- Experience passing enterprise security/compliance audits
Why join
- You will define how engineering ships and scales
- Your work directly impacts:
  - Reliability
  - Cost
  - Deployment velocity
- You will build a platform that moves from:
  - Fragile → predictable and scalable
What this role is not
- Not manual infra management
- Not reactive firefighting
- Not limited to CI/CD maintenance
What this role is
- A builder of deployment systems
- A driver of cost efficiency
- A guardian of reliability and compliance
One question to self-evaluate
Can you build a platform where we deploy faster, spend less, and never break production?
Job Summary:
We are seeking a highly skilled and proactive DevOps Engineer with 4+ years of experience to join our dynamic team. This role requires strong technical expertise across cloud infrastructure, CI/CD pipelines, container orchestration, and infrastructure as code (IaC). The ideal candidate should also have direct client-facing experience and a proactive approach to managing both internal and external stakeholders.
Key Responsibilities:
- Collaborate with cross-functional teams and external clients to understand infrastructure requirements and implement DevOps best practices.
- Design, build, and maintain scalable cloud infrastructure on AWS (EC2, S3, RDS, ECS, etc.).
- Develop and manage infrastructure using Terraform or CloudFormation.
- Manage and orchestrate containers using Docker and Kubernetes (EKS).
- Implement and maintain CI/CD pipelines using Jenkins or GitHub Actions.
- Write robust automation scripts using Python and Shell scripting.
- Monitor system performance and availability, and ensure high uptime and reliability.
- Execute and optimize SQL queries for MSSQL and PostgreSQL databases.
- Maintain clear documentation and provide technical support to stakeholders and clients.
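The SQL-optimization bullet can be illustrated with stdlib `sqlite3` standing in for MSSQL/PostgreSQL: adding an index on a filtered column lets the planner avoid a full-table scan. Table and index names here are invented for the example:

```python
import sqlite3

# In-memory database as a stand-in for MSSQL/PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deployments (id INTEGER PRIMARY KEY, service TEXT, status TEXT)")
conn.executemany("INSERT INTO deployments (service, status) VALUES (?, ?)",
                 [("api", "ok"), ("api", "failed"), ("web", "ok")])
# The index is the "optimization": it covers the column the query filters on.
conn.execute("CREATE INDEX idx_status ON deployments (status)")

plan = conn.execute("EXPLAIN QUERY PLAN "
                    "SELECT service FROM deployments WHERE status = 'failed'").fetchall()
print(plan)  # the plan should mention idx_status rather than a full scan

rows = conn.execute("SELECT service FROM deployments WHERE status = 'failed'").fetchall()
print(rows)  # → [('api',)]
```

On MSSQL or PostgreSQL the same workflow uses their own plan inspectors (`EXPLAIN` / execution plans), but the habit is identical: check the plan before and after adding an index.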
Required Skills:
- Minimum 4+ years of experience in a DevOps or related role.
- Proven experience in client-facing engagements and communication.
- Strong knowledge of AWS services – EC2, S3, RDS, ECS, etc.
- Proficiency in Infrastructure as Code using Terraform or CloudFormation.
- Hands-on experience with Docker and Kubernetes (EKS).
- Strong experience in setting up and maintaining CI/CD pipelines with Jenkins or GitHub Actions.
- Solid understanding of SQL and working experience with MSSQL and PostgreSQL.
- Proficient in Python and Shell scripting.
Preferred Qualifications:
- AWS Certifications (e.g., AWS Certified DevOps Engineer) are a plus.
- Experience working in Agile/Scrum environments.
- Strong problem-solving and analytical skills.
Job Overview:
You will work with engineering and development teams to integrate and develop cloud solutions and virtualized deployments of a software-as-a-service product. This requires understanding the software system architecture and its function, as well as performance and security requirements. The DevOps Engineer is also expected to have expertise in available cloud solutions and services, administration of virtual machine clusters, performance tuning and configuration of cloud computing resources, configuration of security, and scripting and automation of monitoring functions. This position requires the deployment and management of multiple virtual clusters and working with compliance organizations to support security audits. The design and selection of cloud computing solutions that are reliable, robust, extensible, and easy to migrate are also important.
Experience:
- Experience working on billing and budgets for a GCP project - MUST
- Experience working on optimizations on GCP based on vendor recommendations - NICE TO HAVE
- Experience in implementing the recommendations on GCP
- Architect Certifications on GCP - MUST
- Excellent communication skills (both verbal & written) - MUST
- Excellent documentation skills for processes, steps, and instructions - MUST
- At least 2 years of experience on GCP.
Basic Qualifications:
- Bachelor’s/Master’s Degree in Engineering OR Equivalent.
- Extensive scripting or programming experience (Shell Script, Python).
- Extensive experience working with CI/CD (e.g. Jenkins).
- Extensive experience working with GCP, Azure, or Cloud Foundry.
- Experience working with databases (PostgreSQL, Elasticsearch).
- Must have a minimum of 2 years of experience on GCP, along with GCP certification.
Benefits :
- Competitive salary.
- Work from anywhere.
- Learning and gaining experience rapidly.
- Reimbursement for basic working set up at home.
- Insurance (including top-up insurance for COVID).
Location :
Remote - work from anywhere.
Ideal joining preferences:
Immediate or 15 days
● Knowledge of EC2, RDS, and S3
● Good command of the Linux environment
● Experience with tools such as Docker, Kubernetes, Redis, NodeJS, and Nginx; server configuration and deployment; Kafka, Elasticsearch, Ansible, Terraform, etc.
● Bonus: AWS certification is a plus
● Bonus: Basic understanding of database queries for relational databases such as MySQL
● Bonus: Experience with CI servers such as Jenkins, Travis, or similar
● Bonus: Demonstrated programming capability in a high-level programming language such as Python, Go, or similar
● Develop, maintain, and administer tools that automate operational activities and improve engineering productivity
● Automate continuous delivery and on-demand capacity management solutions
● Develop configuration and infrastructure solutions for internal deployments
● Troubleshoot, diagnose, and fix software issues
● Update, track, and resolve technical issues
● Suggest architecture improvements and recommend process improvements
● Evaluate new technology options and vendor products; ensure critical system security through the use of best-in-class security solutions
● Technical experience in a similar role supporting large-scale production distributed systems
● Must understand overall system architecture, improve design, and implement new processes
About us:
HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.
We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.
To know more, visit https://www.happyfox.com/
Responsibilities:
- Build and scale production infrastructure in AWS for the HappyFox platform and its products.
- Research, build, and implement systems, services, and tooling to improve the uptime, reliability, and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
- Manage and patch servers running Unix-based operating systems like Ubuntu Linux.
- Write automation scripts and build infrastructure tools using Python/Ruby/Bash/Golang.
- Implement consistent observability, deployment and IaC setups
- Patch production systems to fix security/performance issues
- Actively respond to escalations/incidents in the production environment from customers or the support team
- Mentor other Infrastructure engineers, review their work and continuously ship improvements to production infrastructure.
- Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
- Participate in infrastructure security audits
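The SLO/SLA language above comes down to simple arithmetic: an availability target fixes the downtime budget for the window. A minimal sketch:

```python
def error_budget_minutes(slo_pct, days=30):
    """Downtime allowed per `days`-day window at a given availability SLO."""
    total_min = days * 24 * 60
    return total_min * (1 - slo_pct / 100)

# A 99.9% monthly SLO leaves roughly 43 minutes of downtime budget:
print(round(error_budget_minutes(99.9), 1))  # → 43.2
```

Teams typically track how much of this budget each incident consumes and slow down risky releases once the budget runs low.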
Requirements:
- At least 5 years of experience in handling/building Production environments in AWS.
- At least 2 years of programming experience in building API/backend services for customer-facing applications in production.
- Demonstrable knowledge of TCP/IP, HTTP and DNS fundamentals.
- Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
- Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
- Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
- Proficient in writing automation scripts using any scripting language such as Python, Ruby, Bash etc.,
- Experience in setting up and managing test/staging environments, and CI/CD pipelines.
- Experience in IaC tools such as Terraform or AWS CDK
- Passion for making systems reliable, maintainable, scalable and secure.
- Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
- Bonus points – if you have experience with Nginx, Postgres, Redis, and Mongo systems in production.
Numerator is looking for an experienced, talented and quick-thinking DevOps Manager to join our team and work with the Global DevOps groups to keep infrastructure up to date and continuously advancing. This is a unique opportunity where you will get the chance to work on the infrastructure of both established and greenfield products. Our technology harnesses consumer-related data in many ways including gamified mobile apps, sophisticated web crawling and enhanced Deep Learning algorithms to deliver an unmatched view of the consumer shopping experience. As a member of the Numerator DevOps Engineering team, you will make an immediate impact as you help build out and expand our technology platforms from on-premise to the cloud across a wide range of software ecosystems. Many of your daily tasks and engagement with application teams will help shape how new projects are delivered at scale to meet our clients' demands. This role requires a balance between hands-on infrastructure-as-code deployments with application teams as well as working with the Global DevOps Team to roll out new initiatives.
What you will get to do
Requirements
Nice to have
DevOps Engineer
Notice Period: 45 days / Immediate Joining
Banyan Data Services (BDS) is a US-based infrastructure services company, headquartered in San Jose, California, USA. It provides full-stack managed services to support business applications and data infrastructure, offering data solutions and services on bare metal, on-prem, and all cloud platforms. Our engagement service is built on DevOps standard practice and the SRE model.
We are looking for a DevOps Engineer to help us build functional systems that improve the customer experience. We offer you an opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be a part of the cutting-edge solutions and services we offer, which address next-gen data evolution challenges, and who are willing to use their experience in areas directly related to infrastructure services, software-as-a-service, and cloud services to create a niche in the market.
Key Qualifications
· 4+ years of experience as a DevOps Engineer with monitoring, troubleshooting, and diagnosing infrastructure systems.
· Experience in implementing continuous integration and deployment pipelines using Jenkins, JIRA, JFrog, etc.
· Strong experience in Linux/Unix administration.
· Experience with automation/configuration management using Puppet, Chef, Ansible, Terraform, or other similar tools.
· Expertise in multiple coding and scripting languages, including Shell, Python, and Perl.
· Hands-on exposure to modern IT infrastructure (e.g. Docker Swarm/Mesos/Kubernetes/OpenStack).
· Exposure to any relational database technology (MySQL/Postgres/Oracle) or any NoSQL database.
· Experience with open-source tools for logging, monitoring, search engines, caching, etc.
· Professional certificates in AWS or any other cloud are preferable.
· Excellent problem-solving and troubleshooting skills.
· Must have good written and verbal communication skills.
Key Responsibilities
· Ambitious individuals who can work under their own direction towards agreed targets/goals.
· Must be flexible with office timings to accommodate multi-national client time zones.
· Will be involved in solution design from the conceptual stages through the development cycle and deployments.
· Be involved in development operations and support internal teams.
· Improve infrastructure uptime, performance, resilience, and reliability through automation.
· Willing to learn new technologies and work on research-oriented projects.
· Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
· Scope and deliver solutions, with the ability to design solutions independently based on high-level architecture.
· Independent thinking and the ability to work in a fast-paced environment with creativity and brainstorming.
www.banyandata.com
Responsibilities
- Building and maintenance of resilient and scalable production infrastructure
- Improvement of monitoring systems
- Creation and support of development automation processes (CI/CD)
- Participation in infrastructure development
- Detection of problems in architecture and proposing solutions for them
- Creation of tasks for system improvements in scalability, performance, and monitoring
- Analysis of product requirements from a DevOps perspective
- Managing a team of DevOps engineers, control of task deliveries
- Incident analysis and fixing
Technology stack
Linux, Bash, Salt/Ansible, LXC, libvirt, IPsec, VXLAN, Open vSwitch, OpenVPN, OSPF, BIRD, Cisco NX-OS, Multicast, PIM, LVM, software RAID, LUKS, PostgreSQL, nginx, haproxy, Prometheus, Grafana, Zabbix, GitLab, Capistrano
Skills and Experience
- Understanding of distributed systems principles
- Understanding of principles for building resilient network infrastructure
- Experience with Ubuntu Linux administration (other Debian-like distributions are a plus)
- Strong knowledge of Bash
- Experience working with LXC containers
- Understanding of and experience with the infrastructure-as-code approach
- Experience developing idempotent Ansible roles
- Experience with relational databases (PostgreSQL), ability to create simple SQL queries
- Experience with git
- Experience with monitoring and metrics collection systems (Prometheus, Grafana, Zabbix)
- Understanding of dynamic routing (OSPF)
Preferred experience
- Experience working with high-load, zero-downtime environments
- Experience coding in Python
- Experience working with IPsec, VXLAN, Open vSwitch
- Knowledge and experience of working with Cisco network equipment
- Experience working with Cisco NX-OS
- Knowledge of the principles of the multicast protocols IGMP and PIM
- Experience setting up multicast on Cisco equipment
- Experience working with Solarflare Onload
- Experience administering Atlassian products










