REVOS is a smart micro-mobility platform that works with enterprises across the automotive shared mobility value chain to enable and accelerate their smart vehicle journeys. Founded in 2017, it aims to empower all 2- and 3-wheeler vehicles through AI-integrated IoT solutions that make them smart, safe, and connected. We are backed by investors like USV and Prime Venture.
Duties and Responsibilities:
- Automating various tasks in cloud operations, deployment, monitoring, and performance optimization for big data stack.
- Build, release, and configuration management of production systems.
- System troubleshooting and problem-solving across platform and application domains.
- Suggesting architecture improvements, recommending process improvements.
- Evaluate new technology options and vendor products.
- Function well in a fast-paced, rapidly-changing environment
- Communicate effectively with people at all levels of the organization
Qualifications and Required Skills:
- Overall 3+ years of experience in various software engineering roles.
- 3+ years of experience in building applications and tools in any tech stack, preferably deployed on cloud
- The most recent 3 years of experience must be in serverless/cloud-native development on AWS (preferred) or Azure
- Expertise in at least one programming language (Node.js or Python preferred)
- Must have hands-on experience in using AWS/Azure - SDK/APIs.
- Must have experience in deploying, releasing, and managing production systems
- MCA or a degree in engineering in Computer Science, IT, or Electronics stream

Key Qualifications :
- At least 2 years of hands-on experience with cloud infrastructure on AWS or GCP
- Exposure to configuration management and orchestration tools at scale (e.g. Terraform, Ansible, Packer)
- Knowledge in DevOps tools (e.g. Jenkins, Groovy, and Gradle)
- Familiarity with monitoring and alerting tools (e.g. CloudWatch, ELK stack, Prometheus)
- Proven ability to work independently or as an integral member of a team
Preferable Skills :
- Familiarity with standard IT security practices such as encryption, credentials and key management
- Proven ability to pick up coding languages (e.g. Java, Python) to support DevOps operations and cloud transformation
- Familiarity with web standards (e.g. REST APIs, web security mechanisms)
- Multi-cloud management experience with GCP / Azure
- Experience in performance tuning, services outage management and troubleshooting
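The standard IT security practices listed above (encryption, credentials and key management) can be illustrated with Python's standard library alone. The sketch below shows salted password hashing with PBKDF2 and constant-time verification; the function names and iteration count are illustrative, not a prescribed implementation:

```python
import hashlib
import hmac
import os

def hash_credential(password: str, *, iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Derive a salted PBKDF2-HMAC-SHA256 hash; store salt + hash, never the password."""
    salt = os.urandom(16)  # a fresh random salt per credential
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_credential(password: str, salt: bytes, expected: bytes,
                      *, iterations: int = 600_000) -> bool:
    """Re-derive the hash and compare in constant time to resist timing attacks."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, expected)

salt, stored = hash_credential("s3cret")
print(verify_credential("s3cret", salt, stored))   # True
print(verify_credential("wrong", salt, stored))    # False
```

The same pattern generalizes to API keys and tokens: persist only the salt and digest, never the plaintext secret.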
About CoverSelf: We are an InsurTech start-up based out of Bangalore, with a focus on healthcare. CoverSelf empowers healthcare insurance companies with a truly NEXT-GEN cloud-native, holistic & customizable platform that prevents and adapts to ever-evolving claims & payment inaccuracies, reducing complexity and administrative costs with a unified, healthcare-dedicated platform.
Overview about the role: We are looking for a Junior DevOps Engineer who will work on the bleeding edge of technology. The role primarily involves maintaining, monitoring, securing, and automating our cloud infrastructure and applications. If you have a solid background in Kubernetes and Terraform, we’d love to speak with you.
Responsibilities:
➔ Implement and maintain application infrastructure, databases, and networks.
➔ Develop and implement automation scripts using Terraform for infrastructure deployment.
➔ Implement and maintain containerized applications using Docker and Kubernetes.
➔ Work with other DevOps Engineers in the team on deploying applications, provisioning infrastructure, automation, routine audits, upgrading systems, capacity planning, and benchmarking.
➔ Work closely with our Engineering Team to ensure seamless integration of new tools and perform day-to-day activities which can help developers deploy and release their code seamlessly.
➔ Respond to service outages/incidents and ensure system uptime requirements are met.
➔ Ensure the security and compliance of our applications and infrastructure.
Requirements:
➔ Must have a B.Tech degree
➔ Must have at least 2 years’ experience as a DevOps Engineer
➔ Operating Systems: Good understanding of any of the UNIX/Linux platforms; Windows is good to have.
➔ Source Code Management: Expertise in Git for version control and managing branching strategies.
➔ Networking: Basic understanding of network fundamentals such as networks, DNS, ports, routes, NAT gateways, and VPNs.
➔ Cloud Platforms: Minimum 2 years of experience working with AWS; understanding of other cloud platforms, such as Microsoft Azure and Google Cloud Platform, is good to have.
➔ Infrastructure Automation: Experience with Terraform to automate infrastructure provisioning and configuration.
➔ Container Orchestration: Must have at least 1 year of experience in managing Kubernetes clusters.
➔ Containerization: Experience in containerization of applications using Docker.
➔ CI/CD and Scripting: Experience with CI/CD concepts and tools (e.g., Gitlab CI) and scripting languages like Python or Shell for automation.
➔ Monitoring and Observability: Familiarity with monitoring tools like Prometheus, Grafana, CloudWatch, and troubleshooting using logs and metrics analysis.
➔ Security Practices: Basic understanding of security best practices in a DevOps environment and Integration of security into the CI/CD pipeline (DevSecOps).
➔ Databases: Good to have knowledge of one of the databases such as MySQL, PostgreSQL, or MongoDB.
➔ Problem Solving and Troubleshooting: Debugging and troubleshooting skills for resolving issues in development, testing, and production environments.
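The networking fundamentals called out above (CIDR ranges, routes, NAT gateways) can be practised with Python's standard-library `ipaddress` module. The VPC range and subnet layout below are hypothetical, a minimal sketch rather than any particular cloud setup:

```python
import ipaddress

# Hypothetical VPC carved into per-AZ /24 subnets.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))   # 256 subnets of 256 addresses each

private_subnet = subnets[0]                  # 10.0.0.0/24
instance_ip = ipaddress.ip_address("10.0.0.42")

print(instance_ip in private_subnet)   # True: the address falls in this subnet
print(private_subnet.num_addresses)    # 256
print(instance_ip.is_private)          # True: traffic would exit via a NAT gateway
```

Checks like these are handy when validating Terraform subnet plans before applying them.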
Work Location: Jayanagar - Bangalore.
Work Mode: Work from Office.
Benefits: Best in the Industry Compensation, Friendly & Flexible Leave Policy, Health Benefits, Certifications & Courses Reimbursements, the chance to be part of a rapidly growing start-up and the next success story, and many more.
Additional Information: At CoverSelf, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion irrespective of their background, gender, race, sexual orientation, religion and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities and the business.
Job Position: DevOps Engineer
Experience Range: 2 - 3 years
Type: Full Time
Location: India (Remote)
Desired Skills: DevOps, Kubernetes (EKS), Docker, Kafka, HAProxy, MQTT brokers, Redis, PostgreSQL, TimescaleDB, Shell Scripting, Terraform, AWS (API Gateway, ALB, ECS, EKS, SNS, SES, CloudWatch Logs), Prometheus, Grafana, Jenkins, GitHub
Your key responsibilities:
- Collaborate with developers to design and implement scalable, secure, and reliable infrastructure.
- Manage and automate CI/CD pipelines (Jenkins - Groovy Scripts, GitHub Actions), ensuring smooth deployments.
- Containerise applications using Docker and manage workloads on Kubernetes (EKS).
- Work with AWS services (ECS, EKS, API Gateway, SNS, SES, CloudWatch Logs) to provision and maintain infrastructure.
- Implement infrastructure as code using Terraform.
- Set up and manage monitoring and alerting using Prometheus and Grafana.
- Manage and optimize Kafka, Redis, PostgreSQL, TimescaleDB deployments.
- Troubleshoot issues in distributed systems and ensure high availability using HAProxy, load balancing, and failover strategies.
- Drive automation initiatives across development, testing, and production environments.
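The high-availability responsibilities above (HAProxy, load balancing, failover) boil down to a retry-then-failover pattern. A minimal sketch, with toy backend functions standing in for real services and the parameters purely illustrative:

```python
import time

def call_with_failover(backends, request, retries_per_backend=2, backoff=0.1):
    """Try each backend in order; retry transient failures with exponential
    backoff, then fail over to the next backend (HAProxy-style behaviour)."""
    last_error = None
    for backend in backends:
        for attempt in range(retries_per_backend):
            try:
                return backend(request)
            except ConnectionError as exc:
                last_error = exc
                time.sleep(backoff * (2 ** attempt))  # back off before retrying
    raise RuntimeError("all backends failed") from last_error

# Toy backends: the primary is down, the replica serves the request.
def primary(req):
    raise ConnectionError("primary down")

def replica(req):
    return f"handled {req} on replica"

print(call_with_failover([primary, replica], "GET /health"))
# handled GET /health on replica
```

In production the same policy is usually expressed declaratively (e.g. HAProxy `retries` and `backup` server options) rather than in application code.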
What you’ll bring
Required:
- 2–3 years of hands-on DevOps experience.
- Strong proficiency in Shell Scripting.
- Practical experience with Docker and Kubernetes (EKS).
- Knowledge of Terraform or other IaC tools.
- Experience with Jenkins pipelines (Groovy scripting preferred).
- Exposure to AWS cloud services (ECS, EKS, API Gateway, SNS, SES, CloudWatch).
- Understanding of microservices deployment and orchestration.
- Familiarity with monitoring/observability tools (Prometheus, Grafana).
- Good communication and collaboration skills.
Nice to have:
- Experience with Kafka, HAProxy, MQTT brokers.
- Knowledge of Redis, PostgreSQL, TimescaleDB.
- Exposure to DevOps best practices in agile environments.
We are an on-demand E-commerce technology and services company and a tech-enabled 3PL (third-party logistics) provider. We unlock e-commerce for companies by managing the entire operations lifecycle: Sell, Fulfil & Reconcile.
Using us, companies can: -
• Store their inventory in our fulfilment centers (FCs)
• Sell their products on multiple sales channels (online marketplaces like Amazon, Flipkart, and their own website)
• Get their orders processed within a defined SLA
• Reconcile payments against their sales
The company combines infrastructure and dedicated experts to give brands accountability, peace of mind, and control over the e-commerce journey.
The company is working on a remarkable concept for running an E-commerce business- starting from establishing an online presence for many enterprises. It offers a combination of products and services to create a comprehensive platform and manage all aspects of running a brand online, including the development of an exclusive web store, handling logistics, integrating all marketplaces and so on.
Who are we looking for?
We are looking for a skilled and passionate DevOps Engineer to join our Centre of Excellence to build and scale effective software solutions for our Ecommerce Domain.
Wondering what your Responsibilities would be?
• Building and setting up new development tools and infrastructure
• Provide full support to the software development teams to deploy, run and roll out new services and new capabilities in Cloud infrastructure
• Implement CI/CD and DevOps best practices for software application teams and assist in executing the integration and operation processes
• Build proactive monitoring and alerting infrastructure services to support operations and system health
• Be hands-on in developing prototypes and conducting Proof of Concepts
• Work in an agile, collaborative environment, partnering with other engineers to bring new solutions to the table
• Join the DevOps Chapter where you’ll have the opportunity to investigate and share information about technologies within the DevOps Engineering Community
What Makes you Eligible?
• Bachelor’s Degree or higher in Computer Science or Software Engineering with appropriate experience
• Minimum of 1 year of proven experience as a DevOps Engineer
• Experience working in a DevOps culture, following Agile software development methodologies such as Scrum
• Proven experience in source code management tools like Bitbucket and Git
• Solid experience in CI/CD pipelines like Jenkins
• Demonstrated ability with configuration management and infrastructure tools (e.g., Terraform, Ansible, Docker and Kubernetes) and repository tools like Artifactory
• Experience in Cloud architecture & provisioning
• Knowledge of programming and querying NoSQL databases
• Teamwork skills with a problem-solving attitude
Responsibilities
- Designing and building infrastructure to support AWS, Azure, and GCP-based Cloud services and infrastructure.
- Creating and utilizing tools to monitor our applications and services in the cloud including system health indicators, trend identification, and anomaly detection.
- Working with development teams to help engineer scalable, reliable, and resilient software running in the cloud.
- Participating in on-call escalation to troubleshoot customer-facing issues
- Analyzing and monitoring performance bottlenecks and key metrics to optimize software and system performance.
- Providing analytics and forecasts for cloud capacity, troubleshooting analysis, and uptime.
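The trend-identification and anomaly-detection work described above can start from something as simple as a trailing-window z-score over a metric series. A minimal sketch, with the window size, threshold, and sample data all illustrative:

```python
from statistics import mean, stdev

def anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value lies more than `threshold` standard deviations
    from the mean of the preceding `window` samples -- a simple baseline
    detector for metric spikes."""
    flagged = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# CPU-utilisation samples (%) with one obvious spike at index 7.
cpu = [31, 30, 32, 29, 31, 30, 31, 95, 30, 31]
print(anomalies(cpu))  # [7]
```

Production systems typically layer seasonality handling and alert deduplication on top (e.g. Prometheus recording rules feeding alert thresholds), but the core idea is the same.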
Skills
- A couple of years of strong experience leading a DevOps team: planning and defining the DevOps roadmap and executing it together with the team
- Familiarity with the AWS cloud and JSON templates, Python, and AWS CloudFormation templates
- Designing solutions using one or more AWS features, tools, and technologies such as EC2, EBS, Glacier, S3, ELB, CloudFormation, Lambda, CloudWatch, VPC, RDS, Direct Connect, AWS CLI, REST API
- Design and implement system architecture on the AWS cloud
- Develop automation scripts using ARM templates, Ansible, Chef, Python, and PowerShell
- Knowledge of AWS services, cloud design patterns, and cloud fundamentals such as autoscaling and serverless
- Experience with DevOps and Infrastructure as Code: AWS environment and application automation utilizing CloudFormation and third-party tools, plus CI/CD pipeline setup
- CI experience with the following is a must: Jenkins, Bitbucket/Git, Nexus or Artifactory, SonarQube, WireMock or another mocking solution
- Expert knowledge of Windows/Linux/macOS with at least 5-6 years of system administration experience
- Should have strong skills in using the Jira issue-tracking tool
- Should have knowledge in managing the CI/CD pipeline on public cloud deployments using AWS
- Should have strong skills in using tools like Jenkins, Docker, Kubernetes (AWS EKS, Azure AKS), and CloudFormation.
- Experience in monitoring tools like Pingdom, Nagios, etc.
- Experience in reverse proxy services like Nginx and Apache
- Desirable: experience with Bitbucket and version control tools like Git/SVN
- Experience with manual/automated testing of application deployments is desired
- Experience in database technologies such as PostgreSQL, MySQL
- Knowledge of Helm and Terraform
Who we are?
Searce is a niche Cloud Consulting business with futuristic tech DNA. We do new-age tech to realise the “Next” in the “Now” for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning, and advanced cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.
What we believe?
- Best practices are overrated
- Implementing best practices can only make one ‘average’.
- Honesty and Transparency
- We believe in naked truth. We do what we tell and tell what we do.
- Client Partnership
- Client - Vendor relationship: No. We partner with clients instead.
- And our sales team comprises 100% of our clients.
How we work?
It’s all about being Happier first. And rest follows. Searce work culture is defined by HAPPIER.
- Humble: Happy people don’t carry ego around. We listen to understand; not to respond.
- Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
- Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
- Passionate: We are as passionate about the great street-food vendor across the street as about Tesla’s new model and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
- Innovative: Innovate or Die. We love to challenge the status quo.
- Experimental: We encourage curiosity & making mistakes.
- Responsible: Driven. Self motivated. Self governing teams. We own it.
Are you the one? Quick self-discovery test:
- Love for cloud: When was the last time your dinner entailed an act on “How would ‘Jerry Seinfeld’ pitch Cloud platform & products to this prospect” and your friend did the ‘Sheldon’ version of the same thing.
- Passion for sales: When was the last time you went at a remote gas station while on vacation, and ended up helping the gas station owner saasify his 7 gas stations across other geographies.
- Compassion for customers: You listen more than you speak. When you do speak, people feel the need to listen.
- Humor for life: When was the last time you told a concerned CEO, ‘If Elon Musk can attempt to take humanity to Mars, why can’t we take your business to run on cloud?’
Your bucket of undertakings:
- This position is responsible for consulting with clients and proposing architectural solutions to help move and improve infra from on-premises to the cloud, or to help optimize cloud spend when moving from one public cloud to another
- Be the first one to experiment with new-age cloud offerings, help define best practices as a thought leader for cloud, automation & DevOps, and be a solution visionary and technology expert across multiple channels.
- Continually augment skills and learn new tech as the technology and client needs evolve
- Demonstrate knowledge of cloud architecture and implementation features (OS, multi-tenancy, virtualization, orchestration, elastic scalability)
- Use your experience in Google cloud platform, AWS or Microsoft Azure to build hybrid-cloud solutions for customers.
- Provide leadership to project teams, and facilitate the definition of project deliverables around core Cloud based technology and methods.
- Define tracking mechanisms and ensure IT standards and methodology are met; deliver quality results.
- Define optimal design patterns and solutions for high availability and disaster recovery for applications
- Participate in technical reviews of requirements, designs, code, and other artifacts; identify and keep abreast of new technical concepts in AWS
- Security, Risk and Compliance - Advise customers on best practices around access management, network setup, regulatory compliance and related areas
- Develop solutions architecture and evaluate architectural alternatives for private, public and hybrid cloud models, including IaaS, PaaS, and other cloud services
- Demonstrate leadership ability to back decisions with research and the “why,” and articulate several options, the pros and cons for each, and a recommendation
- Maintain overall industry knowledge on the latest trends, technology, etc.
- Contribute to DevOps development activities and complex development tasks
- Act as a Subject Matter Expert on cloud end-to-end architecture, including AWS and future providers, networking, provisioning, and management
Accomplishment Set
- Passionate, persuasive, articulate Cloud professional capable of quickly establishing interest and credibility
- Good business judgment, a comfortable, open communication style, and a willingness and ability to work with customers and teams.
- Strong service attitude and a commitment to quality. Highly organised and efficient. Confident working with others to inspire a high-quality standard.
Education, Experience, etc.
- To reiterate: passion for awesome tech, an insatiable desire to learn the latest new-age cloud tech, a highly analytical aptitude and a strong ‘desire to deliver’ outlive those fancy degrees!
- 6-10 years of experience, with at least 5-6 years of hands-on experience in Cloud Computing (AWS/GCP/Azure) and IT operational experience in a global enterprise environment.
- Good analytical, communication, problem solving, and learning skills.
- Knowledge of programming against cloud platforms such as AWS and of lean development methodologies.
1. Should have worked with AWS, Docker, and Kubernetes.
2. Should have worked with a scripting language.
3. Should know how to monitor system performance, CPU, and memory.
4. Should be able to troubleshoot issues.
5. Should have knowledge of automated deployment.
6. Proficient in one programming language; Python preferred.
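The system-performance monitoring mentioned above can be sketched with the standard library alone. The threshold value and metric names below are illustrative; a real setup would export such metrics to a monitoring stack rather than print alerts:

```python
import shutil

def check_threshold(name, value, limit):
    """Return an alert string when a metric breaches its limit, else None."""
    if value > limit:
        return f"ALERT: {name} at {value:.1f}% exceeds {limit}% threshold"
    return None

def disk_used_percent(path="/"):
    """Percentage of the filesystem at `path` currently in use."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

alert = check_threshold("disk", disk_used_percent(), limit=90)
print(alert or "disk OK")
```

CPU and memory readings would plug into the same `check_threshold` helper, typically sourced from `/proc` on Linux or a library such as psutil.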
Your skills and experience should cover:
- 5+ years of experience developing, deploying, and debugging solutions on the AWS platform using AWS services such as S3, IAM, Lambda, API Gateway, RDS, Cognito, CloudTrail, CodePipeline, CloudFormation, CloudWatch, and WAF (Web Application Firewall).
- Amazon Web Services (AWS) Certified Developer – Associate is required; Amazon Web Services (AWS) DevOps Engineer – Professional is preferred.
- 5+ years of experience using one or more modern programming languages (Python, Node.js).
- Hands-on experience migrating data to the AWS cloud platform.
- Experience with Scrum/Agile methodology.
- Good understanding of core AWS services, their uses, and basic AWS architecture best practices (including security and scalability).
- Experience with AWS data storage tools.
- Experience configuring and implementing AWS tools such as CloudWatch, CloudTrail, and direct system logs for monitoring.
- Experience working with Git or similar tools.
- Ability to communicate and represent AWS recommendations and standards.
The following areas are highly advantageous:
- Experience with Docker
- Experience with PostgreSQL databases

