
Job Summary:
As an RPA Lead, you will be responsible for setting up and managing our RPA infrastructure using Microsoft Power Automate. With 4+ years of hands-on experience, you’ll collaborate with cross-functional teams to identify, design, and execute automation strategies that enhance efficiency and scalability.
Key Responsibilities:
- Design and implement scalable RPA solutions using Microsoft Power Automate
- Set up and manage RPA infrastructure and environments
- Work with stakeholders to identify automation opportunities
- Develop, test, and deploy automation workflows
- Troubleshoot and optimize automation processes for performance and reliability
- Ensure adherence to best practices in RPA development and governance
Required Skills & Qualifications:
- 4+ years of experience in RPA or automation engineering
- Proven expertise in Microsoft Power Automate (cloud & desktop flows)
- Solid understanding of automation design, triggers, and integrations
- Experience setting up RPA environments from scratch
- Strong analytical and problem-solving skills
Preferred Skills:
- Familiarity with Power Platform (Power Apps, Power BI)
- Knowledge of scripting (PowerShell, Python) for advanced automation
- Exposure to Agile methodologies and CI/CD pipelines

Job Summary:
We are looking for a proactive and skilled Senior DevOps Engineer to join our team and play a key role in building, managing, and scaling infrastructure for high-performance systems. The ideal candidate will have hands-on experience with Kubernetes, Docker, Python scripting, cloud platforms, and DevOps practices around CI/CD, monitoring, and incident response.
Key Responsibilities:
- Design, build, and maintain scalable, reliable, and secure infrastructure on cloud platforms (AWS, GCP, or Azure).
- Implement Infrastructure as Code (IaC) using tools like Terraform, CloudFormation, or similar.
- Manage Kubernetes clusters, including namespaces, services, deployments, and autoscaling.
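As a small illustration of the IaC responsibility above, here is a hedged Python sketch that builds a Terraform CLI invocation for a per-environment directory layout. The directory and var-file names are hypothetical, and a real pipeline would pass the result to subprocess and check the exit code.

```python
def terraform_cmd(action, env_dir, var_file=None):
    """Build a Terraform CLI invocation for one environment directory.

    `env_dir` and `var_file` are illustrative; adapt to your repo layout.
    A CI job would hand the result to subprocess.run and check returncode.
    """
    cmd = ["terraform", f"-chdir={env_dir}", action, "-input=false"]
    if var_file:
        cmd.append(f"-var-file={var_file}")
    return cmd

# The command a CI job might run to plan the (hypothetical) staging env:
plan = terraform_cmd("plan", "envs/staging", var_file="staging.tfvars")
```

Keeping each environment in its own directory and using `-chdir` keeps state files isolated; `-input=false` makes the invocation safe for non-interactive CI runs.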
CI/CD & Release Management:
- Build and optimize CI/CD pipelines for automated testing, building, and deployment of services.
- Collaborate with developers to ensure smooth and frequent deployments to production.
- Manage versioning and rollback strategies for critical deployments.
Containerization & Orchestration using Kubernetes:
- Containerize applications using Docker, and manage them using Kubernetes.
- Write automation scripts using Python or Shell for infrastructure tasks, monitoring, and deployment flows.
- Develop utilities and tools to enhance operational efficiency and reliability.
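The "automation scripts ... for infrastructure tasks" bullet above can be sketched with a minimal, stdlib-only Python example: a disk-usage check of the kind a monitoring cron job might run. The threshold value is an illustrative assumption.

```python
import shutil

def check_disk(path="/", threshold_pct=80.0):
    """Return (used_pct, alert) for a mount point.

    threshold_pct is illustrative; a real monitoring script would feed
    the result into an alerting pipeline rather than just return it.
    """
    usage = shutil.disk_usage(path)
    used_pct = usage.used / usage.total * 100
    return round(used_pct, 1), used_pct >= threshold_pct

used, alert = check_disk("/")
```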
Monitoring & Incident Management:
- Analyze system performance and implement infrastructure scaling strategies based on load and usage trends.
- Optimize application and system performance through proactive monitoring and configuration tuning.
Desired Skills and Experience:
- 8+ years of experience.
- Hands-on experience with cloud services such as AWS and EKS.
- Ability to design sound cloud solutions.
- Strong Linux troubleshooting and shell scripting skills, along with Kubernetes, Docker, Ansible, and Jenkins experience.
- Design and implement CI/CD pipelines following industry best practices using open-source tools.
- Use knowledge and research to continually modernize our application and infrastructure stacks.
- Team player and strong problem-solver, able to work with a diverse team.
- Good communication skills.
Job Title: Senior DevOps Engineer (Full-time)
Location: Mumbai, Onsite
Experience Required: 5+ Years
Required Qualifications
● Experience:
○ 5+ years of hands-on experience as a DevOps Engineer or similar role, with proven expertise in building and customizing Helm charts from scratch (not just using pre-existing ones).
○ Demonstrated ability to design and whiteboard DevOps pipelines, including CI/CD workflows for microservices applications.
○ Experience packaging and deploying applications with stateful dependencies (e.g., databases, persistent storage) in varied environments: on-prem (air-gapped and non-air-gapped), single-tenant cloud, multi-tenant cloud, and developer trials.
○ Proficiency in managing deployments in Kubernetes clusters, including offline installations, upgrades via Helm, and adaptations for client restrictions (e.g., no additional tools or VMs).
○ Track record of handling client interactions, such as asking probing questions about infrastructure (e.g., OS versions, storage solutions, network restrictions) and explaining technical concepts clearly.
● Technical Skills:
○ Strong knowledge of Helm syntax and functionality (e.g., Go templating, hooks, subcharts, dependency management).
○ Expertise in containerization with Docker, including image management (save/load, registries like Harbor or ECR).
○ Familiarity with CI/CD tools such as Jenkins, ArgoCD, GitHub Actions, and GitOps for automated and manual deployments.
○ Understanding of storage solutions for on-prem and cloud, including object/file storage (e.g., MinIO, Ceph, NFS, cloud-native like S3/EBS).
○ In-depth knowledge of Kubernetes concepts: StatefulSets, PersistentVolumes, namespaces, HPA, liveness/readiness probes, network policies, and RBAC.
○ Solid grasp of cloud networking: VPCs (definition, boundaries, virtualization via SDN, differences from private clouds) and bare metal vs. virtual machines (trade-offs in resource efficiency, flexibility, and scalability).
○ Ability to work in air-gapped environments, preparing offline artifacts and ensuring self-contained deployments.
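To make a couple of the Kubernetes concepts listed above concrete (liveness/readiness probes inside an apps/v1 Deployment), here is a hedged Python sketch that renders a minimal manifest as a dict. The app name, image, port, and probe paths are all hypothetical.

```python
import json

def probe(path, port):
    # Illustrative probe settings; tune delays/periods per service.
    return {"httpGet": {"path": path, "port": port},
            "initialDelaySeconds": 5, "periodSeconds": 10}

def deployment_manifest(name, image, port=8080, replicas=2):
    """Minimal apps/v1 Deployment with liveness/readiness probes."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{
                    "name": name,
                    "image": image,
                    "ports": [{"containerPort": port}],
                    "livenessProbe": probe("/healthz", port),
                    "readinessProbe": probe("/ready", port),
                }]},
            },
        },
    }

# Render the (hypothetical) manifest as JSON, ready for `kubectl apply -f -`:
manifest_json = json.dumps(deployment_manifest("api", "registry.example.com/api:1.0"))
```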
About us:
HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.
We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.
To know more, visit https://www.happyfox.com/
Responsibilities:
- Build and scale production infrastructure in AWS for the HappyFox platform and its products.
- Research and build/implement systems, services, and tooling that improve the uptime, reliability, and maintainability of our backend infrastructure, and that meet our internal SLOs and customer-facing SLAs.
- Manage and patch servers running Unix-based operating systems like Ubuntu Linux.
- Write automation scripts and build infrastructure tools using Python/Ruby/Bash/Golang.
- Implement consistent observability, deployment and IaC setups
- Patch production systems to fix security/performance issues
- Actively respond to escalations/incidents in the production environment from customers or the support team
- Mentor other Infrastructure engineers, review their work and continuously ship improvements to production infrastructure.
- Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
- Participate in infrastructure security audits
Requirements:
- At least 5 years of experience in handling/building Production environments in AWS.
- At least 2 years of programming experience in building API/backend services for customer-facing applications in production.
- Demonstrable knowledge of TCP/IP, HTTP and DNS fundamentals.
- Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
- Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
- Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
- Proficient in writing automation scripts using any scripting language such as Python, Ruby, Bash, etc.
- Experience in setting up and managing test/staging environments, and CI/CD pipelines.
- Experience in IaC tools such as Terraform or AWS CDK
- Passion for making systems reliable, maintainable, scalable and secure.
- Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
- Bonus points if you have experience running Nginx, Postgres, Redis, and Mongo systems in production.
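As a tiny illustration of the TCP/IP and DNS fundamentals this posting asks for, a stdlib-only Python lookup via the system resolver (the hostnames here are examples):

```python
import socket

def resolve(host, port=80):
    """Return the sorted set of IP addresses `host` resolves to over TCP."""
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# e.g. resolve("example.com") in a networked environment
addrs = resolve("localhost")
```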
- Understanding customer requirements and project KPIs
- Implementing various development, testing, automation tools, and IT infrastructure
- Planning the team structure, activities, and involvement in project management activities.
- Managing stakeholders and external interfaces
- Setting up tools and required infrastructure
- Defining and setting development, test, release, update, and support processes for DevOps operation
- Have the technical skill to review, verify, and validate the software code developed in the project.
- Applying troubleshooting techniques and fixing code bugs
- Monitoring processes across the entire lifecycle for adherence, and updating or creating processes to drive improvement and minimize waste
- Encouraging and building automated processes wherever possible
- Identifying and deploying cybersecurity measures by continuously performing vulnerability assessment and risk management
- Incident management and root cause analysis
- Coordination and communication within the team and with customers
- Selecting and deploying appropriate CI/CD tools
- Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD)
- Mentoring and guiding the team members
- Monitoring and measuring customer experience and KPIs
- Managing periodic reporting on the progress to the management and the customer
Job Description:
Acceldata is creating the data observability space. We make it possible for data-driven enterprises to effectively monitor, discover, and validate data pipelines at petabyte scale. Our customers include a Fortune 500 company, one of Asia's largest telecom companies, and a unicorn fintech startup. We are lean, hungry, customer-obsessed, and growing fast. Our engineering team values productivity, integrity, and pragmatism. We provide a flexible, remote-friendly work environment.
Roles & responsibilities:
- Champion engineering and operational excellence.
- Establish a solid infrastructure framework and excellent development and deployment processes.
- Provide technical guidance to both your team members and your peers from the development team.
- Work closely with the development teams to gather system requirements, new service proposals, and large system improvements, and come up with infrastructure architecture that leads to stable, well-monitored, performant, and secure systems.
- Be part of and help create a positive work environment based on accountability.
- Communicate across functions and drive engineering initiatives.
- Initiate cross-team collaboration with product development teams to develop high-quality, polished products and services.
Must haves:
- 5+ years of professional experience developing and launching software products on the cloud.
- Basic understanding of Java/Go programming
- Good understanding of container technologies and orchestration platforms (e.g., Docker, Kubernetes)
- Deep understanding of AWS or any cloud platform.
- Good understanding of data stores like Postgres, Redis, Kafka, and Elasticsearch.
- Good understanding of operating systems
- Strong technical background with a track record of individual technical accomplishments
- Ability to handle multiple competing priorities in a fast-paced environment
- Ability to establish credibility with smart engineers quickly.
- Most importantly, the ability and urge to learn new things.
- B.Tech/M.Tech in Computer Science or a related technical field.
Good to Have:
- Hands-on knowledge of configuration management and deployment tools like Ansible, Terraform, etc.
- Proficient in scripting, and Git and Git workflows
- Experience in developing Continuous Integration/ Continuous Delivery pipelines
- Knowledge of Big Data systems.
- Bachelor’s and/or master’s degree in Computer Science, Computer Engineering or related technical discipline
- About 5 years of professional experience supporting AWS cloud environments
- AWS Certified Solutions Architect, Associate or Professional
- Experience serving as a lead (shift management, reporting) will be a plus
- AWS Certified Solutions Architect Professional (must have)
- Minimum 4 years and maximum 8 years of experience.
- 100% work from office in Hyderabad
- Very fluent in English
What will you do?
- Set up and manage applications with automation, DevOps, and CI/CD tools.
- Deploy, maintain, and monitor infrastructure and services.
- Automate code and infrastructure deployments.
- Tune, optimize, and keep systems up to date.
- Design and implement deployment strategies.
- Set up infrastructure on cloud platforms like AWS, Azure, Google Cloud, IBM Cloud, DigitalOcean, etc., as required.
What you do:
- Developing automation for the various deployments core to our business
- Documenting run books for various processes / improving knowledge bases
- Identifying technical issues, communicating and recommending solutions
- Miscellaneous support (user accounts, VPN, network, etc.)
- Develop continuous integration / deployment strategies
- Production systems deployment/monitoring/optimization
- Management of staging/development environments
What you know:
- Ability to work with a wide variety of open source technologies and tools
- Ability to code/script (Python, Ruby, Bash)
- Experience with systems and IT operations
- Comfortable with frequent incremental code testing and deployment
- Strong grasp of automation tools (Chef, Packer, Ansible, or others)
- Experience with cloud infrastructure and bare-metal systems
- Experience optimizing infrastructure for high availability and low latencies
- Experience with instrumenting systems for monitoring and reporting purposes
- Well versed in software configuration management systems (git, others)
- Experience with cloud providers (AWS or other) and tailoring apps for cloud deployment
- Data management skills
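One concrete example of the scripting ability this list calls for: a retry-with-exponential-backoff helper of the sort often wrapped around flaky infrastructure calls. The attempt count and delays are illustrative assumptions.

```python
import time

def retry(fn, attempts=3, base_delay=0.1):
    """Call fn(); on exception, wait base_delay * 2**i and retry.

    Re-raises the last exception once attempts are exhausted.
    """
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** i)
```

A cloud API call or a `kubectl rollout status` check would be typical callees; production versions usually also cap the total delay and add jitter.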
Education:
- Degree in Computer Engineering or Computer Science
- 1-3 years of equivalent experience in DevOps roles.
- Work conducted is focused on business outcomes
- Can work in an environment with a high level of autonomy (at the individual and team level)
- Comfortable working in an open, collaborative environment, reaching across functions.
Our Offering:
- True start-up experience - no bureaucracy and a ton of tough decisions that have a real impact on the business from day one.
- The camaraderie of an amazingly talented team that is working tirelessly to build a great OS for India and surrounding markets.
Perks :
- Awesome benefits, social gatherings, etc.
- Work with intelligent, fun and interesting people in a dynamic start-up environment.
Radical is a platform connecting data, medicine and people -- through machine learning, and usable, performant products. Software has never been the strong suit of the medical industry -- and we are changing that. We believe that the same sophistication and performance that powers our daily needs through millions of consumer applications -- be it your grocery, your food delivery or your movie tickets -- when applied to healthcare, has a massive potential to transform the industry, and positively impact lives of patients and doctors. Radical works with some of the largest hospitals and public health programmes in India, and has a growing footprint both inside the country and abroad.
As a DevOps Engineer at Radical, you will:
Work closely with all stakeholders in the healthcare ecosystem - patients, doctors, paramedics and administrators - to conceptualise and bring to life the ideal set of products that add value to their time
Work alongside Software Developers and ML Engineers to solve problems and assist in architecture design
Work on systems which have an extraordinary emphasis on capturing data that can help build better workflows, algorithms and tools
Work on high performance systems that deal with several million transactions, multi-modal data and large datasets, with a close attention to detail
We’re looking for someone who has:
Familiarity and experience with writing working, well-documented, and well-tested scripts, Dockerfiles, and Puppet/Ansible/Chef/Terraform configurations.
Proficiency with scripting languages like Python and Bash.
Knowledge of systems deployment and maintenance, including setting up CI/CD, working alongside Software Developers, and monitoring logs, dashboards, etc.
Experience integrating with a wide variety of external tools and services
Experience navigating AWS and leveraging appropriate services and technologies rather than DIY solutions (such as hosting an application directly on EC2 versus containerising it, or using Elastic Beanstalk)
It’s not essential, but great if you have:
An established track record of deploying and maintaining systems.
Experience with microservices and decomposition of monolithic architectures
Proficiency in automated tests.
Proficiency with the Linux ecosystem
Experience in deploying systems to production on cloud platforms such as AWS
The position is open now, and we are onboarding immediately.
Please write to us with an updated resume, and one thing you would like us to see as part of your application. This one thing can be anything that you think makes you stand apart among candidates.
Radical is based out of Delhi NCR, India, and we look forward to working with you!
We're looking for people who may not know all the answers, but who are obsessive about finding them and take pride in the code they write. We are more interested in the ability to learn fast and think rigorously, and in people who aren't afraid to challenge assumptions and take large bets -- only to work hard and prove themselves correct. You're encouraged to apply even if your experience doesn't precisely match the job description. Join us.