Technical Experience/Knowledge Needed:
- Proven ability to work in a cloud-hosted services environment.
- Ability to manage and maintain cloud infrastructure on AWS.
- Strong experience with technologies such as Docker, Kubernetes, and serverless functions.
- Knowledge of orchestration/configuration-management tools such as Ansible.
- Experience with ELK Stack
- Strong knowledge of microservices, container-based architecture and the corresponding deployment tools and techniques.
- Hands-on knowledge of implementing multi-staged CI/CD with tools like Jenkins and Git.
- Sound knowledge of tools such as Kibana, Kafka, Grafana, Instana and so on.
- Proficient in Bash scripting.
- Must have in-depth knowledge of Clustering, Load Balancing, High Availability and Disaster Recovery, Auto Scaling, etc.
- AWS Certified Solutions Architect and/or Linux System Administrator certification.
- Strong ability to work independently on complex issues
- Collaborate efficiently with internal experts to resolve customer issues quickly
- No objection to working night shifts: the production support team operates on a 24x7 basis, so rotational shifts are assigned weekly to give everyone an equal share of day and night shifts. Candidates willing to work night shifts on a need basis should discuss this with us.
- Ability to join early.
- Willingness to work in Delhi NCR
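The multi-staged CI/CD requirement above (Jenkins plus Git) can be pictured with a minimal declarative Jenkinsfile. This is an illustrative sketch only: the stage layout, the `myapp` image name, and the deploy command are hypothetical placeholders, not a prescribed setup.

```groovy
// Minimal multi-stage declarative pipeline (illustrative sketch).
// "myapp" and the kubectl deploy step are hypothetical placeholders.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t myapp:${GIT_COMMIT} .'
            }
        }
        stage('Test') {
            steps {
                sh 'docker run --rm myapp:${GIT_COMMIT} ./run-tests.sh'
            }
        }
        stage('Deploy') {
            when { branch 'main' }   // deploy only from the main branch
            steps {
                sh 'kubectl set image deployment/myapp myapp=myapp:${GIT_COMMIT}'
            }
        }
    }
}
```

In practice each stage would be fleshed out (artifact archiving, approvals, rollback), but the build/test/deploy shape is the core of a multi-staged pipeline.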

About Opoyi Inc
What we offer:
- The possibility of having massive societal impact: our software touches the lives of hundreds of millions of people.
- Solving hard governance and societal challenges.
- Working directly with central and state government leaders and other dignitaries.
- Mentorship from world-class people and rich ecosystems.
Position : Architect or Technical Lead - DevOps
Location : Bangalore
Role:
Strong architecture and system design skills for multi-tenant, multi-region, redundant and highly available mission-critical systems.
Clear understanding of core cloud platform technologies across public and private clouds.
Lead DevOps practices for Continuous Integration and Continuous Deployment pipeline and IT operations practices, scaling, metrics, as well as running day-to-day operations from development to production for the platform.
Implement non-functional requirements needed for a world-class reliability, scale, security and cost-efficiency throughout the product development lifecycle.
Drive a culture of automation, and self-service enablement for developers.
Work with information security leadership and technical team to automate security into the platform and services.
Meet infrastructure SLAs and compliance control points of cloud platforms.
Define and contribute to initiatives to continually improve Solution Delivery processes.
Improve the organisation's capability to build, deliver and scale software as a service on the cloud.
Interface with Engineering, Product and Technical Architecture group to meet joint objectives.
You are deeply motivated and driven to solve hard societal challenges, with the stamina to see them all the way through.
You must have 7+ years of hands-on experience in DevOps, microservices architecture (MSA) and the Kubernetes platform.
You must have 2+ years of strong Kubernetes experience and hold the CKA (Certified Kubernetes Administrator) certification.
You should be proficient in the tools and technologies involved in a microservices architecture deployed in a multi-cloud Kubernetes environment.
You should have hands-on experience in architecting and building CI/CD pipelines from code check-in until production.
You should have strong knowledge of Docker, Jenkins pipeline scripting, Helm charts, Kubernetes objects and manifests, and Prometheus, and demonstrate hands-on knowledge of multi-cloud computing, storage and networking.
You should have experience in designing, provisioning and administering infrastructure using best practices across multiple public cloud offerings (Azure, AWS, GCP) and/or private cloud offerings (OpenStack, VMware, Nutanix, NIC, bare-metal infrastructure).
You should have experience in setting up logging, monitoring Cloud application performance, error rates, and error budgets while tracking adherence to SLOs and SLAs.
About Kiru:
Kiru is a forward-thinking payments startup on a mission to revolutionise the digital payments landscape in Africa and beyond. Our innovative solutions will reshape how people transact, making payments safer, faster, and more accessible. Join us on our journey to redefine the future of payments.
Position Overview:
We are searching for a highly skilled and motivated DevOps Engineer to join our dynamic team in Pune, India. As a DevOps Engineer at Kiru, you will play a critical role in ensuring our payment infrastructure's reliability, scalability, and security.
Key Responsibilities:
- Utilize your expertise in technology infrastructure configuration to manage and automate infrastructure effectively.
- Collaborate with cross-functional teams, including Software Developers and technology management, to design and implement robust and efficient DevOps solutions.
- Configure and maintain a secure backend environment focusing on network isolation and VPN access.
- Implement and manage monitoring solutions like ZipKin, Jaeger, New Relic, or DataDog and visualisation and alerting solutions like Prometheus and Grafana.
- Work closely with developers to instrument code for visualisation and alerts, ensuring system performance and stability.
- Contribute to the continuous improvement of development and deployment pipelines.
- Collaborate on the selection and implementation of appropriate DevOps tools and technologies.
- Troubleshoot and resolve infrastructure and deployment issues promptly to minimize downtime.
- Stay up-to-date with emerging DevOps trends and best practices.
- Create and maintain comprehensive documentation related to DevOps processes and configurations.
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
- Proven experience as a DevOps Engineer or in a similar role.
- Experience configuring infrastructure on Microsoft Azure
- Experience with Kubernetes as a container orchestration technology
- Experience with Terraform and Azure ARM or Bicep templates for infrastructure provisioning and management.
- Experience configuring and maintaining secure backend environments, including network isolation and VPN access.
- Proficiency in setting up and managing monitoring and visualization tools such as ZipKin, Jaeger, New Relic, DataDog, Prometheus, and Grafana.
- Ability to collaborate effectively with developers to instrument code for visualization and alerts.
- Strong problem-solving and troubleshooting skills.
- Excellent communication and teamwork skills.
- A proactive and self-motivated approach to work.
Desired Skills:
- Experience with Azure Kubernetes Services and managing identities across Azure services.
- Previous experience in a financial or payment systems environment.
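For the Terraform-on-Azure requirement above, a minimal sketch of what provisioning looks like in practice. All names, the region, and the node sizes are hypothetical placeholders, not Kiru's actual setup:

```hcl
# Illustrative Terraform sketch: a resource group and a small AKS cluster.
# Resource names, region, and sizes are hypothetical placeholders.
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "payments" {
  name     = "rg-payments-dev"
  location = "centralindia"
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "aks-payments-dev"
  location            = azurerm_resource_group.payments.location
  resource_group_name = azurerm_resource_group.payments.name
  dns_prefix          = "payments"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"   # managed identity, per the desired-skills note
  }
}
```

The same shape can be expressed in Bicep or ARM templates; Terraform is shown here only because it is named first in the qualifications.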
About Kiru:
At Kiru, we believe that success is achieved through collaboration. We recognise that every team member has a vital role to play, and it's the partnerships we build within our organisation that drive our customers' success and our growth as a business.
We are more than just a team; we are a close-knit partnership. By bringing together diverse talents and fostering powerful collaborations, we innovate, share knowledge, and continually learn from one another. We take pride in our daily achievements but never stop challenging ourselves and supporting each other. Together, we reach new heights and envision a brighter future.
Regardless of your career journey, we provide the guidance and resources you need to thrive. You will have everything required to excel through training programs, mentorship, and ongoing support. At Kiru, your success is our success, and that success matters because we are the essential partners for the world's most critical businesses. These companies manufacture, transport, and supply the world's essential goods.
Equal Opportunities and Accommodations Statement:
Kiru is committed to fostering a workplace and global community where inclusion is celebrated and where you can bring your authentic self, because that's who we're interested in. If you are interested in this role but don't meet every qualification in the job description, don't hesitate to apply. We are an equal opportunity employer.
Job Description
We are seeking a seasoned DevOps Architect to join our dynamic team. The ideal candidate should possess a deep understanding of DevOps principles, system design, and architecture, with a focus on creating robust and scalable infrastructure solutions through automation. This role requires a candidate with hands-on experience in development, testing, and deployment processes. Additionally, the candidate should have a minimum of 5 years of experience in DevOps operations and should be proficient in team management, coordination, problem-solving, troubleshooting, and technical expertise.
About the company:
A rapidly growing omni-channel luxury retailer with eight stores across Mumbai, Delhi, Kolkata and a global e-commerce platform servicing 65+ countries worldwide. The 18-year-old company is an established market leader with considerable brand equity.
Location: Prabhadevi, Mumbai
Key Responsibilities:
- System Design and Architecture: Develop robust and scalable system designs that align with business requirements and industry best practices.
- Automation: Implement automation solutions to streamline processes and enhance system reliability.
- Development, Testing, and Deployment: Oversee the entire software development lifecycle, from code creation to testing and deployment.
- Coordination and Issue Resolution: Collaborate with cross-functional teams, resolve technical issues, and ensure smooth project execution.
- Troubleshooting: Apply your technical expertise to diagnose and resolve complex system issues efficiently.
- Interpersonal Skills: Communicate effectively with team members, stakeholders, and management to ensure project success.
- Ecommerce (B2C) Expertise: Bring in-depth knowledge of Ecommerce (B2C) operations to tailor DevOps solutions to our specific needs.
- Infrastructure Automation: Design and implement infrastructure automation tools and workflows to support CI/CD initiatives.
- CI/CD Pipeline Management: Build and operate complex CI/CD pipelines at scale, ensuring efficient software delivery.
- Cloud Expertise: Possess knowledge of handling GCP/AWS clouds, optimizing cloud resources, and managing cloud-based applications.
- Cybersecurity: Ensure that systems are safe and secure against cybersecurity threats, implementing best practices for data protection and compliance.
Requirements
Qualifications:
- Bachelor's degree in Computer Science or related field (Master's preferred).
- Minimum 5 years of hands-on experience in DevOps operations.
- Experience ensuring system reliability, scale and performance in high-growth environments.
- Experienced in designing and implementing scalable and robust IT solutions.
- Strong technical background and proficiency in DevOps tools and practices.
- Experience with Ecommerce (B2C) platforms is mandatory.
- Excellent team management, coordination, and interpersonal skills.
- Proficiency in troubleshooting and issue resolution.
- Familiarity with the latest open-source technologies.
- Expertise in CI/CD pipeline management.
- Knowledge of GCP/AWS cloud services.
- Understanding of cybersecurity best practices.
Benefits
- Group Mediclaim cover with a sum assured of 2.5 L (Employee + Spouse + 2 Children) & Group Personal Accident cover with a sum assured of 5 L.
- Rewards & Recognition programmes
About us:
HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.
We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.
To know more, visit https://www.happyfox.com/
Responsibilities:
- Build and scale production infrastructure in AWS for the HappyFox platform and its products.
- Research and build/implement systems, services and tooling to improve the uptime, reliability and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
- Manage and patch servers running Unix-based operating systems such as Ubuntu Linux.
- Write automation scripts and build infrastructure tools using Python/Ruby/Bash/Golang.
- Implement consistent observability, deployment and IaC setups
- Patch production systems to fix security/performance issues
- Actively respond to escalations/incidents in the production environment from customers or the support team
- Mentor other Infrastructure engineers, review their work and continuously ship improvements to production infrastructure.
- Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
- Participate in infrastructure security audits
Requirements:
- At least 5 years of experience in handling/building Production environments in AWS.
- At least 2 years of programming experience in building API/backend services for customer-facing applications in production.
- Demonstrable knowledge of TCP/IP, HTTP and DNS fundamentals.
- Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
- Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
- Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
- Proficient in writing automation scripts in a scripting language such as Python, Ruby or Bash.
- Experience in setting up and managing test/staging environments, and CI/CD pipelines.
- Experience in IaC tools such as Terraform or AWS CDK
- Passion for making systems reliable, maintainable, scalable and secure.
- Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
- Bonus points if you have experience with Nginx, Postgres, Redis and Mongo in production.
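Automation scripts like those described above routinely have to tolerate transient failures (API throttling, brief network blips) when driving AWS or CI/CD APIs. A retry-with-exponential-backoff helper is a common building block; this is a generic sketch, not a HappyFox-specific tool, and the `sleep` parameter is made injectable purely so the helper can be exercised without real waiting:

```python
import time


def retry(fn, attempts=4, base_delay=0.5, sleep=time.sleep):
    """Call fn(), retrying on exception with exponential backoff.

    The sleep function is injectable so tests can record delays
    instead of actually waiting.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
```

Typical use wraps a flaky probe or API call, e.g. `retry(lambda: check_health(url))` where `check_health` is whatever hypothetical probe the script needs.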

Experience of Linux
Experience using Python or Shell scripting (for Automation)
Hands-on experience with Implementation of CI/CD Processes
Experience working with one or more cloud platforms (AWS, Azure or Google)
Experience working with configuration management tools such as Ansible & Chef
Experience working with Containerization tool Docker.
Experience working with Container Orchestration tool Kubernetes.
Experience in source control management, including SVN, Bitbucket and/or GitHub
Experience with the setup and management of monitoring tools such as Nagios, Sensu and Prometheus, or other popular tools
Hands-on experience in Linux, Scripting Language & AWS is mandatory
Troubleshoot and triage development and production issues
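The configuration-management skills above (Ansible/Chef) usually boil down to small idempotent playbooks. A minimal Ansible sketch; the `web` host group, the package choice, and the template file are hypothetical placeholders:

```yaml
# Illustrative Ansible playbook: install and configure nginx on a host group.
# The "web" group and site.conf.j2 template are hypothetical placeholders.
- name: Configure web servers
  hosts: web
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present
        update_cache: true

    - name: Deploy site config
      ansible.builtin.template:
        src: site.conf.j2
        dest: /etc/nginx/conf.d/site.conf
      notify: Reload nginx

  handlers:
    - name: Reload nginx
      ansible.builtin.service:
        name: nginx
        state: reloaded
```

Because each task declares desired state rather than steps, re-running the playbook is safe: unchanged hosts report "ok" and only drifted ones are modified.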
Our client is a call management solutions company, which helps small to mid-sized businesses use its virtual call center to manage customer calls and queries. It is an AI and cloud-based call operating facility that is affordable as well as feature-optimized. The advanced features offered like call recording, IVR, toll-free numbers, call tracking, etc are based on automation and enhances the call handling quality and process, for each client as per their requirements. They service over 6,000 business clients including large accounts like Flipkart and Uber.
- Being involved in configuration management, web services architectures, DevOps implementation, build & release management, database management, backups, and monitoring.
- Creating and managing CI/CD pipelines for microservice architectures.
- Creating and managing application configuration.
- Researching and planning architectures and tools for smooth deployments.
- Logging, metrics and alerting management.
What you need to have:
- Proficient in the Linux command line and troubleshooting.
- Proficient in designing CI/CD pipelines using Jenkins. Experience in deployment using Ansible.
- Experience in microservices architecture deployment; hands-on experience with Docker, Kubernetes, EKS.
- Knowledge of Infrastructure-as-Code tools such as Terraform, AWS CloudFormation, etc.
- Proficient in AWS services: deployment, monitoring and troubleshooting of applications in AWS.
- Configuration management tools like Ansible/Chef/Puppet.
- Proficient in deploying applications behind load balancers and proxy servers such as Nginx and Apache.
- Proficient in Bash scripting; Python scripting is an advantage.
- Experience with logging, monitoring and alerting tools like ELK (Elasticsearch, Logstash, Kibana) and Nagios; Graylog, Splunk, Prometheus and Grafana are a plus.
- Proficient in Configuration Management.
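Deployment behind a load balancer/proxy, as required above, typically means an Nginx upstream block in front of several application instances. A minimal sketch; the backend addresses and server name are hypothetical placeholders:

```nginx
# Illustrative Nginx reverse-proxy / load-balancer config.
# Backend addresses and server_name are hypothetical placeholders.
upstream app_backend {
    least_conn;                    # route each request to the least-busy backend
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
    server 10.0.1.12:8080 backup;  # used only if the primary servers are down
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The `X-Forwarded-For` header preserves the original client IP for the application logs, which matters for the call-tracking features this client offers.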

Cloud Software Engineer
Notice Period: 45 days / Immediate Joining
Banyan Data Services (BDS) is a US-based infrastructure services company, headquartered in San Jose, California, USA. It provides full-stack managed services to support business applications and data infrastructure. We provide data solutions and services on bare metal, on-prem, and all cloud platforms. Our engagement service is built on standard DevOps practice and the SRE model.
We offer you an opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be part of the cutting-edge solutions and services we offer, which address next-gen data evolution challenges, and who are willing to use their experience in Infrastructure Services, Software as a Service, and Cloud Services to create a niche in the market.
Roles and Responsibilities
· A wide variety of engineering projects, including data visualization, web services, data engineering, web portals, SDKs, and integrations in numerous languages, frameworks, and cloud platforms
· Apply continuous delivery practices to deliver high-quality software and value as early as possible.
· Work in collaborative teams to build new experiences
· Participate in the entire cycle of software consulting and delivery from ideation to deployment
· Integrating multiple software products across cloud and hybrid environments
· Developing processes and procedures for software applications migration to the cloud, as well as managed services in the cloud
· Migrating existing on-premises software applications to cloud leveraging a structured method and best practices
Desired Candidate Profile : *** freshers can also apply ***
· 2+ years of experience with one or more development languages such as Java, Python, or Spark.
· 1+ year of experience with private/public/hybrid cloud model design, implementation, orchestration, and support.
· Certification or completed training in any one of the cloud environments: AWS, GCP, Azure, Oracle Cloud, or DigitalOcean.
· Strong problem-solvers who are comfortable in unfamiliar situations, and can view challenges through multiple perspectives
· Driven to develop technical skills for oneself and team-mates
· Hands-on experience with cloud computing and/or traditional enterprise datacentre technologies, i.e., network, compute, storage, and virtualization.
· Possess at least one cloud-related certification from AWS, Azure, or equivalent
· Ability to write high-quality, well-tested code and comfort with Object-Oriented or functional programming patterns
· Past experience quickly learning new languages and frameworks
· Ability to work with a high degree of autonomy and self-direction
www.banyandata.com
- 3-6 years of relevant work experience in a DevOps role.
- Deep understanding of Amazon Web Services or equivalent cloud platforms.
- Proven record of infra automation and programming skills in any of these languages - Python, Ruby, Perl, Javascript.
- Implement DevOps Industry best practices and the application of procedures to achieve a continuously deployable system
- Continuously improve and increase the capabilities of the CI/CD pipeline
- Support engineering teams in the implementation of life-cycle infrastructure solutions and documentation operations in order to meet the engineering department's quality and standards
- Participate in production outages, handle complex issues and work towards resolution


You will be responsible for
1. Setting up and maintaining cloud (AWS/GCP/Azure) and Kubernetes clusters and automating their operation
2. All operational aspects of the Devtron platform, including maintenance, upgrades and automation
3. Providing Kubernetes expertise to facilitate smooth and fast customer onboarding onto the Devtron platform
Responsibilities:
1. Manage the Devtron platform on multiple Kubernetes clusters
2. Design and embed industry best practices for online services, including disaster recovery, business continuity, monitoring/alerting, and service health measurement
3. Provide operational support for day-to-day activities involving the deployment of services
4. Identify opportunities for improving the security, reliability, and scalability of the platform
5. Facilitate smooth and fast customer onboarding onto the Devtron platform
6. Drive customer engagement
Requirements:
● Bachelor's Degree in Computer Science or a related field.
● 2+ years of experience working as a DevOps engineer
● Proficient in 1 or more programming languages (e.g. Python, Go, Ruby).
● Familiar with shell scripts, Linux commands, network fundamentals
● Understanding of large scale distributed systems
● Basic understanding of cloud computing (AWS/GCP/Azure)
Preferred Qualifications:
● Great analytical and interpersonal skills
● Passion for creating efficient, reliable, reusable programs/scripts.
● Excited about technology, with a strong interest in learning about and playing with the latest technologies and doing POCs.
● Strong customer focus, ownership, urgency and drive.
● Knowledge and experience with cloud-native tools like Prometheus, Kubernetes, Docker and Grafana.
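The cloud-native tooling named above (Kubernetes plus Prometheus) can be pictured with a minimal Deployment manifest whose pods advertise a metrics endpoint via the conventional scrape annotations. The image name and port are hypothetical placeholders:

```yaml
# Illustrative Kubernetes Deployment; image and port are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-service
  template:
    metadata:
      labels:
        app: demo-service
      annotations:
        prometheus.io/scrape: "true"   # conventional hint for Prometheus discovery
        prometheus.io/port: "8080"
    spec:
      containers:
        - name: demo-service
          image: example.registry/demo-service:1.0.0
          ports:
            - containerPort: 8080
```

With a Prometheus server configured for Kubernetes service discovery, pods carrying these annotations are scraped automatically, and Grafana dashboards are then built on the resulting metrics.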

