Internshala is a dot com business with the heart of dot org.
We are a technology company on a mission to equip students with relevant skills & practical exposure through internships, fresher jobs, and online trainings. Imagine a world full of freedom and possibilities. A world where you can discover your passion and turn it into your career. A world where your practical skills matter more than your university degree. A world where you do not have to wait till 21 to taste your first work experience (and get a rude shock that it is nothing like you had imagined it to be). A world where you graduate fully assured, fully confident, and fully prepared to stake a claim on your place in the world.
At Internshala, we are making this dream a reality!
👩🏻💻 Your responsibilities would include-
- Building and maintaining operational tools for monitoring and analysis of AWS infrastructure and systems
- Actively monitoring the health and performance of all systems and performing benchmarking and tuning of system applications and operating systems
- Setting up container orchestration using Kubernetes or another orchestration system for a monolithic application
- Continually working with development engineers to design the best system architectures and solutions
- Troubleshooting and resolving issues in our development, test, and production environments
- Maintaining reliability of the system and being on-call for mission-critical systems
- Performing infrastructure cost analysis and optimization
- Ensuring systems’ compliance with operational risk standards (e.g. network, firewall, OS, logging, monitoring, availability, resiliency)
- Building, mentoring and leading a team of young professionals, if the need arises
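One of the responsibilities above, infrastructure cost analysis, can be sketched as a simple roll-up over resource inventory. A minimal illustration in Python; the service names and hourly rates below are hypothetical, not real AWS prices:

```python
# Hypothetical sketch of an infrastructure cost roll-up, the kind of
# operational tooling described above. Rates are illustrative only.
HOURS_PER_MONTH = 730  # average hours in a month

def monthly_costs(resources):
    """Aggregate estimated monthly spend per service."""
    totals = {}
    for r in resources:
        cost = r["hourly_rate"] * HOURS_PER_MONTH * r["count"]
        totals[r["service"]] = totals.get(r["service"], 0.0) + cost
    return totals

def top_cost_drivers(totals, n=3):
    """Return the n most expensive services, largest first."""
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

resources = [
    {"service": "EC2", "hourly_rate": 0.10, "count": 12},
    {"service": "RDS", "hourly_rate": 0.25, "count": 2},
    {"service": "S3",  "hourly_rate": 0.01, "count": 1},
]
totals = monthly_costs(resources)
```

A real tool would pull usage from billing exports rather than a hand-written list, but the aggregate-then-rank shape is the same.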
🍒 You will get-
- A chance to build and lead an awesome team working on one of the best recruitment and online trainings products in the world that impact millions of lives for the better
- Awesome colleagues & a great work environment
- Loads of autonomy and freedom in your work
💯 You fit the bill if-
- You are proficient with bash, git and git workflows
- You have 3-5 years of experience as a DevOps Engineer or similar software engineering role
- You have excellent attention to detail
- AWS certification preferred but not mandatory

About Internshala
Internshala is six years old and on its way to solving a problem that is at least 50 years old - the problem of meaningful internships. Millions of students, just like you, struggle to find an internship every year - we are changing that.
Wondering what it is like to work at Internshala? Catch a glimpse of Internshala work culture at https://internshala.com/culture
About Us:
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building Products and Solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Our Values:
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement:
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, or national origin. We provide equal opportunities in employment, advancement, and all other areas of our workplace.
Job Title: Senior DevOps Engineer
Location: Noida (Hybrid)
The Opportunity:
We are seeking a highly skilled and experienced Senior DevOps Engineer to join our team. The ideal candidate will have extensive expertise in modern DevOps tools and practices, particularly in managing CI/CD pipelines, infrastructure as code, and cloud-native environments. This role involves designing, implementing, and maintaining robust, scalable, and efficient infrastructure and deployment pipelines to support our development and operations teams.
Mandatory Skills required: GCP, DevOps, Terraform, Kubernetes, Docker, CI/CD, GitHub Actions, Helm Charts
Required Skills and Experience:
- 4+ years of experience in DevOps, infrastructure automation, or related fields.
- Advanced expertise in Terraform for infrastructure as code.
- Solid experience with Helm for managing Kubernetes applications.
- Proficient with GitHub for version control, repository management, and workflows.
- Extensive experience with Kubernetes for container orchestration and management.
- In-depth understanding of Google Cloud Platform (GCP) services and architecture.
- Strong scripting and automation skills (e.g., Python, Bash, or equivalent).
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration abilities in agile development environments.
Preferred Qualifications:
- Experience with other CI/CD tools (e.g., Jenkins, GitLab CI/CD).
- Knowledge of additional cloud platforms (e.g., AWS, Azure).
- Certification in Kubernetes (CKA/CKAD) or Google Cloud (GCP Professional DevOps Engineer).
About Eazeebox
Eazeebox is India’s first B2B Quick Commerce platform for home electrical goods. We empower electrical retailers with access to 100+ brands, flexible credit options, and 4-hour delivery—making supply chains faster, smarter, and more efficient. Our tech-driven approach enables sub-3 hour inventory-aware fulfilment across micro-markets, with a goal of scaling to 50+ orders/day per store.
About the Role
We’re looking for a DevOps Engineer to help scale and stabilize the cloud-native backbone that powers Eazeebox. You’ll play a critical role in ensuring our microservices architecture remains reliable, responsive, and performant—especially during peak retailer ordering windows. This is a high-ownership role for an "all-rounder" who is passionate about designing scalable architectures, writing robust code, and ensuring seamless deployments and operations.
What You'll Be Doing
As a critical member of our small, dedicated team, you will take on a versatile role encompassing development, infrastructure, and operations.
Cloud & DevOps Ownership
- Architect and implement containerized services on AWS (S3, EC2, ECS, ECR, CodeBuild, Lambda, Fargate, RDS, CloudWatch) under secure IAM policies.
- Take ownership of CI/CD pipelines, optimizing and managing GitHub Actions workflows.
- Configure and manage microservice versioning and CI/CD deployments.
- Implement secrets rotation and IP-based request rate limiting for enhanced security.
- Configure auto-scaling instances and Kubernetes for high-workload microservices to ensure performance and cost efficiency.
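The IP-based request rate limiting listed above can be sketched as a fixed-window counter. A minimal in-memory illustration; a real deployment would typically keep these counters in Redis (or lean on an API gateway) so that all service replicas share state:

```python
import time

class FixedWindowRateLimiter:
    """Minimal in-memory sketch of IP-based request rate limiting.
    A production version would store counters in shared state (e.g.
    Redis); this only illustrates the fixed-window logic."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counters = {}  # ip -> (window_start, request_count)

    def allow(self, ip, now=None):
        now = time.time() if now is None else now
        start, count = self.counters.get(ip, (now, 0))
        if now - start >= self.window:   # window expired: start fresh
            start, count = now, 0
        if count >= self.limit:          # over the limit: reject
            self.counters[ip] = (start, count)
            return False
        self.counters[ip] = (start, count + 1)
        return True

limiter = FixedWindowRateLimiter(limit=3, window_seconds=60)
results = [limiter.allow("10.0.0.1", now=100.0) for _ in range(4)]
# first three requests in the window are allowed, the fourth is rejected
```

Fixed windows allow brief bursts at window boundaries; sliding-window or token-bucket variants smooth that out at the cost of slightly more bookkeeping.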
Backend & API Design
- Design, build, and maintain scalable REST/OpenAPI services in Django (DRF), WebSocket implementations, and asynchronous microservices in FastAPI.
- Model relational data in PostgreSQL 17 and optimize with Redis for caching and pub/sub.
- Orchestrate background tasks using Celery or RQ with Redis Streams or Amazon SQS.
- Collaborate closely with the frontend team (React/React Native) to define and build robust APIs.
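The Redis caching mentioned above commonly follows a cache-aside pattern. A minimal sketch with a plain dict standing in for the Redis client, and a hypothetical `fetch_product` loader standing in for a PostgreSQL query (redis-py's `get`/`set` are the analogous real calls):

```python
# Cache-aside sketch. The dict stands in for a Redis client;
# fetch_product is a hypothetical stand-in for a PostgreSQL query.
cache = {}
db_calls = []  # records which keys actually hit the "database"

def fetch_product(product_id):
    db_calls.append(product_id)  # pretend this runs a SQL query
    return {"id": product_id, "name": f"product-{product_id}"}

def get_product(product_id):
    key = f"product:{product_id}"
    if key in cache:                   # cache hit: skip the database
        return cache[key]
    value = fetch_product(product_id)  # cache miss: load from source
    cache[key] = value                 # real code would also set a TTL
    return value

first = get_product(42)
second = get_product(42)  # served from cache; no second DB call
```

With the real client the read/populate pair becomes `redis.get(key)` / `redis.set(key, value, ex=ttl)`, and the value is serialized (e.g. JSON) before storage.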
Testing & Observability
- Ensure code quality via comprehensive testing using Pytest, React Testing Library, and Playwright.
- Instrument applications with CloudWatch metrics, contributing to our observability strategy.
- Maintain a Git-centric development workflow, including branching strategies and pull-request discipline.
Qualifications & Skills
Must-Have
- Experience: 2-4 years of hands-on experience delivering production-level full-stack applications with a strong emphasis on backend and DevOps.
- Backend Expertise: Proficiency in Python, with strong command of Django or FastAPI, including async Python patterns and REST best practices.
- Database Skills: Strong SQL skills with PostgreSQL; practical experience with Redis for caching and messaging.
- Cloud & DevOps Mastery: Hands-on experience with Docker and Kubernetes/EKS fundamentals.
- AWS Proficiency: Experience deploying and managing services on AWS (EC2, S3, RDS, Lambda, ECS Fargate, ECR, SQS).
- CI/CD: Deep experience with GitHub Actions or similar platforms, including semantic-release, Blue-Green Deployments, and artifact signing.
- Automation: Fluency in Python/Bash or Go for automation scripts; comfort with YAML.
- Ownership Mindset: Entrepreneurial spirit, strong sense of ownership, and ability to deliver at scale.
- Communication: Excellent written and verbal communication skills; comfortable in async and distributed team environments.
Nice-to-Have
- Frontend Familiarity: Exposure to React with Redux Toolkit and React Native.
- Event Streaming: Experience with Kafka or Amazon EventBridge.
- Serverless: Knowledge of AWS Lambda, Step Functions, CloudFront Functions, or Cloudflare Workers.
- Observability: Familiarity with Datadog, Posthog, Prometheus/Grafana/Loki.
- Emerging Tech: Interest in GraphQL (Apollo Federation) or generative AI frameworks (Amazon Bedrock, LangChain) and AI/ML.
Key Responsibilities
- Architectural Leadership: Design and lead the technical strategy for migrating our platform from a monolithic to a microservices architecture.
- System Design: Translate product requirements into scalable, secure, and reliable system designs.
- Backend Development: Build and maintain core backend services using Python (Django/FastAPI).
- CI/CD & Deployment: Own and manage CI/CD pipelines for multiple services using GitHub Actions, AWS CodeBuild, and automated deployments.
- Infrastructure & Operations: Deploy production-grade microservices using Docker, Kubernetes, and AWS EKS.
- FinOps & Performance: Drive cloud cost optimization and implement auto-scaling for performance and cost-efficiency.
- Security & Observability: Implement security, monitoring, and compliance using tools like Prometheus, Grafana, Datadog, Posthog, and Loki to ensure 99.99% uptime.
- Collaboration: Work with product and development teams to align technical strategy with business growth plans.

Staff DevOps Engineer with Azure
EGNYTE YOUR CAREER. SPARK YOUR PASSION.
Egnyte is a place where we spark opportunities for amazing people. We believe that every role has meaning, and every Egnyter should be respected. With 22,000+ customers worldwide and growing, you can make an impact by protecting their valuable data. When joining Egnyte, you’re not just landing a new career; you become part of a team of Egnyters who are doers, thinkers, and collaborators who embrace and live by our values:
Invested Relationships
Fiscal Prudence
Candid Conversations
ABOUT EGNYTE
Egnyte is the secure multi-cloud platform for content security and governance that enables organizations to better protect and collaborate on their most valuable content. Established in 2008, Egnyte has democratized cloud content security for more than 22,000 organizations, helping customers improve data security, maintain compliance, prevent and detect ransomware threats, and boost employee productivity on any app, any cloud, anywhere. For more information, visit www.egnyte.com.
Our Production Engineering team enables Egnyte to give customers access to their data 24/7 by providing best-in-class infrastructure.
ABOUT THE ROLE
We store billions of files and multiple petabytes of data, and we observe more than 11K API requests per second on average. To make that possible and to provide the best possible experience, we rely on great engineers. For us, people who own their work from start to finish are integral. Our engineers are part of the process from design to code, to test, to deployment, and back again for further iterations. You can, and will, touch every level of the infrastructure depending on the day and the project you are working on. The ideal candidate should be able to take a complex problem and execute end to end, while mentoring and setting higher standards for the rest of the team and for new hires.
WHAT YOU’LL DO:
• Design, build and maintain self-hosted and cloud environments to serve our own applications and services.
• Collaborate with software developers to build stable, scalable and high-performance solutions.
• Take part in big projects like migrating solutions from self-hosted environments to the cloud, from virtual machines to Kubernetes, and from monolith to microservices.
• Proactively make our organization and technology better!
• Advise others on how DevOps can make a positive impact on their work.
• Share knowledge and mentor more junior team members while still learning and gaining new skills yourself.
• Maintain consistently high standards of communication, productivity, and teamwork across all teams.
YOUR QUALIFICATIONS:
• 5+ years of proven experience in a DevOps Engineer, System Administrator or Developer role, working on infrastructure or build processes.
• Expert knowledge of Microsoft Azure.
• Programming prowess (Python, Golang).
• Knowledge of and experience with deploying and maintaining Java and Python apps using application and web servers (Tomcat, Nginx, etc.).
• Ability to solve complex problems with simple, elegant and clean code.
• Practical knowledge of CI/CD solutions, GitLab CI or similar.
• Practical knowledge of Docker as a tool for testing and building an environment.
• Knowledge of Kubernetes and related technologies.
• Experience with metric-based monitoring solutions.
• Solid English skills to effectively communicate with other team members.
• Good understanding of the Linux Operating System on the administration level.
• Drive to grow as a DevOps Engineer (we value open-mindedness and a can-do attitude).
• Strong sense of ownership and ability to drive big projects.
BONUS SKILLS:
• Work experience as a Microsoft Azure architect.
• Experience in Cloud migrations projects.
• Leadership skills and experience.
COMMITMENT TO DIVERSITY, EQUITY, AND INCLUSION:
At Egnyte, we celebrate our differences and thrive on our diversity for our employees, our products, our customers, our investors, and our communities. Egnyters are encouraged to bring their whole selves to work and to appreciate the many differences that collectively make Egnyte a higher-performing company and a great place to be.
· Strong knowledge on Windows and Linux
· Experience working in Version Control Systems like git
· Hands-on experience with tools such as Docker, SonarQube, Ansible, Kubernetes, and ELK.
· Basic understanding of SQL commands
· Experience working on Azure Cloud DevOps

Bachelor's degree in information security, computer science, or related.
Strong DevOps experience of at least 4 years
Strong Experience in Unix/Linux/Python scripting
Strong networking knowledge; vSphere networking stack knowledge desired.
Experience on Docker and Kubernetes
Experience with cloud technologies (AWS/Azure)
Exposure to continuous integration/delivery tools such as Jenkins or Spinnaker
Exposure to configuration management systems such as Ansible
Knowledge of resource monitoring systems
Ability to scope and estimate
Strong verbal and written communication skills
Advanced knowledge of Docker and Kubernetes.
Exposure to Blockchain as a Service (BaaS) platforms like Chainstack, IBM Blockchain Platform, Oracle Blockchain Cloud, Rubix, VMware, etc.
Capable of provisioning and maintaining local enterprise blockchain platforms for development and QA (Hyperledger Fabric/BaaS/Corda/ETH).
Introduction
Synapsica is a series-A funded HealthTech startup founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while being affordable. Every patient has the right to know exactly what is happening in their body, and they should not have to rely on cryptic two-liners given to them as a diagnosis.
Towards this aim, we are building an artificial intelligence-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, Y Combinator, and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here’s a small sample of what we’re building: https://www.youtube.com/watch?v=FR6a94Tqqls
Your Roles and Responsibilities
The Lead DevOps Engineer will be responsible for the management, monitoring and operation of our applications and services in production. The DevOps Engineer will be a hands-on person who can work independently or with minimal guidance and has the ability to drive the team’s deliverables by mentoring and guiding junior team members. You will work with the existing teams very closely and build on top of tools like Kubernetes, Docker and Terraform and support our numerous polyglot services.
Introducing a strong DevOps ethic into the rest of the team is crucial, and we expect you to lead the team on best practices in deployment, monitoring, and tooling. You'll work collaboratively with software engineering to deploy and operate our systems, help automate and streamline our operations and processes, build and maintain tools for deployment, monitoring, and operations and troubleshoot and resolve issues in our development, test and production environments. The position is based in our Bangalore office.
Primary Responsibilities
- Providing strategies and creating pathways in support of product initiatives in DevOps and automation, with a focus on the design of systems and services that run on cloud platforms.
- Optimizations and execution of the CI/CD pipelines of multiple products and timely promotion of the releases to production environments
- Ensuring that mission critical applications are deployed and optimised for high availability, security & privacy compliance and disaster recovery.
- Strategize, implement and verify secure coding techniques, integrate code security tools for Continuous Integration
- Ensure analysis, efficiency, responsiveness, scalability and cross-platform compatibility of applications through captured metrics, testing frameworks, and debugging methodologies.
- Technical documentation through all stages of development
- Establish strong relationships, and proactively communicate, with team members as well as individuals across the organisation
Requirements
- A minimum of 6 years of experience with DevOps tools.
- Working experience with Linux and container orchestration and management technologies (Docker, Kubernetes, EKS, ECS, etc.).
- Hands-on experience with "infrastructure as code" solutions (Cloudformation, Terraform, Ansible etc).
- Background of building and maintaining CI/CD pipelines (Gitlab-CI, Jenkins, CircleCI, Github actions etc).
- Experience with the Hashicorp stack (Vault, Packer, Nomad etc).
- Hands-on experience in building and maintaining monitoring/logging/alerting stacks (ELK stack, Prometheus stack, Grafana etc).
- DevOps mindset and experience with Agile/Scrum methodology
- Basic knowledge of storage and databases (SQL and NoSQL)
- Good understanding of networking technologies, HAProxy, firewalling and security.
- Experience in Security vulnerability scans and remediation
- Experience in API security and credentials management
- Worked on Microservice configurations across dev/test/prod environments
- Ability to quickly adapt to new languages and technologies
- A strong team player attitude with excellent communication skills.
- Very high sense of ownership.
- Deep interest and passion for technology
- Ability to plan projects, execute them and meet the deadline
- Excellent verbal and written English communication.
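The monitoring/alerting stacks listed above (ELK, Prometheus, Grafana) revolve around alert rules. A toy sketch of the "fire only if the breach persists" behavior of a Prometheus `for:` clause; the thresholds and samples are illustrative, not a real rule engine:

```python
def alert_firing(samples, threshold, for_seconds):
    """Return True if the metric stayed above `threshold` for at least
    `for_seconds` of consecutive samples, mimicking the `for:` clause
    of a Prometheus alerting rule. `samples` is [(timestamp, value)]."""
    breach_start = None
    for ts, value in samples:
        if value > threshold:
            if breach_start is None:
                breach_start = ts          # breach begins
            if ts - breach_start >= for_seconds:
                return True                # breach lasted long enough
        else:
            breach_start = None            # any dip resets the timer
    return False

# CPU usage (%) sampled every 30 seconds: a sustained breach above 90
cpu = [(0, 50), (30, 95), (60, 97), (90, 96), (120, 98)]
firing = alert_firing(cpu, threshold=90, for_seconds=60)
```

The duration check is what separates a transient spike (one bad scrape) from a real incident worth paging on-call for.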
Engineering Leader, Cloud Infrastructure
Bengaluru, Karnataka, India
Do you thrive on solving complex technical problems? Do you want to be at the cutting edge of technology? If so, we’re interested in speaking with you!
Your Impact:
We’re looking for a seasoned engineering leader for the Cloud team that is responsible for building, operating, and maintaining a customer-facing DBaaS service in multiple public clouds (AWS, GCP, and Azure). The service supports unified multiverse management of YugabyteDB, including fault-domain-aware provisioning, rolling upgrades, security, networking, monitoring, and day-2 operations (backups, scaling, billing, etc.). If you’re a strong leader who exemplifies collaboration, who is driven and thrives in a fast-paced startup environment, and who has a strong desire to build an internet-scale, extensible cloud-based service with a strong emphasis on simplicity and user experience, this job is for you.
You Will:
Lead, inspire, and influence to make sure your team is successful
Partner with the recruiting team to attract and retain high-quality and diverse talent
Establish great rapport with other development teams, Product Managers, Sales, and Customer Success to maintain high levels of visibility, efficiency, and collaboration
Ensure teams have appropriate technical direction, leadership, and balance between short-term impact and long-term architectural vision.
Occasionally contribute to development tasks such as coding and feature verification to assist teams with release commitments, to gain an understanding of the deeply technical product, and to keep your technical acumen sharp.
You'll need:
BS/MS degree in CS-or- a related field with 5+ years of engineering management experience leading productive, high-functioning teams
Strong fundamentals in distributed systems design and development
Ability to hire while ensuring a high hiring bar, keep engineers motivated, coach/mentor, and handle performance management
Experience running production services in Public Clouds such as AWS, GCP, and Azure
Experience with running large stateful data systems in the Cloud
Prior knowledge of Cloud architecture and implementation features (multi-tenancy, containerization, orchestration, elastic scalability)
A great track record of shipping features and hitting deadlines consistently; should be able to move fast,build in increments and iterate; have a sense of urgency, aggressive mindset towards achieving results and excellent prioritization skills; able to anticipate future technical needs for the product and craft plans to realize them
Ability to influence the team, peers, and upper management using effective communication and collaborative techniques; focused on building and maintaining a culture of collaboration within the team.

Skills: Python, Docker or Ansible, AWS
➢ Experience building a multi-region, highly available, auto-scaling infrastructure that optimizes performance and cost; plan for future infrastructure as well as maintain and optimize existing infrastructure.
➢ Conceptualize, architect, and build automated deployment pipelines in a CI/CD environment like Jenkins.
➢ Conceptualize, architect, and build a containerized infrastructure using Docker, Mesosphere, or similar platforms.
➢ Work with developers to institute systems, policies, and workflows which allow for rollback of deployments.
➢ Triage releases of applications to the production environment on a daily basis.
➢ Interface with developers and triage SQL queries that need to be executed in production environments.
➢ Maintain a 24/7 on-call rotation to respond to and support troubleshooting of issues in production.
➢ Assist the developers and on-calls for other teams with post-mortems, follow-up, and review of issues affecting production availability.
➢ Establish and enforce systems monitoring tools and standards.
➢ Establish and enforce risk assessment policies and standards.
➢ Establish and enforce escalation policies and standards.
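The rollback workflows mentioned above boil down to keeping a release history and restoring the previous version on demand. A minimal sketch of that bookkeeping; the version strings are illustrative:

```python
class DeploymentHistory:
    """Sketch of rollback bookkeeping: record each release and
    restore the previous one on demand. In practice the "history"
    lives in your deployment tool (tagged images, Helm revisions,
    etc.); this only shows the logic."""

    def __init__(self):
        self.history = []

    def deploy(self, version):
        self.history.append(version)
        return version

    def rollback(self):
        if len(self.history) < 2:
            raise RuntimeError("no earlier release to roll back to")
        self.history.pop()        # discard the bad release
        return self.history[-1]   # previous release becomes current

releases = DeploymentHistory()
releases.deploy("v1.4.0")
releases.deploy("v1.5.0")
current = releases.rollback()  # back to "v1.4.0"
```

Tools like `kubectl rollout undo` or `helm rollback` implement the same idea: the previous known-good artifact must still exist, which is why pipelines keep old images around rather than overwriting tags.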

