11+ Routing Protocols Jobs in Pune
Apply to 11+ Routing Protocols jobs in Pune on CutShort.io. Explore the latest Routing Protocols job opportunities across top companies like Google, Amazon & Adobe.
🚀 RECRUITING BOND HIRING
Role: CLOUD OPERATIONS & MONITORING ENGINEER - (THE GUARDIAN OF UPTIME)
⚡ THIS IS NOT A MONITORING ROLE
THIS IS A COMMAND ROLE
You don’t watch dashboards.
You control outcomes.
You don’t react to incidents.
You eliminate them before they escalate.
This role powers an AI-driven SaaS + IoT platform where:
---> Uptime is non-negotiable
---> Latency is hunted
---> Failures are never allowed to repeat
Incidents don’t grow.
Problems don’t hide.
Uptime is enforced.
🧠 WHAT YOU’LL OWN
(Real Work. Real Impact.)
🔍 Total Observability
---> Real-time visibility across cloud, application, database & infrastructure
---> High-signal dashboards (Grafana + cloud-native tools)
---> Performance trends tracked before growth breaks systems
🚨 Smart Alerting (No Noise)
---> Alerts that fire only when action is required
---> Zero false positives. Zero alert fatigue
Right signal → right person → right time
⚙ Automation as a Weapon
---> End-to-end automation of operational tasks
---> Standardized logging, metrics & alerting
---> Systems that scale without human friction
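To make the standardized logging and low-noise alerting above concrete, here is a minimal, illustrative Python sketch of a uniform JSON log format plus a threshold-based alert check. The field names, service name, and 5% error-rate threshold are assumptions for the example, not this platform's actual configuration.

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per line so every service logs the same fields."""
    def format(self, record):
        return json.dumps({
            "ts": self.formatTime(record, "%Y-%m-%dT%H:%M:%S"),
            "level": record.levelname,
            "service": getattr(record, "service", "unknown"),
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("ops")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Illustrative alert rule: page only when action is actually required.
ERROR_RATE_THRESHOLD = 0.05  # assumed value, not a real SLO

def check_error_rate(errors: int, total: int, service: str) -> None:
    """Raise an actionable alert only when the error rate crosses the threshold."""
    rate = errors / total if total else 0.0
    if rate > ERROR_RATE_THRESHOLD:
        log.error("error rate %.2f%% exceeds threshold", rate * 100,
                  extra={"service": service})

check_error_rate(errors=12, total=200, service="checkout-api")  # hypothetical numbers
```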
🧯 Incident Command & Reliability
---> First responder for critical incidents (on-call rotation)
---> Root cause analysis across network, app, DB & storage
Fix fast — then harden so it never breaks the same way again
📘 Operational Excellence
---> Battle-tested runbooks
---> Documentation that actually works under pressure
Every incident → a stronger platform
🛠️ TECHNOLOGIES YOU’LL MASTER
☁ Cloud: AWS | Azure | Google Cloud
📊 Monitoring: Grafana | Metrics | Traces | Logs
📡 Alerting: Production-grade alerting systems
🌐 Networking: DNS | Routing | Load Balancers | Security
🗄 Databases: Production systems under real pressure
⚙ DevOps: Automation | Reliability Engineering
🎯 WHO WE’RE LOOKING FOR
Engineers who take uptime personally.
You bring:
---> 3+ years in Cloud Ops / DevOps / SRE
---> Live production SaaS experience
---> Deep AWS / Azure / GCP expertise
---> Strong monitoring & alerting experience
---> Solid networking fundamentals
---> Calm, methodical incident response
Bonus (Highly Preferred):
---> B2B SaaS + IoT / hybrid platforms
---> Strong automation mindset
---> Engineers who think in systems, not tickets
💼 JOB DETAILS
📍 Bengaluru
🏢 Hybrid (WFH)
💰 (Final CTC depends on experience & interviews)
🌟 WHY THIS ROLE?
Most cloud teams manage uptime. We weaponize it.
Your work won’t just keep systems running — it will keep customers confident, operations flawless, and competitors wondering how it all works so smoothly.
📩 APPLY / REFER: Know someone who lives for reliability, observability & cloud excellence? Refer them, or apply yourself.
Job Description
Senior Cloud Infrastructure Engineer for Data Platform
Experience: 5 - 9 years
Location: Bangalore/Pune/Hyderabad
Work Mode: Hybrid (3 days WFO)
The ideal candidate will play a critical role in designing, implementing, and maintaining cloud infrastructure and CI/CD pipelines to support scalable, secure, and efficient data and analytics solutions. This role requires a strong understanding of cloud-native technologies, DevOps best practices, and hands-on experience with Azure and Databricks.
Key Responsibilities:
Cloud Infrastructure Design & Management
Architect, deploy, and manage scalable and secure cloud infrastructure on Microsoft Azure.
Implement best practices for Azure Resource Management, including resource groups, virtual networks, and storage accounts.
Optimize cloud costs and ensure high availability and disaster recovery for critical systems.
Databricks Platform Management
Set up, configure, and maintain Databricks workspaces for data engineering, machine learning, and analytics workloads.
Automate cluster management, job scheduling, and monitoring within Databricks.
Collaborate with data teams to optimize Databricks performance and ensure seamless integration with Azure services.
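As one illustration of automating cluster management and job scheduling, the sketch below calls the public Databricks REST API (Clusters 2.0 and Jobs 2.1) with plain requests. The host and token environment variables and the job ID are placeholders, and this approach is an assumption for the example rather than the team's actual tooling; the Databricks SDK or Terraform provider would work equally well.

```python
import os
import requests

# Placeholders: set these for your own workspace.
HOST = os.environ["DATABRICKS_HOST"]      # e.g. an Azure Databricks workspace URL
TOKEN = os.environ["DATABRICKS_TOKEN"]    # personal access token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def list_clusters():
    """Return the clusters visible to the token, e.g. for a periodic health/cost report."""
    resp = requests.get(f"{HOST}/api/2.0/clusters/list", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json().get("clusters", [])

def run_job(job_id: int):
    """Trigger an existing Databricks job (e.g. a nightly ETL) from a pipeline step."""
    resp = requests.post(f"{HOST}/api/2.1/jobs/run-now",
                         headers=HEADERS, json={"job_id": job_id}, timeout=30)
    resp.raise_for_status()
    return resp.json()["run_id"]

if __name__ == "__main__":
    for cluster in list_clusters():
        print(cluster["cluster_name"], cluster["state"])
```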
CI/CD Pipeline Development
Design and implement CI/CD pipelines for deploying infrastructure, applications, and data workflows using tools like Azure DevOps, GitHub Actions, or similar.
Automate testing, deployment, and monitoring processes to ensure rapid and reliable delivery of updates.
Monitoring & Incident Management
Implement monitoring and alerting solutions using tools like Dynatrace, Azure Monitor, Log Analytics, and Databricks metrics.
Troubleshoot and resolve infrastructure and application issues, ensuring minimal downtime.
Security & Compliance
Enforce security best practices, including identity and access management (IAM), encryption, and network security.
Ensure compliance with organizational and regulatory standards for data protection and cloud operations.
Collaboration & Documentation
Work closely with cross-functional teams, including data engineers, software developers, and business stakeholders, to align infrastructure with business needs.
Maintain comprehensive documentation for infrastructure, processes, and configurations.
Required Qualifications
Education: Bachelor’s degree in Computer Science, Engineering, or a related field.
Must Have Experience:
6+ years of experience in DevOps or Cloud Engineering roles.
Proven expertise in Microsoft Azure services, including Azure Data Lake, Azure Databricks, Azure Data Factory (ADF), Azure Functions, Azure Kubernetes Service (AKS), and Azure Active Directory.
Hands-on experience with Databricks for data engineering and analytics.
Technical Skills:
Proficiency in Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Bicep.
Strong scripting skills in Python or Bash.
Experience with containerization and orchestration tools like Docker and Kubernetes.
Familiarity with version control systems (e.g., Git) and CI/CD tools (e.g., Azure DevOps, GitHub Actions).
Soft Skills:
Strong problem-solving and analytical skills.
Excellent communication and collaboration abilities.
Role: DevOps Engineer
Experience: 7+ Years
Location: Pune / Trivandrum
Work Mode: Hybrid
Key Responsibilities:
- Drive CI/CD pipelines for microservices and cloud architectures
- Design and operate cloud-native platforms (AWS/Azure)
- Manage Kubernetes/OpenShift clusters and containerized applications
- Develop automated pipelines and infrastructure scripts
- Collaborate with cross-functional teams on DevOps best practices
- Mentor development teams on continuous delivery and reliability
- Handle incident management, troubleshooting, and root cause analysis
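As a small illustration of the Kubernetes management and incident-troubleshooting work above, the sketch below uses the official kubernetes Python client to flag pods that are not healthy. The namespace is a placeholder assumption, not part of this role's actual setup.

```python
from kubernetes import client, config

def report_unhealthy_pods(namespace: str = "default") -> list[str]:
    """List pods that are not Running/Succeeded -- a quick triage step during incidents."""
    config.load_kube_config()   # or config.load_incluster_config() when run inside a pod
    v1 = client.CoreV1Api()
    unhealthy = []
    for pod in v1.list_namespaced_pod(namespace).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            unhealthy.append(f"{pod.metadata.name}: {pod.status.phase}")
    return unhealthy

if __name__ == "__main__":
    for line in report_unhealthy_pods("default"):
        print(line)
```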
Mandatory Skills:
- 7+ years in DevOps/SRE roles
- Strong experience with AWS or Azure
- Hands-on with Docker, Kubernetes, and/or OpenShift
- Proficiency in Jenkins, Git, Maven, JIRA
- Strong scripting skills (Shell, Python, Perl, Ruby, JavaScript)
- Solid networking knowledge and troubleshooting skills
- Excellent communication and collaboration abilities
Preferred Skills:
- Experience with Helm, monitoring tools (Splunk, Grafana, New Relic, Datadog)
- Knowledge of Microservices and SOA architectures
- Familiarity with database technologies
Preferred Education & Experience:
• Bachelor's or master's degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics or a related technical field, or equivalent practical experience. Relevant experience of at least 3 years in lieu of the above if from a different stream of education.
• Well-versed in DevOps principles & practices and hands-on DevOps tool-chain integration experience: Release Orchestration & Automation, Source Code & Build Management, Code Quality & Security Management, Behavior Driven Development, Test Driven Development, Continuous Integration, Continuous Delivery, Continuous Deployment, and Operational Monitoring & Management; extra points if you can demonstrate your knowledge with working examples.
• Hands-on, demonstrable working experience with DevOps tools and platforms viz., Slack, Jira, GIT, Jenkins, Code Quality & Security Plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, Tekton, StackStorm, Prometheus, Grafana, ELK, PagerDuty, VictorOps, etc.
• Well-versed in Virtualization & Containerization; must demonstrate experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
• Well-versed in AWS and/or Azure and/or Google Cloud; must demonstrate experience in at least FIVE (5) services offered under AWS and/or Azure and/or Google Cloud in any of these categories: Compute, Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, Security, Identity & Compliance; or equivalent demonstrable Cloud Platform experience (see the illustrative sketch after this list).
• Well-versed, with demonstrable working experience, in API Management, API Gateway, Service Mesh, Identity & Access Management, and Data Protection & Encryption tools & platforms.
• Hands-on programming experience in core Java and/or Python and/or JavaScript and/or Scala; freshers passing out of college or lateral movers into IT must be able to code in the languages they have studied.
• Well-versed with Storage, Networks and Storage Networking basics, which will enable you to work in a Cloud environment.
• Well-versed with Network, Data, and Application Security basics, which will enable you to work in a Cloud as well as Business Applications / API services environment.
• Extra points if you are certified in AWS and/or Azure and/or Google Cloud.
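As an illustration of the multi-service cloud experience asked for above, here is a minimal AWS sketch using boto3. The region and the choice of EC2 and S3 are assumptions made purely for the example; credentials are expected to come from the environment.

```python
import boto3

# Assumed region; credentials come from the environment or an instance profile.
session = boto3.Session(region_name="ap-south-1")

def running_instances() -> list[str]:
    """Return the IDs of running EC2 instances (Compute category)."""
    ec2 = session.client("ec2")
    resp = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    return [
        inst["InstanceId"]
        for reservation in resp["Reservations"]
        for inst in reservation["Instances"]
    ]

def bucket_names() -> list[str]:
    """Return all S3 bucket names (Storage category)."""
    s3 = session.client("s3")
    return [b["Name"] for b in s3.list_buckets()["Buckets"]]

if __name__ == "__main__":
    print("Running instances:", running_instances())
    print("Buckets:", bucket_names())
```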
Required Experience: 5+ Years
Job Location: Remote/Pune
- Public clouds, such as AWS, Azure, or Google Cloud Platform
- Automation technologies, such as Kubernetes or Jenkins
- Configuration management tools, such as Puppet or Chef
- Scripting languages, such as Python or Ruby
- Essential Skills:
- Docker
- Jenkins
- Python dependency management using conda and pip
- Base Linux System Commands, Scripting
- Docker Container Build & Testing
- Common knowledge of minimizing container size and layers
- Inspecting containers for unused / underutilized systems
- Multiple Linux OS support for virtual systems
- Experience as a user of Jupyter / JupyterLab to test and fix usability issues in workbenches
- Templating out various configurations for different use cases (we use Python Jinja2 but are open to other languages / libraries; see the sketch after this list)
- Jenkins Pipeline
- Github API Understanding to trigger builds, tags, releases
- Artifactory Experience
- Nice to have: Kubernetes, ArgoCD, other deployment automation tool sets (DevOps)
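Since the list above mentions templating configurations with Python Jinja2, here is a minimal sketch of that pattern. The template text, section names, and variables are invented for illustration and are not this team's actual configuration format.

```python
from jinja2 import Template

# Hypothetical workbench config rendered per use case.
CONFIG_TEMPLATE = Template("""\
[workbench]
name = {{ name }}
base_image = {{ base_image }}
packages = {{ packages | join(", ") }}
""")

def render_config(name: str, base_image: str, packages: list[str]) -> str:
    """Render one configuration variant from shared defaults."""
    return CONFIG_TEMPLATE.render(name=name, base_image=base_image, packages=packages)

print(render_config("data-science", "python:3.11-slim", ["numpy", "pandas", "jupyterlab"]))
```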
Profile: DevOps Engineer
Experience: 5-8 Yrs
Notice Period: Immediate to 30 Days
Job Description:
Technical Experience (Must Have):
Cloud: Azure
DevOps Tools: Terraform, Ansible, GitHub, CI/CD pipelines, Docker, Kubernetes
Network: Cloud Networking
Scripting Language: Any/All - Shell Script, PowerShell, Python
OS: Linux (Ubuntu, RHEL, etc.)
Database: MongoDB
Professional Attributes: Excellent communication, written, presentation, and problem-solving skills.
Experience: Minimum of 5-8 years of experience in Cloud Automation and Application
Additional Information (Good to have):
Microsoft Azure Fundamentals AZ-900
Terraform Associate
Docker
Certified Kubernetes Administrator
Role:
Building and maintaining tools to automate application and infrastructure deployment, and to monitor operations.
Design and implement cloud solutions which are secure, scalable, resilient, monitored, auditable and cost optimized.
Implementing transformation from the as-is state to the future state.
Coordinating with other members of the DevOps team, Development, Test, and other teams to enhance and optimize existing processes.
Provide systems support and implement monitoring, logging and alerting solutions that enable the production systems to be monitored (see the sketch after this list).
Writing Infrastructure as Code (IaC) using industry standard tools and services.
Writing application deployment automation using industry standard deployment and configuration tools.
Design and implement continuous delivery pipelines that serve the purpose of provisioning and operating client test as well as production environments.
Implement and stay abreast of Cloud and DevOps industry best practices and tooling.
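Because the stack above names MongoDB and the role calls for monitoring and alerting on production systems, here is a minimal health-check sketch using pymongo. The connection string, environment variable, and timeout are placeholders, not this deployment's real settings.

```python
import os
from pymongo import MongoClient
from pymongo.errors import PyMongoError

MONGO_URI = os.environ.get("MONGO_URI", "mongodb://localhost:27017")  # placeholder

def mongo_healthy(timeout_ms: int = 2000) -> bool:
    """Return True if the MongoDB deployment answers a ping within the timeout."""
    try:
        client = MongoClient(MONGO_URI, serverSelectionTimeoutMS=timeout_ms)
        client.admin.command("ping")
        return True
    except PyMongoError:
        return False

if __name__ == "__main__":
    print("mongo healthy:", mongo_healthy())
```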
Numerator is a data and technology company reinventing market research. Headquartered in Chicago, IL, Numerator has 1,600 employees worldwide. The company blends proprietary data with advanced technology to create unique insights for the market research industry that has been slow to change. The majority of Fortune 100 companies are Numerator clients.
Job Description
What We Do and How?
We are a market research company, revolutionizing how it's done! We mix fast paced development and unique approaches to bring best practices and strategy to our technology. Our tech stack is deep, leveraging several languages and frameworks including Python, C#, Java, Kotlin, React, Angular, and Django among others. Our engineering hurdles sit at the intersection of technologies ranging from mobile, computer vision and crowdsourcing, to machine learning and big data analytics.
Our Team
From San Francisco to Chicago to Ottawa, our R&D team is made up of talented individuals spanning a robust tech stack: product, data analytics, and engineers across Front End, Back End, DevOps, Business Intelligence, ETL, Data Science, Mobile Apps, and much more. Across these different groups we work towards one common goal: to build products into efficient and seamless user experiences that help our clients succeed.
Numerator is looking for an Infrastructure Engineer to join our growing team. This is a unique opportunity where you will get a chance to work with an established and rapidly evolving platform that handles millions of requests and massive amounts of data. In this position, you will be responsible for taking on new initiatives to automate, enhance, maintain, and scale services in a rapidly scaling SaaS environment.
As a member of our team, you will make an immediate impact as you help build out and expand our technology platforms across several software products. This is a fast-paced role with high growth, visibility, impact, and where many of the decisions for new projects will be driven by you and your team from inception through production.
Some of the technologies we frequently use include: Terraform, Ansible, SumoLogic, Kubernetes, and many AWS-native services.
• Develop and test the cloud infrastructure to scale a rapidly growing ecosystem.
• Monitor and improve DevOps tools and processes, automate mundane tasks, and improve system reliability.
• Provide deep expertise to help steer scalability and stability improvements early in the life-cycle of development while working with the rest of the team to automate existing processes that deploy, test, and lead our production environments.
• Train teams to improve self-healing and self-service cloud-based ecosystems in an evolving AWS infrastructure.
• Build internal tools to demonstrate performance and operational efficiency.
• Develop comprehensive monitoring solutions to provide full visibility into the different platform components using tools and services like Kubernetes, SumoLogic, Prometheus, and Grafana (a minimal exporter sketch follows this list).
• Identify and troubleshoot any availability and performance issues at multiple layers of deployment, from hardware, operating environment, network, and application.
• Work cross-functionally with various teams to improve Numerator’s infrastructure through automation.
• Work with other teams to assist with issue resolutions related to application configuration, deployment, or debugging.
• Lead by example and evangelize DevOps best practice within other engineering teams at Numerator.
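To illustrate the kind of monitoring described above, the sketch below exposes a couple of custom metrics with the prometheus_client library so Prometheus can scrape them and Grafana can chart them. The metric names, port, and fake workload are assumptions for the example, not Numerator's actual instrumentation.

```python
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

# Hypothetical metric names for the example.
REQUESTS_TOTAL = Counter("app_requests_total", "Total requests handled")
QUEUE_DEPTH = Gauge("app_queue_depth", "Items currently waiting in the work queue")

def handle_request() -> None:
    """Stand-in for real work; updates the example metrics."""
    REQUESTS_TOTAL.inc()
    QUEUE_DEPTH.set(random.randint(0, 50))  # fake queue depth for the demo

if __name__ == "__main__":
    start_http_server(8000)   # metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
        time.sleep(1)
```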
Skills & Requirements
What you bring
• A minimum of 3 years of work experience in backend software, DevOps, or a related field.
• A passion for software engineering, automation and operations and are excited about reliability, availability and performance.
• Availability to participate in after-hours on-call support with your fellow engineers.
• Strong analytical and problem-solving mindset combined with experience troubleshooting large scale systems.
• Fundamental knowledge of networking, operating systems, and package/build systems (IP subnets and routing, ACLs, core Ubuntu, pip and npm).
• Experience with automation technologies to build, deploy and integrate both infrastructure and applications (e.g., Terraform, Ansible).
• Experience using scripting languages like Python and *nix tools (Bash, sed/awk, Make).
• You enjoy developing and managing real-time distributed platforms and services that scale to billions of requests.
• Have the ability to manage multiple systems across stratified environments.
• A deep enthusiasm for the Cloud and DevOps and keen to get other people involved.
• Experience with scaling and operationalizing distributed data stores, file systems and services.
• Running services in AWS or other cloud platforms, strong experience with Linux systems.
• Experience in modern software paradigms including cloud applications and serverless architectures.
• You look ahead to identify opportunities and foster a culture of innovation.
• BS, MS or Ph.D. in Computer Science or a related field, or equivalent work experience.
Nice to haves
• Previous experience working with a geographically distributed software engineering team.
• Experience working with Jenkins or CircleCI.
• Experience with storage optimizations and management.
• Solid understanding of building scalable, highly performant systems and services.
• Expertise with big data, analytics, machine learning, and personalization.
• Start-up or CPG industry experience.
If this sounds like something you would like to be part of, we’d love for you to apply! Don't worry if you think that you don't meet all the qualifications here. The tools, technology, and methodologies we use are constantly changing and we value talent and interest over specific experience.
Disclaimer: We do not charge any fee for employment and the same applies to the Recruitment Partners who we work with. Numerator is an equal opportunity employer. Employment decisions are based on merit. Additionally, we do not ask for any refundable security deposit to be paid in bank accounts for employment purposes. We request candidates to be cautious of misleading communications and not pay any fee/ deposit to individuals/ agencies/ employment portals on the pretext of attending Numerator interview process or seeking employment with us. These would be fraudulent in nature. Anyone dealing with such individuals/agencies/
We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.

Hiring for one of the product-based organizations, PAN India
Skills required:
Strong knowledge and experience of cloud infrastructure (AWS, Azure or GCP), systems, network design, and cloud migration projects.
Strong knowledge and understanding of CI/CD processes tools (Jenkins/Azure DevOps) is a must.
Strong knowledge and understanding of Docker & Kubernetes is a must.
Strong knowledge of Python, along with one more language (Shell, Groovy, or Java).
Strong prior experience using automation tools like Ansible, Terraform.
Architect systems, infrastructure & platforms using Cloud Services.
Strong communication skills. Should have demonstrated the ability to collaborate across teams and organizations.
Benefits of working with OpsTree Solutions:
Opportunity to work on the latest cutting edge tools/technologies in DevOps
Knowledge focused work culture
Collaboration with very enthusiastic DevOps experts
High growth trajectory
Opportunity to work with big shots in the IT industry
Below is the Job Description for the position of DevOps Azure Engineer at Xceedance.
Qualifications: BE / B.Tech / MCA in Computer Science
Key Requirements for the Position:
• Develop Azure application design and connectivity patterns, Azure networking topologies, and Azure storage facilities.
• Run code conformance tools as part of releases.
• Design Azure App Service web apps using Azure CLI, PowerShell, and other tools (see the sketch after this list).
• Implement containerized solutions using Docker and Azure Kubernetes Service.
• Automate the build and deployment process from development to production using Azure DevOps approaches and tools.
• Design and implement CI/CD pipelines
• Script and update build and deployments.
• Coordinate environment usage and alignment.
• Develop, maintain, and optimize automated deployments code for development, test, staging and production environments.
• Configure the application and container platform with proactive monitoring tools and trigger alerts through communication channels
• Develop infrastructure and platform code
• Effectively contribute to building the overall knowledge and expertise of the technical team
• Provide Level 2/3 technical support
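For the Azure CLI scripting mentioned above, a small, illustrative wrapper might look like the sketch below. It shells out to the az CLI and assumes you are already logged in; the resource group name is a placeholder, not a real environment.

```python
import json
import subprocess

def az(*args: str):
    """Run an az CLI command and return its JSON output."""
    result = subprocess.run(
        ["az", *args, "--output", "json"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(result.stdout)

def list_webapps(resource_group: str) -> list[str]:
    """Return the names of App Service web apps in a resource group."""
    apps = az("webapp", "list", "--resource-group", resource_group)
    return [app["name"] for app in apps]

if __name__ == "__main__":
    print(list_webapps("my-resource-group"))  # placeholder resource group
```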
Location: Noida or Gurgaon
Minimum 4 years of experience
Skillsets:
- Build automation/CI: Jenkins
- Secure repositories: Artifactory, Nexus
- Build technologies: Maven, Gradle
- Development Languages: Python, Java, C#, Node, Angular, React/Redux
- SCM systems: Git, Github, Bitbucket
- Code Quality: Fisheye, Crucible, SonarQube
- Configuration Management: Packer, Ansible, Puppet, Chef
- Deployment: uDeploy, XLDeploy
- Containerization: Kubernetes, Docker, PCF, OpenShift
- Automation frameworks: Selenium, TestNG, Robot
- Work Management: JAMA, Jira
- Strong problem-solving skills; good verbal and written communication skills
- Good knowledge of Linux environments: Red Hat, etc.
- Good at shell scripting
- Good to have: Cloud technologies such as AWS, GCP and Azure

