11+ STP Jobs in Pune | STP Job openings in Pune
Apply to 11+ STP Jobs in Pune on CutShort.io. Explore the latest STP Job opportunities across top companies like Google, Amazon & Adobe.

Primary Skills – SDA, DNAC, Cisco ISE, routing and switching, troubleshooting NAC, 802.1x, and supplicant configuration
Switching Skills – IOS upgrade, SNMP, VLAN, STP, VSL
Routing Skills – OSPF, BGP, IS-IS
Network Monitoring Tools – Cisco Prime (integration, monitoring, and troubleshooting)
- Must have hands-on experience implementing NAC and 802.1x (wired and wireless)
- Strong knowledge of campus LAN architecture and implementation
- Hands-on experience in routing, switching, Cisco SDA, DNAC, Cisco ISE, and Assurance
- Working experience with Cisco Prime Infrastructure: monitoring, integration, and heat-map generation
- Detailed technical understanding and troubleshooting of STP and its variants (PVST, RSTP, MSTP)
- Perform IOS upgrades on switches and WLCs
- Troubleshooting skills in Quality of Service, multicast, HSRP, Dot1x protocols, and IP SLA
- Troubleshooting skills in Cisco VSS, VSL, and stacking technologies
- Detailed technical understanding, troubleshooting, and support of routing protocols (OSPF, BGP) and MPLS in an enterprise environment
- Perform root-cause analysis and troubleshoot network outages
- Proficiency in wireless technology, implementation, and troubleshooting
- Deliver reports on the actions performed
- Hands-on experience with networking products such as Cisco 6800, 9500, 9400, 9300, 3800, 2960x, and 3650 switches, and Juniper EX8200, EX4200, EX3300, and EX2200 switches
- Strong ability to identify, diagnose, and resolve complex network issues
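The STP troubleshooting items above all start from root-bridge election. As a rough illustrative sketch (not any vendor's tooling; switch names and values are invented), 802.1D elects the bridge with the lowest bridge ID, i.e. the lowest (priority, MAC) pair:

```python
# Sketch of 802.1D root-bridge election: the bridge with the lowest
# bridge ID -- the (priority, MAC address) pair -- becomes the root.
# Switch names, priorities, and MACs below are made up for illustration.

def elect_root(bridges):
    """bridges: list of (priority, mac, name) tuples; returns the root's name."""
    # Lower priority wins; the MAC address breaks ties.
    return min(bridges, key=lambda b: (b[0], b[1]))[2]

campus = [
    (32768, "00:1a:2b:3c:4d:5e", "access-1"),
    (4096,  "00:1a:2b:3c:4d:ff", "core-1"),   # lowered priority -> intended root
    (32768, "00:1a:2b:3c:4d:01", "access-2"),
]
print(elect_root(campus))  # core-1
```

In practice this is why a misconfigured access switch with a low priority can hijack the root role, the classic cause of suboptimal spanning-tree topologies.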
Secondary Skills –
- CCNP Certification
- Project Management: Managing individual network projects within the scope of the Network Team
- Excellent technical and business communication skills, both oral and written
Key Skills Required:
· You will be part of the DevOps engineering team, configuring project environments and troubleshooting integration issues across different systems. You will also help build new features for the next generation of cloud recovery services and managed services.
· You will directly guide the technical strategy for our clients and build out a new DevOps capability within the company to improve our business relevance for customers.
· You will coordinate with the Cloud and Data teams on their requirements, verify the configurations required for each production server, and propose scalable solutions.
· You will be responsible for reviewing the infrastructure and configuration of microservices, as well as the packaging and deployment of applications.
To be the right fit, you'll need:
· Expertise in cloud services such as AWS.
· Experience with Terraform scripting.
· Experience with container technology such as Docker and orchestration such as Kubernetes.
· Good knowledge of CI/CD tools such as Jenkins and Bamboo.
· Experience with version control systems such as Git, build tools (Maven, Ant, Gradle), and cloud automation tools (Chef, Puppet, Ansible).
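The Terraform skill above boils down to a declarative loop: diff desired state against current state and apply only the needed changes. A plain-Python sketch of that idea (resource names are hypothetical; real Terraform uses HCL and provider plugins):

```python
# Minimal sketch of the diff-and-apply loop behind IaC tools like Terraform:
# compare desired state with current state and emit only the actions needed
# to converge. Resource names and attributes are invented for the example.

def plan(current, desired):
    """Return sorted (action, resource) steps to reach the desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name))
        elif current[name] != spec:
            actions.append(("update", name))
    for name in current:
        if name not in desired:
            actions.append(("destroy", name))
    return sorted(actions)

current = {"vm-web": {"size": "small"}, "vm-old": {"size": "small"}}
desired = {"vm-web": {"size": "large"}, "vm-db": {"size": "medium"}}
print(plan(current, desired))
# [('create', 'vm-db'), ('destroy', 'vm-old'), ('update', 'vm-web')]
```

Running `plan` against an already-converged state returns an empty list, which is the idempotence property that makes `terraform apply` safe to re-run.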
About the Company:
Gruve is an innovative software services startup dedicated to empowering enterprise customers in managing their data life cycle. We specialize in cybersecurity, customer experience, infrastructure, and advanced technologies such as machine learning and artificial intelligence. Our mission is to help our customers use their data to make more intelligent business decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance.
Key Roles & Responsibilities:
- Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
- Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
- Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation.
- Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
- Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
- Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
- Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags.
- Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability.
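The canary-release strategy in the responsibilities above can be illustrated with a deterministic hash-based traffic split (the percentage and user IDs are invented for the sketch; real rollouts typically live in a service mesh or feature-flag system):

```python
import hashlib

# Sketch of a canary release: route a fixed percentage of users to the new
# version, deterministically per user, so each user sees a consistent version
# across requests. Percentages and user IDs are invented for illustration.

def route(user_id: str, canary_percent: int) -> str:
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"

routed = [route(f"user-{i}", 20) for i in range(1000)]
share = routed.count("canary") / len(routed)
print(f"canary share: {share:.0%}")  # roughly 20%
```

Because the split keys on a hash of the user ID rather than randomness per request, raising `canary_percent` only ever moves users from stable to canary, never back and forth.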
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering, or a related field
- 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
- Strong expertise in CI/CD pipelines, version control (Git), and release automation.
- Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
- Proficiency in Terraform and Ansible for infrastructure automation.
- Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
- Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Strong scripting and automation skills in Python, Bash, or Go.
Preferred Qualifications:
- Experience in FinOps (cloud cost optimization) and Kubernetes cluster scaling.
- Exposure to serverless architectures and event-driven workflows.
- Contributions to open-source DevOps projects.
Company – Apptware Solutions
Location – Baner, Pune
Team Size – 130+
Job Description –
Cloud Engineer with 8+ years of experience
Roles and Responsibilities
● Have 8+ years of strong experience in deployment, management and maintenance of large systems on-premise or cloud
● Experience maintaining and deploying highly-available, fault-tolerant systems at scale
● A drive towards automating repetitive tasks (e.g. scripting via Bash, Python, Ruby, etc)
● Practical experience with Docker containerization and clustering (Kubernetes/ECS)
● Expertise with AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda, VPN)
● Version control system experience (e.g. Git)
● Experience implementing CI/CD (e.g. Jenkins, TravisCI, CodePipeline)
● Operational experience (e.g. HA/backups) with NoSQL databases (e.g. MongoDB, Redis) and SQL databases (e.g. MySQL)
● Experience with configuration management tools (e.g. Ansible, Chef)
● Experience with infrastructure-as-code (e.g. Terraform, CloudFormation)
● Bachelor's or master’s degree in CS, or equivalent practical experience
● Effective communication skills
● Hands-on experience with cloud providers such as MS Azure and GC
● A sense of ownership and ability to operate independently
● Experience with Jira and one or more Agile SDLC methodologies
● Nice to Have:
○ Sensu and Graphite
○ Ruby or Java
○ Python or Groovy
○ Java Performance Analysis
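The "drive towards automating repetitive tasks" bullet above usually implies retry logic for flaky operations. A minimal exponential-backoff sketch (delays are shortened and the failing operation is simulated for illustration):

```python
import time

# Sketch of retrying a flaky operation with exponential backoff, a common
# pattern in deployment automation. Delays are tiny here for demonstration;
# real scripts would use longer base delays and often add jitter.

def retry(fn, attempts=4, base_delay=0.01):
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** attempt))  # 0.01, 0.02, 0.04, ...

calls = {"n": 0}
def flaky():
    """Simulated operation that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry(flaky))  # ok (after two transient failures)
```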
Role: Cloud Engineer
Industry Type: IT-Software, Software Services
Functional Area: IT Software - Application Programming, Maintenance
Employment Type: Full Time, Permanent
Role Category: Programming & Design
You need to drive automation to implement scalable and robust applications. You will bring dedication and passion to server-side optimization, ensuring low latency and high performance for cloud deployments within the datacentre. You should have sound knowledge of the OpenStack and Kubernetes domains.
YOUR ‘OKR’ SUMMARY
OKR stands for Objectives and Key Results.
As a DevOps Engineer, you will understand the overall movement of data through the entire platform, find bottlenecks, define solutions, develop key pieces, write APIs, and own their deployment. You will work with internal and external development teams to discover these opportunities and solve hard problems. You will also guide engineers in solving complex problems, developing acceptance tests for them, and reviewing the work and the test results.
What you will do
- As a DevOps Engineer, be responsible for systems used by customers across the globe.
- Set goals for the overall system and divide them into goals for each sub-system.
- Guide, motivate, convince, and mentor the architects of sub-systems, and help them achieve improvements with agility and speed.
- Identify performance bottlenecks and devise solutions to optimize the time and cost of the build/test system.
- Be a thought leader contributing to capacity planning for software and hardware, spanning internal and public cloud, and balancing the trade-off between turnaround time and utilization.
- Bring in technologies enabling massively parallel systems to improve turnaround time by an order of magnitude.
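The order-of-magnitude turnaround goal above is bounded by Amdahl's law: parallel speedup is capped by the serial fraction of the pipeline. A quick sketch (the 10% serial fraction is an assumed example figure, not a measurement):

```python
# Amdahl's law: with serial fraction s and N parallel workers,
# speedup = 1 / (s + (1 - s) / N).
# Shows why a 10x turnaround improvement also requires shrinking the
# serial portion of the build/test pipeline, not just adding workers.

def speedup(serial_fraction: float, workers: int) -> float:
    return 1 / (serial_fraction + (1 - serial_fraction) / workers)

for n in (1, 10, 100, 1000):
    print(n, round(speedup(0.10, n), 1))
# Even with 1000 workers, a 10% serial fraction caps speedup near 10x.
```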
What you will need
A strong sense of ownership, urgency, and drive. As an integral part of the development team, you will need the following skills to succeed.
- BS or BE/B.Tech in EE/CS, or equivalent experience, with 10+ years of experience.
- Strong background in architecting and shipping distributed, scalable software products, with a good understanding of systems programming.
- An excellent background in cloud technologies such as OpenStack, Docker, Kubernetes, Ansible, and Ceph is a must.
- Excellent understanding of hybrid and multi-cloud architecture and edge computing concepts.
- Ability to identify bottlenecks and devise solutions to optimize them.
- Programming and software development skills in Python and shell script, along with a good understanding of distributed systems and REST APIs.
- Experience working with SQL/NoSQL database systems such as MySQL, MongoDB, or Elasticsearch.
- Excellent knowledge of and working experience with Docker containers and virtual machines.
- Ability to work effectively across organizational boundaries to maximize alignment and productivity between teams.
- Ability and flexibility to work and communicate effectively in a multi-national, multi-time-zone corporation.
Additional Advantage:
- Deep understanding of technology and passion for what you do.
- Background in designing high-performance, scalable software systems with a strong focus on optimizing hardware cost.
- Solid collaborative and interpersonal skills, specifically a proven ability to effectively guide and influence within a dynamic environment.
- Strong commitment to getting the most performance out of the system being worked on.
- Prior development of a large software project using a service-oriented architecture operating under real-time constraints.
What's In It for You?
- You will get a chance to work on cloud-native and hyper-scale products
- You will be working with industry leaders in cloud.
- You can expect a steep learning curve.
- You will get experience solving real-time problems, and eventually you will become a problem solver.
Benefits & Perks:
- Competitive Salary
- Health Insurance
- Open Learning - 100% Reimbursement for online technical courses.
- Fast Growth - opportunities to grow quickly and surely
- Creative Freedom + Flat hierarchy
- Sponsorship for all employees who represent the company at events and meetups.
- Flexible working hours
- 5-day work week
- Hybrid Working model (Office and WFH)
Our Hiring Process:
Candidates for this position can expect the hiring process as follows (subject to successfully clearing every round):
- Initial Resume screening call with our Recruiting team
- Next, candidates will be invited to solve coding exercises.
- Next, candidates will be invited to a first technical interview.
- Next, candidates will be invited to a final technical interview.
- Finally, candidates will be invited to a Culture Plus interview with HR.
- Candidates may be asked to interview with the Leadership team
- Successful candidates will subsequently be made an offer via email
As always, the interviews and screening call will be conducted via a mix of telephonic and video call.
So, if you are looking at an opportunity to really make a difference- make it with us…
Coredge.io provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by applicable central, state or local laws.
- Public clouds, such as AWS, Azure, or Google Cloud Platform
- Automation technologies, such as Kubernetes or Jenkins
- Configuration management tools, such as Puppet or Chef
- Scripting languages, such as Python or Ruby
- Design, develop, test, debug and maintain components of the cloud infrastructure
- Manage operational priorities of the DBaaS
- Establish a process for handling and leading the response to new security vulnerabilities
- Lead certification efforts from the security perspective
- Participate in penetration testing efforts
- Design and build DBaaS processes for key management, rotation, storage, encryption, and password management
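The key-management and rotation responsibility above can be sketched as a versioned keyring: new data is encrypted under the newest key version, while old versions are retained so existing ciphertext stays decryptable. This is a toy in-memory model for illustration, not production crypto or a real KMS:

```python
import secrets

# Toy sketch of key rotation for a DBaaS keyring: each rotation adds a new
# key version; old versions are kept to decrypt previously written data.
# An in-memory illustration only -- not a production key management system.

class Keyring:
    def __init__(self):
        self.versions = {}
        self.current = 0
        self.rotate()  # start with version 1

    def rotate(self):
        """Add a new key version and make it current for new writes."""
        self.current += 1
        self.versions[self.current] = secrets.token_bytes(32)

    def encrypt_key(self):
        """(version, key) to use when encrypting new data."""
        return self.current, self.versions[self.current]

    def decrypt_key(self, version):
        """Old versions remain available for existing ciphertext."""
        return self.versions[version]

ring = Keyring()
v1, _ = ring.encrypt_key()
ring.rotate()
v2, _ = ring.encrypt_key()
print(v1, v2)  # 1 2
```

A background re-encryption job would typically migrate old ciphertext to the newest version so retired keys can eventually be destroyed.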
Requirements:
- Strong software design and implementation skills in building infrastructure frameworks
- Experience building and operating extensible, scalable, resilient data systems
- Working knowledge of Java and Python
- Experience using public cloud infrastructure (AWS, GCP, or Azure)
- Containerization tooling (Docker, EKS, Kubernetes)
- Infrastructure as Code Tooling (Example: Terraform, Cloudformation, Etc.)
- Configuration Management Tooling (Ansible, Chef, etc.)
- Automation Scripting (Python preferred)
- Solid understanding of basic systems operations (disk, network, etc)
- Willingness and ability to learn new languages and concepts
- 5+ years of relevant experience
We are seeking a passionate DevOps Engineer to help create the next big thing in data analysis and search solutions.
You will join our Cloud infrastructure team supporting our developers. As a DevOps Engineer, you’ll be automating our environment setup and developing infrastructure as code to create a scalable, observable, fault-tolerant, and secure environment. You’ll incorporate open-source tools, automation, and Cloud Native solutions, and will empower our developers with this knowledge.
We will pair you up with world-class talent in cloud and software engineering and provide a position and environment for continuous learning.

We are looking for people with programming skills in Python, SQL, and cloud computing. Candidates should have experience in at least one of the major cloud computing platforms (AWS, Azure, or GCP), professional experience handling applications and databases in the cloud using VMs and Docker images, and the ability to design and develop applications for the cloud.
You will be responsible for
- Leading the DevOps strategy and development of SAAS Product Deployments
- Leading and mentoring other computer programmers.
- Evaluating student work and providing guidance in the online courses in programming and cloud computing.
Desired experience/skills
Qualifications: Graduate degree in Computer Science or related field, or equivalent experience.
Skills:
- Strong programming skills in Python and SQL
- Cloud computing
Experience:
2+ years of programming experience including Python, SQL, and Cloud Computing. Familiarity with command line working environment.
Note: A strong programming background in any language and cloud computing platform is required. We are flexible about the degree of familiarity needed with the specific environments (Python, SQL). If you have extensive experience in one of the cloud computing platforms and less in others, you should still consider applying.
Soft Skills:
- Good interpersonal, written, and verbal communication skills; including the ability to explain the concepts to others.
- A strong understanding of algorithms and data structures, and their performance characteristics.
- Awareness of and sensitivity to the educational goals of a multicultural population would also be desirable.
- Detail oriented and well organized.
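Since the role above combines Python and SQL, here is a small self-contained illustration using the standard-library sqlite3 module (the table and values are invented for the example):

```python
import sqlite3

# Minimal Python + SQL example with the stdlib sqlite3 module: create a
# table, insert rows, and aggregate with GROUP BY. All data is invented.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE runs (course TEXT, score REAL)")
conn.executemany(
    "INSERT INTO runs VALUES (?, ?)",
    [("python", 80.0), ("python", 90.0), ("sql", 70.0)],
)
rows = conn.execute(
    "SELECT course, AVG(score) FROM runs GROUP BY course ORDER BY course"
).fetchall()
print(rows)  # [('python', 85.0), ('sql', 70.0)]
```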
Location – Pune
Experience - 1.5 to 3 YR
Payroll: Direct with Client
Salary Range: 3 to 5 Lacs (depending on current salary)
Role and Responsibility
• Good understanding of and experience with AWS CloudWatch for EC2, other Amazon Web Services resources, and other sources
• Collect and store logs
• Monitor logs
• Analyze logs
• Configure alarms
• Configure dashboards
• Preparation of and adherence to SOPs and documentation
• Good understanding of AWS in a DevOps context
• Experience with AWS services (EC2, ECS, CloudWatch, VPC, networking)
• Experience with a variety of infrastructure, application, and log monitoring tools such as Prometheus and Grafana
• Familiarity with Docker, Linux, and Linux security
• Knowledge and experience with container-based architectures like Docker
• Experience troubleshooting AWS services
• Experience configuring AWS services such as EC2, S3, and ECS
• Experience with Linux system administration and engineering skills on Cloud infrastructure
• Knowledge of Load Balancers, Firewalls, and network switching components
• Knowledge of Internet-based technologies - TCP/IP, DNS, HTTP, SMTP & Networking concepts
• Knowledge of security best practices
• Comfortable 24x7 supporting Production environments
• Strong communication skills
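The collect → analyze → alarm workflow listed for this role can be sketched locally: parse log lines, count errors, and trip an alarm past a threshold. This is a stand-in for what CloudWatch metric filters and alarms automate; the log lines and threshold are invented:

```python
# Sketch of the log-analysis-to-alarm loop that CloudWatch metric filters
# and alarms automate: count ERROR lines and trip an alarm at a threshold.
# Log lines and the threshold value are invented for the example.

LOGS = [
    "2024-05-01T10:00:01 INFO  request served",
    "2024-05-01T10:00:02 ERROR db timeout",
    "2024-05-01T10:00:03 ERROR db timeout",
    "2024-05-01T10:00:04 INFO  request served",
    "2024-05-01T10:00:05 ERROR cache miss storm",
]

def error_count(lines):
    """Count lines whose level field is ERROR."""
    return sum(1 for line in lines if " ERROR " in line)

def alarm_state(lines, threshold=3):
    """Return ALARM when the error count reaches the threshold, else OK."""
    return "ALARM" if error_count(lines) >= threshold else "OK"

print(error_count(LOGS), alarm_state(LOGS))  # 3 ALARM
```

In CloudWatch the equivalent would be a metric filter extracting the ERROR count per period, with an alarm watching that metric; the dashboard item in the list above would then chart the same metric.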