Virtana is looking for a Senior DevOps Engineer to join our R&D Infrastructure team. In this role, you won't just follow conventions — you'll help redefine them. You will own the architecture, build, and day-to-day operations of the GCP-based cloud platform that powers Virtana's SaaS products and the AI-driven observability experience our Global 2000 customers depend on. This is a hands-on senior individual contributor role with meaningful technical leadership scope, working alongside engineers and architects on a unified observability platform.
Work Location: Pune
Job Type: Hybrid
Role Responsibilities:
GCP Cloud Operations: Develop, deploy, operate, and support production cloud infrastructure primarily on GCP — leveraging GKE, BigTable, BigQuery, Dataflow, Cloud Storage, IAM, and core networking services.
Reliability & SLAs: Ensure production systems are running at all times with multiple levels of redundancy to meet committed SLAs; lead incident response, root cause analysis, and post-incident reviews.
Build & Release Automation: Design, implement, and continuously improve scalable CI/CD pipelines and test frameworks leveraged by QA and development teams across the company.
Infrastructure as Code: Manage large-scale, repeatable deployments using Terraform, Ansible, Puppet, or SaltStack; champion Git-based workflows and version control standards for distributed engineering teams.
Security & Availability: Own the ongoing maintenance, security patching, and availability of services in line with rigorous operational, security, and procedural models.
Monitoring & Alerting: Plan and deliver high-value monitoring and alerting features to support operations, support, and customer-facing reliability — eating our own dog food with the Virtana Platform wherever possible.
Capacity & Cost: Forecast capacity, plan upgrades, patches, and migrations, and drive cloud cost efficiency across hybrid and multi-cloud environments.
Cross-Functional Partnership: Work with development, operations, and support personnel to identify, isolate, and diagnose issues; handle support escalations and drive permanent fixes.
Required Qualifications:
Bachelor's degree in Computer Science / Engineering or equivalent relevant experience.
5–7 years of professional hands-on DevOps / SRE experience supporting production cloud environments.
Strong, demonstrable production experience on GCP — including GKE, BigTable, BigQuery, Dataflow, IAM, and core GCP networking services.
Deep, hands-on expertise with container orchestration (Kubernetes) and Docker in production.
Advanced proficiency with at least one infrastructure-as-code / configuration management tool: Terraform, Ansible, Puppet, or SaltStack.
Solid understanding of networking, firewalls, load balancers, DNS, and database operations.
Strong working knowledge of Git-based workflows and version control standards for distributed engineering teams.
Comfort operating hybrid environments that include both Linux and Windows ecosystems.
Excellent verbal and written communication skills, with the ability to explain highly technical topics to both technical and non-technical audiences.
Self-motivated, detail-oriented, and able to work both independently and within a globally distributed team.
Good to Have:
Strong scripting skills and a demonstrated ability to automate operational toil — Python preferred; Bash, Go, or Groovy a plus.
Hands-on experience designing and operating CI/CD pipelines with Jenkins (Spinnaker, GitHub Actions, or GitLab CI also welcome).
Exposure to AWS or other public clouds in addition to GCP.
Experience operating SaaS platforms built on microservices architectures.
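Postings like this often expect candidates to show, not just claim, the ability to automate operational toil in Python. A minimal illustrative sketch (the threshold and mount list are hypothetical, not from any real environment) might flag filesystems that are running out of space before they page anyone:

```python
#!/usr/bin/env python3
"""Toil-automation sketch: flag filesystems above a usage threshold.

Hypothetical example -- the threshold and the list of paths to check
are illustrative placeholders, not values from a real environment.
"""
import shutil

THRESHOLD = 0.8  # warn when a filesystem is more than 80% full


def usage_fraction(path: str) -> float:
    """Return used/total space for the filesystem containing `path`."""
    total, used, _free = shutil.disk_usage(path)
    return used / total


def mounts_over_threshold(paths, threshold=THRESHOLD):
    """Return the subset of `paths` whose filesystems exceed `threshold`."""
    return [p for p in paths if usage_fraction(p) > threshold]


if __name__ == "__main__":
    for mount in mounts_over_threshold(["/"]):
        print(f"WARNING: {mount} is over {THRESHOLD:.0%} full")
```

In practice a script like this would feed an alerting channel (PagerDuty, Slack) rather than print, but the shape — measure, compare against a threshold, report — is the core of most toil-reduction scripts.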
Preferred Education & Experience:
• Bachelor's or master's degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. At least 3 years of relevant experience may substitute if you come from a different stream of education.
• Well-versed in DevOps principles and practices, with hands-on DevOps tool-chain integration experience: release orchestration and automation, source code and build management, code quality and security management, behavior-driven development, test-driven development, continuous integration, continuous delivery, continuous deployment, and operational monitoring and management; extra points if you can demonstrate your knowledge with working examples.
• Hands-on, demonstrable working experience with DevOps tools and platforms such as Slack, Jira, Git, Jenkins, code quality and security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, Tekton, StackStorm, Prometheus, Grafana, ELK, PagerDuty, VictorOps, etc.
• Well-versed in virtualization and containerization; must demonstrate experience with technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
• Well-versed in AWS and/or Azure and/or Google Cloud; must demonstrate experience with at least FIVE (5) services in any of these categories: Compute, Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, or Security, Identity, & Compliance (or equivalent demonstrable cloud platform experience).
• Well-versed, with demonstrable working experience, in API management, API gateways, service mesh, identity and access management, and data protection and encryption tools and platforms.
• Hands-on programming experience in core Java, Python, JavaScript, and/or Scala; freshers straight out of college and lateral movers into IT must be able to code in the languages they have studied.
• Well-versed in storage, network, and storage-networking basics that will enable you to work in a cloud environment.
• Well-versed in network, data, and application security basics that will enable you to work in a cloud as well as a business applications / API services environment.
• Extra points if you are certified in AWS, Azure, and/or Google Cloud.
Experience in a systems administration, SRE, or DevOps-focused role
Experience in handling production support (on-call)
Good understanding of the Linux operating system and networking concepts.
Demonstrated competency with the following AWS services: ECS, EC2, EBS, EKS, S3, RDS, ELB, IAM, Lambda.
Experience with Docker containers and containerization concepts
Experience with managing and scaling Kubernetes clusters in a production environment
Experience building scalable infrastructure in AWS with Terraform.
Strong knowledge of protocols such as HTTP/HTTPS, SMTP, DNS, and LDAP
Experience monitoring production systems
Expertise in applying automation / DevOps principles, experience with operational tools, and the ability to apply best practices for infrastructure and software deployment (e.g., Ansible).
Experience configuring and operating HAProxy, Nginx, SSH, and MySQL
Ability to work seamlessly with software developers, QA, project managers, and business development
Ability to produce and maintain written documentation
We are looking for a proficient, passionate, and skilled DevOps Specialist. You will have the opportunity to build an in-depth understanding of our clients' business problems and then implement organizational strategies to resolve them.
Skills required:
• Strong knowledge and experience of cloud infrastructure (AWS, Azure, or GCP), systems, network design, and cloud migration projects.
• Strong knowledge and understanding of CI/CD processes and tools (Jenkins / Azure DevOps) is a must.
• Strong knowledge and understanding of Docker and Kubernetes is a must.
• Strong knowledge of Python, along with one more language (Shell, Groovy, or Java).
• Strong prior experience with automation tools such as Ansible and Terraform.
• Ability to architect systems, infrastructure, and platforms using cloud services.
• Strong communication skills, with a demonstrated ability to collaborate across teams and organizations.
Benefits of working with OpsTree Solutions:
• Opportunity to work on the latest cutting-edge tools/technologies in DevOps
• Knowledge-focused work culture
• Collaboration with very enthusiastic DevOps experts
• High growth trajectory
• Opportunity to work with big shots in the IT industry
We are looking for people with programming skills in Python, SQL, and cloud computing. Candidates should have experience with at least one of the major cloud-computing platforms (AWS, Azure, or GCP), professional experience handling applications and databases in the cloud using VMs and Docker images, and the ability to design and develop applications for the cloud.
You will be responsible for
Leading the DevOps strategy and development of SaaS product deployments
Leading and mentoring other computer programmers.
Evaluating student work and providing guidance in online courses on programming and cloud computing.
Desired experience/skills
Qualifications: Graduate degree in Computer Science or related field, or equivalent experience.
Skills:
Strong programming skills in Python, SQL, and cloud computing
Experience:
2+ years of programming experience including Python, SQL, and cloud computing. Familiarity with a command-line working environment.
Note: A strong programming background in any language, plus experience with a cloud computing platform, is required. We are flexible about the degree of familiarity needed with the specific environments (Python, SQL). If you have extensive experience in one of the cloud computing platforms and less in the others, you should still consider applying.
Soft Skills:
Good interpersonal, written, and verbal communication skills; including the ability to explain the concepts to others.
A strong understanding of algorithms and data structures, and their performance characteristics.
Awareness of and sensitivity to the educational goals of a multicultural population would also be desirable.
Pune, Noida, Bengaluru (Bangalore), Mumbai, Chennai
3 - 10 yrs
₹5L - ₹20L / yr
DevOps
Kubernetes
Google Cloud Platform (GCP)
Terraform
Jenkins
Role & Responsibilities:
• At least 4 years of hands-on experience with cloud infrastructure on GCP
• Hands-on experience with Kubernetes is mandatory
• Exposure to configuration management and orchestration tools at scale (e.g., Terraform, Ansible, Packer)
• Knowledge of and hands-on experience with DevOps tools (e.g., Jenkins, Groovy, and Gradle)
• Knowledge of and hands-on experience with various platforms (e.g., GitLab, CircleCI, and Spinnaker)
• Familiarity with monitoring and alerting tools (e.g., CloudWatch, ELK stack, Prometheus)
• Proven ability to work independently or as an integral member of a team
Preferable Skills:
• Familiarity with standard IT security practices such as encryption, credentials, and key management.
• Proven experience in various coding languages (e.g., Java, Python) to support DevOps operations and cloud transformation.
• Familiarity with and knowledge of web standards (e.g., REST APIs, web security mechanisms).
• Hands-on experience with GCP.
• Experience in performance tuning, service outage management, and troubleshooting.
Attributes:
• Good verbal and written communication skills
• Exceptional leadership, time management, and organizational skills
• Ability to operate independently and make decisions with little direct supervision
The Technology Lead provides technical leadership with in-depth DevOps experience and is responsible for delivering high-quality projects to Saviant clients through a highly effective DevOps process. This is a highly technical role, focused on analyzing, designing, documenting, and implementing a complete DevOps process for enterprise applications using the most advanced technology stacks, methodologies, and best practices within the agreed timelines. Individuals in this role need strong technical and communication skills and should strive to be on the cutting edge, innovate, and explore in order to deliver quality solutions to Saviant clients.
Your Role & Responsibilities at Saviant:
• Design, analyze, document, and develop the technical architecture for on-premise as well as cloud-based DevOps solutions around customers' business problems.
• Lead the end-to-end process and implementation of configuration management, CI, CD, and monitoring platforms.
• Conduct reviews of the design and implementation of DevOps processes while establishing and maintaining best practices.
• Set up new processes to improve the quality of development, delivery, and deployment.
• Provide technical support and guidance to project team members.
• Keep learning technologies beyond your traditional area of expertise.
• Contribute to pre-sales, proposal creation, POCs, and technology incubation from a technical and architecture perspective.
• Participate in recruitment and people development initiatives.
Job Requirements/Qualifications:
• Educational qualification: BE, BTech, MTech, or MCA from a reputed institute
• 6 to 8 years of hands-on experience with the DevOps process using technologies like .NET Core, C#, MVC, ReactJS, Python, Android, iOS, Linux, and Windows
• Strong hands-on experience across the full DevOps life cycle: orchestration, configuration, security, CI/CD, release management, and environment management
• Solid hands-on knowledge of DevOps technologies and tools such as Jenkins, Spinnaker, Azure DevOps, Chef, Puppet, JIRA, TFS, Git, SVN, and various scripting tools
• Solid hands-on knowledge of containerization technologies and tools such as Docker, Kubernetes, and Cloud Foundry
• In-depth understanding of various development and deployment architectures from a DevOps perspective
• Expertise in ground-up DevOps projects involving multiple agile teams spread across geographies
• Experience with various Agile project management software, techniques, and tools
• Strong analytical and problem-solving skills
• Excellent written and oral communication skills
• Enjoys working as part of agile software teams in a startup environment
Who Should Apply?
• You have independently managed end-to-end DevOps projects over the last 2 years, including understanding requirements, designing solutions, implementing them, and setting up best practices across different business domains.
• You are well versed in Agile development methodologies and have successfully implemented them across at least 2-3 projects.
• You have led a development team of 5 to 8 developers with technology responsibility.
• You have served as the single point of contact for managing technical escalations and decisions.
Position: Cloud Analyst Level 1
Location: Pune
Experience: 1.5 to 3 years
Payroll: Direct with client
Salary Range: 3 to 5 lacs (depending on current salary)
Role and Responsibility:
• Good understanding of and experience with AWS CloudWatch for EC2, other AWS resources, and additional sources
• Collect and store logs
• Monitor logs
• Analyze logs
• Configure alarms
• Configure dashboards
• Prepare and follow SOPs and documentation
• Good understanding of AWS in a DevOps context
• Experience with AWS services (EC2, ECS, CloudWatch, VPC, networking)
• Experience with a variety of infrastructure, application, and log monitoring tools such as Prometheus and Grafana
• Familiarity with Docker, Linux, and Linux security
• Knowledge of and experience with container-based architectures like Docker
• Experience troubleshooting AWS services
• Experience configuring AWS services such as EC2, S3, and ECS
• Experience with Linux system administration and engineering skills on cloud infrastructure
• Knowledge of load balancers, firewalls, and network switching components
• Knowledge of Internet-based technologies (TCP/IP, DNS, HTTP, SMTP) and networking concepts
• Knowledge of security best practices
• Comfortable supporting production environments 24x7
• Strong communication skills