11+ Virtual desktop Jobs in India
Client of People First Consultant
VMware Horizon/View, VMware vSphere, NSX • Azure Cloud VDI/AVD (Azure Virtual Desktop) • VMware virtualization • Hyperconverged infrastructure, vSAN, storage devices (SAN/NAS), etc.
Roles and Responsibilities
- Designing, implementing, testing, and deploying the virtual desktop infrastructure.
- Facilitating the transition of the VDI solution to Operations, providing operational and end user support when required.
- Acting as the single point of contact for all technical engagements on the VDI infrastructure.
- Creation of documented standard processes and procedures for all aspects of VDI infrastructure, administration and management.
- Working with the vendor in assessing the VDI infrastructure architecture from a deployment, performance, security and compliance perspective.
- Knowledge of security best practices and understanding of vulnerability assessments.
- Providing mentoring and guidance to other VDI Administrators.
- Documenting designs, development plans, and operations procedures for the VDI solution.
Responsibilities:
- Design, implement, and maintain cloud infrastructure solutions on Microsoft Azure, with a focus on scalability, security, and cost optimization.
- Collaborate with development teams to streamline the deployment process, ensuring smooth and efficient delivery of software applications.
- Develop and maintain CI/CD pipelines using tools like Azure DevOps, Jenkins, or GitLab CI to automate build, test, and deployment processes.
- Utilize infrastructure-as-code (IaC) principles to create and manage infrastructure deployments using Terraform, ARM templates, or similar tools.
- Manage and monitor containerized applications using Azure Kubernetes Service (AKS) or other container orchestration platforms.
- Implement and maintain monitoring, logging, and alerting solutions for cloud-based infrastructure and applications.
- Troubleshoot and resolve infrastructure and deployment issues, working closely with development and operations teams.
- Ensure high availability, performance, and security of cloud infrastructure and applications.
- Stay up-to-date with the latest industry trends and best practices in cloud infrastructure, DevOps, and automation.
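The infrastructure-as-code responsibility above might look something like this minimal Terraform sketch, assuming the `azurerm` provider; the resource names and region are illustrative, not part of the posting:

```hcl
# Illustrative only: a resource group and virtual network on Azure, managed as code.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "app" {
  name     = "rg-app-prod"       # hypothetical naming convention
  location = "centralindia"
}

resource "azurerm_virtual_network" "app" {
  name                = "vnet-app-prod"
  resource_group_name = azurerm_resource_group.app.name
  location            = azurerm_resource_group.app.location
  address_space       = ["10.0.0.0/16"]
}
```

Keeping definitions like these in version control is what lets a CI/CD pipeline plan and apply infrastructure changes the same way it ships application code.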
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent work experience).
- Minimum of four years of proven experience working as a DevOps Engineer or similar role, with a focus on cloud infrastructure and deployment automation.
- Strong expertise in Microsoft Azure services, including but not limited to Azure Virtual Machines, Azure App Service, Azure Storage, Azure Networking, Azure Security, and Azure Monitor.
- Proficiency in infrastructure-as-code (IaC) tools such as Terraform or ARM templates.
- Hands-on experience with containerization and orchestration platforms, preferably Azure Kubernetes Service (AKS) or Docker Swarm.
- Solid understanding of CI/CD principles and experience with relevant tools such as Azure DevOps, Jenkins, or GitLab CI.
- Experience with scripting languages like PowerShell, Bash, or Python for automation tasks.
- Strong problem-solving and troubleshooting skills with a proactive and analytical mindset.
- Excellent communication and collaboration skills, with the ability to work effectively in a team environment.
- Azure certifications (e.g., Azure Administrator, Azure DevOps Engineer, Azure Solutions Architect) are a plus.
- Candidates should have solid platform experience on Azure with Terraform.
- The DevOps engineer will help developers create pipelines and Kubernetes deployment manifests.
- Experience migrating data from AWS to Azure is a plus.
- Manage and automate infrastructure using Terraform. Jenkins is the key CI/CD tool we use, and it will be used to run these Terraform workflows.
- Provision and manage VMs on Azure Cloud.
- Good hands-on experience with networking on the cloud is required.
- Ability to set up databases on VMs as well as managed database services, and to properly configure cloud-hosted microservices to communicate with those database services.
- Kubernetes, Storage, Key Vault, Networking (load balancing and routing), and VMs are the key areas of infrastructure expertise that are essential.
- The requirement is to administer Kubernetes clusters end to end (application deployment, managing namespaces, load balancing, policy setup, using blue-green/canary deployment models, etc.).
- Experience in AWS is desirable.
- Python experience is optional; however, PowerShell is mandatory.
- Know-how on the use of GitHub.
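The Jenkins-driven Terraform workflow described above could be sketched as a declarative Jenkinsfile like the following; the stage layout, branch name, and credential ID are assumptions for illustration, not this team's actual pipeline:

```groovy
// Illustrative Jenkinsfile: plan Terraform on every run, apply only on main.
pipeline {
    agent any
    environment {
        // 'azure-sp' is a hypothetical Jenkins credentials ID for an Azure service principal
        ARM_CLIENT_SECRET = credentials('azure-sp')
    }
    stages {
        stage('Init') {
            steps { sh 'terraform init -input=false' }
        }
        stage('Plan') {
            steps { sh 'terraform plan -input=false -out=tfplan' }
        }
        stage('Apply') {
            when { branch 'main' }   // gate applies to the main branch only
            steps { sh 'terraform apply -input=false tfplan' }
        }
    }
}
```

Writing the plan to a file and applying that exact plan ensures Jenkins deploys precisely what was reviewed, not a fresh plan computed at apply time.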
We're Hiring: DevOps Tech Lead with 7-9 Years of Experience! 🚀
Are you a seasoned DevOps professional with a passion for cloud technologies and automation? We have an exciting opportunity for a DevOps Tech Lead to join our dynamic team at our Gurgaon office.
🏢 ZoomOps Technology Solutions Private Limited
📍 Location: Gurgaon
💼 Full-time position
🔧 Key Skills & Requirements:
✔ 7-9 years of hands-on experience in DevOps roles
✔ Proficiency in Cloud Platforms like AWS, GCP, and Azure
✔ Strong background in Solution Architecture
✔ Expertise in writing Automation Scripts using Python and Bash
✔ Ability to manage IaC and configuration management tools such as Terraform, Ansible, and Pulumi
Responsibilities:
🔹 Lead and mentor the DevOps team, driving innovation and best practices
🔹 Design and implement robust CI/CD pipelines for seamless software delivery
🔹 Architect and optimize cloud infrastructure for scalability and efficiency
🔹 Automate manual processes to enhance system reliability and performance
🔹 Collaborate with cross-functional teams to drive continuous improvement
Join us to work on exciting projects and make a significant impact in the tech space!
Apply now and take the next step in your DevOps career!
Now, more than ever, the Toast team is committed to our customers. We’re taking steps to help restaurants navigate these unprecedented times with technology, resources, and community. Our focus is on building a restaurant platform that helps restaurants adapt, take control, and get back to what they do best: building the businesses they love. And because our technology is purpose-built for restaurants by restaurant people, restaurants can trust that we’ll deliver on their needs for today while investing in experiences that will power their restaurant of the future.
At Toast, our Site Reliability Engineers (SREs) are responsible for keeping all customer-facing services and other Toast production systems running smoothly. SREs are a blend of pragmatic operators and software craftspeople who apply sound software engineering principles, operational discipline, and mature automation to our environments and our codebase. Our decisions are based on instrumentation and continuous observability, as well as predictions and capacity planning.
About this roll* (Responsibilities)
- Gather and analyze metrics from both operating systems and applications to assist in performance tuning and fault finding
- Partner with development teams to improve services through rigorous testing and release procedures
- Participate in system design consulting, platform management, and capacity planning
- Create sustainable systems and services through automation and uplift
- Balance feature development speed and reliability with well-defined service level objectives
Troubleshooting and Supporting Escalations:
- Diagnose performance bottlenecks and implement optimizations across infrastructure, databases, web, and mobile applications
- Implement strategies to increase system reliability and performance through on-call rotation and process optimization
- Run blameless RCAs on incidents and outages, aggressively looking for answers that will prevent the incident from ever happening again
Do you have the right ingredients? (Requirements)
- Extensive industry experience, with at least 7 years in SRE and/or DevOps roles
- Polyglot technologist/generalist with a thirst for learning
- Deep understanding of cloud and microservice architecture and the JVM
- Experience with tools such as APM, Terraform, Ansible, GitHub, Jenkins, and Docker
- Experience developing software or software projects in at least four languages, ideally including two of Go, Python, and Java
- Experience with cloud computing technologies (AWS cloud provider preferred)
Bread puns are encouraged but not required
Position- Cloud and Infrastructure Automation Consultant
Location- India(Pan India)-Work from Home
The position:
This exciting role on Ashnik's consulting team brings a great opportunity to design and deploy automation solutions for Ashnik's enterprise customers across SEA and India. The role takes the lead in consulting with customers on automation of cloud- and datacentre-based resources. You will work hands-on with your team, focusing on infrastructure solutions and automating infrastructure deployments that are secure and compliant. You will provide implementation oversight of solutions to overcome technology and business challenges.
Responsibilities:
· To lead consultative discussions that identify customer challenges and suggest right-fit open-source tools
· Independently determine the needs of the customer and create solution frameworks
· Design and develop moderately complex software solutions to meet needs
· Use a process-driven approach in designing and developing solutions.
· To create consulting work packages, detailed SOWs and assist sales team to position them to enterprise customers
· To be responsible for implementation of automation recipes (Ansible/Chef) and scripts (Ruby, PowerShell, Python) as part of an automated installation/deployment process
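An automation recipe of the kind this role implements could be sketched as a small Ansible playbook; the host group, package, and module choices are illustrative assumptions:

```yaml
# Illustrative playbook: baseline an nginx web tier on RHEL-family hosts.
- name: Baseline web tier
  hosts: webservers        # hypothetical inventory group
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.dnf:
        name: nginx
        state: present

    - name: Ensure nginx is enabled and running
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Because each task declares a desired state rather than a command to run, the playbook is idempotent: re-running it against an already-configured host changes nothing.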
Experience and skills required :
· 8 to 10 years of experience in IT infrastructure
· Proven technical skill in designing and delivering enterprise-level solutions involving integration of complex technologies
· 6+ years of experience with RHEL/Windows system automation
· 4+ years of experience using Python and/or Bash scripting to solve and automate common system tasks
· Strong understanding and knowledge of networking architecture
· Experience with Sentinel Policy as Code
· Strong understanding of AWS and Azure infrastructure
· Experience deploying and utilizing automation tools such as Terraform, CloudFormation, CI/CD pipelines, Jenkins, Github Actions
· Experience with HashiCorp Configuration Language (HCL) for module and policy development
· Knowledge of cloud tools including CloudFormation, CloudWatch, Control Tower, CloudTrail and IAM is desirable
This role requires a high degree of self-initiative, working with diversified teams, and working with customers spread across the Southeast Asia and India region. It requires you to be proactive in communicating with customers and internal teams about industry trends and technology developments, and in creating thought leadership.
About Us
Ashnik is a leading enterprise open-source solutions company in Southeast Asia and India, enabling organizations to adopt open source for their digital transformation goals. Founded in 2009, it offers a full-fledged Open-Source Marketplace, Solutions, and Services – Consulting, Managed, Technical, Training. Over 200 leading enterprises so far have leveraged Ashnik’s offerings in the space of Database platforms, DevOps & Microservices, Kubernetes, Cloud, and Analytics.
As a team culture, Ashnik is a family for its team members. Each member brings in a different perspective, new ideas and diverse background. Yet we all together strive for one goal – to deliver the best solutions to our customers using open-source software. We passionately believe in the power of collaboration. Through an open platform of idea exchange, we create a vibrant environment for growth and excellence.
Package: up to 20L
Experience: 8 yrs
Description
DevOps Engineer / SRE
- Understanding of maintaining existing systems (virtual machines) and the Linux stack
- Experience running, operating, and maintaining Kubernetes pods
- Strong Scripting skills
- Experience in AWS
- Knowledge of configuring/optimizing open source tools like Kafka, etc.
- Strong automation maintenance - ability to identify opportunities to speed up build and deploy process with strong validation and automation
- Optimizing and standardizing monitoring and alerting.
- Experience in Google cloud platform
- Experience/ Knowledge in Python will be an added advantage
- Experience with monitoring and infrastructure tools like Jenkins, Kubernetes, Nagios, Terraform, etc.
- Expert troubleshooting skills.
- Expertise in designing highly secure cloud services and cloud infrastructure using AWS (EC2, RDS, S3, ECS, Route53).
- Experience with DevOps tools including Docker, Ansible, and Terraform.
- Experience with monitoring tools such as DataDog and Splunk.
- Experience building and maintaining large-scale infrastructure in AWS, including experience leveraging one or more coding languages for automation.
- Experience providing 24x7 on-call production support.
- Understanding of best practices, industry standards, and repeatable, supportable processes.
- Knowledge and working experience of container-based deployments such as Docker, Terraform, and AWS ECS.
- Knowledge and working experience of TCP/IP, DNS, certs, and networking concepts.
- Knowledge and working experience of the CI/CD development pipeline and the CI/CD maturity model (Jenkins).
- Strong core Linux OS skills, shell scripting, and Python scripting.
- Working experience of modern engineering operations duties, including providing the necessary tools and infrastructure to support high-performance Dev and QA teams.
- Database/MySQL administration skills are a plus.
- Prior work in high-load and high-traffic infrastructure is a plus.
- Clear vision of and commitment to providing outstanding customer service.
- Equal Experts is an innovative software delivery consultancy specializing in the delivery of custom software solutions for blue-chip enterprise and public sector clients across a range of industry sectors.
- We deliver market-leading propositions across the digital, online and mobile channels, and are recognized for our leadership in the application of Agile and Lean delivery methods to assure delivery.
- We are focused on hiring DevOps Engineers with skills in AWS, Automation tools, CI/CD tools and the likes.
- The DevOps Consultant will be working with an on-site client team and will be expected to mentor and share their skills and knowledge with the existing client developers.
Your Responsibilities Include:
- Continuous Delivery of quality software created by our project teams. Ensuring smooth build and release with continuous integration.
- Configuration Management - Design and implementation of deployment strategies for multiple projects.
- Virtualization and Infrastructure Provisioning.
- Maintenance and upgrade of our cloud environment (AWS).
- Version control and source code administration.
- Administration of Web Servers, Application Servers and Servlet Containers
- Setting up and managing the Automation efforts for multiple projects.
Technical Expertise:
- Should have worked on Agile projects featuring weekly iterations and releases.
- Should have extensive hands-on experience with:
- Continuous Integration tools like Jenkins.
- Configuration Management tools like Terraform or Ansible.
- Cloud computing using AWS, EC2.
- Hands-on experience on Kubernetes.
- Virtualization tools like Vagrant, Docker or VMWare
- You have an active presence in the DevOps community through your blogs, Stack Overflow and GitHub profiles.
- You are passionate about DevOps. You love to mentor people and evangelize about best practices and innovations in DevOps.
A.P.T Portfolio is a high-frequency trading firm that specialises in quantitative trading and investment strategies. Founded in November 2009, it has been a major liquidity provider in global stock markets.
As a manager, you would be in charge of the DevOps team, and your remit shall include the following:
- Private Cloud - Design & maintain a high performance and reliable network architecture to support HPC applications
- Scheduling Tool - Implement and maintain an HPC scheduling technology like Kubernetes, Hadoop YARN, Mesos, HTCondor, or Nomad for processing and scheduling analytical jobs. Implement controls that allow analytical jobs to seamlessly utilize idle capacity on the private cloud.
- Security - Implement security best practices and a data isolation policy between different divisions internally.
- Capacity Sizing - Monitor private cloud usage and share details with different teams. Plan capacity enhancements on a quarterly basis.
- Storage solution - Optimize storage solutions like NetApp, EMC, Quobyte for analytical jobs. Monitor their performance on a daily basis to identify issues early.
- NFS - Implement and optimize latest version of NFS for our use case.
- Public Cloud - Drive AWS/Google-Cloud utilization in the firm for increasing efficiency, improving collaboration and for reducing cost. Maintain the environment for our existing use cases. Further explore potential areas of using public cloud within the firm.
- Backups - Identify and automate backup of all crucial data, binaries, code, etc. in a secure manner, at intervals warranted by the use case. Ensure that recovery from backup is tested and seamless.
- Access Control - Maintain passwordless access control and improve security over time. Minimize failures of automated jobs due to unsuccessful logins.
- Operating System - Plan, test, and roll out new operating systems for all production, simulation, and desktop environments. Work closely with developers to highlight new performance-enhancement capabilities of new versions.
- Configuration Management - Work closely with the DevOps/development team to freeze configurations/playbooks for various teams and internal applications. Deploy and maintain standard tools such as Ansible, Puppet, Chef, etc. for the same.
- Data Storage & Security Planning - Maintain tight control of root access on various devices. Ensure root access is rolled back as soon as the desired objective is achieved.
- Audit access logs on devices. Use third-party tools to put in place a monitoring mechanism for early detection of any suspicious activity.
- Maintaining all third-party tools used for development and collaboration - this includes maintaining a fault-tolerant environment for Git/Perforce, productivity tools such as Slack/Microsoft Teams, and build tools like Jenkins/Bamboo, etc.
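The backup automation described above could be sketched in Python as timestamped archives plus retention pruning; the naming scheme and retention count are illustrative assumptions, not the firm's actual policy:

```python
import os
import shutil
from datetime import datetime, timezone

def make_backup(src_dir: str, dest_dir: str) -> str:
    """Create a timestamped tar.gz of src_dir under dest_dir and return its path."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    base = os.path.join(dest_dir, f"{os.path.basename(src_dir)}-{stamp}")
    # make_archive appends ".tar.gz" for the "gztar" format
    return shutil.make_archive(base, "gztar", root_dir=src_dir)

def prune_backups(dest_dir: str, keep: int = 7) -> list:
    """Delete all but the newest `keep` archives; return the removed filenames."""
    archives = sorted(
        (f for f in os.listdir(dest_dir) if f.endswith(".tar.gz")),
        reverse=True,  # the UTC timestamp in the name sorts newest-first
    )
    removed = []
    for name in archives[keep:]:
        os.remove(os.path.join(dest_dir, name))
        removed.append(name)
    return removed
```

Run from cron or a scheduler at whatever interval the use case warrants; a real setup would also verify restores, which this sketch omits.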
Qualifications
- Bachelors or Masters Level Degree, preferably in CSE/IT
- 10+ years of relevant experience in a sysadmin function
- Must have strong knowledge of IT infrastructure, Linux, networking, and grid computing.
- Must have a strong grasp of automation and data management tools.
- Proficient in scripting languages, including Python
Desirables
- Professional attitude; a co-operative and mature approach to work; must be focused, structured, and well-considered; strong troubleshooting skills.
- Exhibit a high level of individual initiative and ownership, effectively collaborate with other team members.
APT Portfolio is an equal opportunity employer
- Experience working in advanced iterative methodologies such as Agile and SAFe.
- Experience with containers and orchestration (Docker, Kubernetes).
- Comfortable working in complex and demanding environments with high degree of change.
- Ability to view system perspective and to perform thorough investigations.
- Experience in frequent delivery to production.
- Microservice-based architecture (Jenkins, Docker, CI/CD, ELK)
- Experience with modern software components (Mongo, Elasticsearch, Kafka).
- Expertise in software development methodologies.
- Understanding of protocols/technologies like HTTP, SSL, LDAP, SSH, SAML, etc.
- Possession of a deep knowledge of development workflows with Git.
- Experience with MySQL or another relational database Environment.
- Automation testing (component, integration, and end-to-end)