

About MSM Digital Private Limited

Job Details
- Job Title: DevOps and SRE Technical Project Manager
- Industry: Global digital transformation solutions provider
- Domain: Information Technology (IT)
- Experience Required: 12-15 years
- Employment Type: Full Time
- Job Location: Bangalore, Chennai, Coimbatore, Hosur & Hyderabad
- CTC Range: Best in Industry
Job Description
Company’s DevOps Practice is seeking a highly skilled DevOps and SRE Technical Project Manager to lead large-scale transformation programs for enterprise customers. The ideal candidate will bring deep expertise in DevOps and Site Reliability Engineering (SRE), combined with strong program management, stakeholder leadership, and the ability to drive end-to-end execution of complex initiatives.
Key Responsibilities
- Lead the planning, execution, and successful delivery of DevOps and SRE transformation programs for enterprise clients, including full oversight of project budgets, financials, and margins.
- Partner with senior stakeholders to define program objectives, roadmaps, milestones, and success metrics aligned with business and technology goals.
- Develop and implement actionable strategies to optimize development, deployment, release management, observability, and operational workflows across client environments.
- Provide technical leadership and strategic guidance to cross-functional engineering teams, ensuring alignment with industry standards, best practices, and company delivery methodologies.
- Identify risks, dependencies, and blockers across programs, and proactively implement mitigation and contingency plans.
- Monitor program performance, KPIs, and financial health; drive corrective actions and margin optimization where necessary.
- Facilitate strong communication, collaboration, and transparency across engineering, product, architecture, and leadership teams.
- Deliver periodic program updates to internal and client stakeholders, highlighting progress, risks, challenges, and improvement opportunities.
- Champion a culture of continuous improvement, operational excellence, and innovation by encouraging adoption of emerging DevOps, SRE, automation, and cloud-native practices.
- Support GitHub migration initiatives, including planning, execution, troubleshooting, and governance setup for repository and workflow migrations.
Requirements
- Bachelor’s degree in Computer Science, Engineering, Business Administration, or a related technical discipline.
- 15+ years of IT experience, including at least 5 years in a managerial or program leadership role.
- Proven experience leading large-scale DevOps and SRE transformation programs with measurable business impact.
- Strong program management expertise, including planning, execution oversight, risk management, and financial governance.
- Solid understanding of Agile methodologies (Scrum, Kanban) and modern software development practices.
- Deep hands-on knowledge of DevOps principles, CI/CD pipelines, automation frameworks, Infrastructure as Code (IaC), and cloud-native tooling.
- Familiarity with SRE practices such as service reliability, observability, SLIs/SLOs, incident management, and performance optimization.
- Experience with GitHub migration projects—including repository analysis, migration planning, tooling adoption, and workflow modernization.
- Excellent communication, stakeholder management, and interpersonal skills with the ability to influence and lead cross-functional teams.
- Strong analytical, organizational, and problem-solving skills with a results-oriented mindset.
- Preferred certifications: PMP, PgMP, ITIL, Agile/Scrum Master, or relevant technical certifications.
Skills: DevOps Tools, Cloud Infrastructure, Team Management
Must-Haves
DevOps principles (5+ years), SRE practices (5+ years), GitHub migration (3+ years), CI/CD pipelines (5+ years), Agile methodologies (5+ years)
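As an illustration of the SRE practices listed among the must-haves (SLIs/SLOs), the core bookkeeping behind an availability SLO can be sketched in a few lines. This is a minimal sketch, not part of the role description; the SLO target and request counts are made-up illustration values:

```python
# Hypothetical sketch: computing an availability SLI and the remaining
# error budget for an SLO. All numbers are illustration values.

def availability_sli(good_requests: int, total_requests: int) -> float:
    """Fraction of requests that met the service-level indicator."""
    if total_requests == 0:
        return 1.0
    return good_requests / total_requests

def error_budget_remaining(sli: float, slo_target: float) -> float:
    """Share of the error budget still unspent (negative means overspent)."""
    allowed_failure = 1.0 - slo_target   # e.g. 0.001 for a 99.9% SLO
    actual_failure = 1.0 - sli
    return 1.0 - actual_failure / allowed_failure

sli = availability_sli(good_requests=999_500, total_requests=1_000_000)
budget = error_budget_remaining(sli, slo_target=0.999)
print(f"SLI: {sli:.4%}, error budget remaining: {budget:.0%}")
```

With a 99.9% target, serving 999,500 good requests out of 1,000,000 spends half the budget, leaving 50% remaining.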
Notice period: 0 to 15 days only
DevOps Engineer
AiSensy
Gurugram, Haryana, India (On-site)
About AiSensy
AiSensy is a WhatsApp-based Marketing & Engagement platform helping businesses like Adani, Delhi Transport Corporation, Yakult, Godrej, Aditya Birla Hindalco, Wipro, Asian Paints, India Today Group, Skullcandy, Vivo, Physicswallah, and Cosco grow their revenues via WhatsApp.
- Enabling 100,000+ Businesses with WhatsApp Engagement & Marketing
- 400+ crore WhatsApp messages exchanged between businesses and users via AiSensy per year
- Working with top brands like Delhi Transport Corporation, Vivo, Physicswallah & more
- High Impact as Businesses drive 25-80% Revenues using AiSensy Platform
- Mission-Driven and Growth Stage Startup backed by Marsshot.vc, Bluelotus.vc & 50+ Angel Investors
Now, we’re looking for a DevOps Engineer to help scale our infrastructure and optimize performance for millions of users. 🚀
What You’ll Do (Key Responsibilities)
🔹 CI/CD & Automation:
- Implement, manage, and optimize CI/CD pipelines using AWS CodePipeline, GitHub Actions, or Jenkins.
- Automate deployment processes to improve efficiency and reduce downtime.
🔹 Infrastructure Management:
- Use Terraform, Ansible, Chef, Puppet, or Pulumi to manage infrastructure as code.
- Deploy and maintain Dockerized applications on Kubernetes clusters for scalability.
🔹 Cloud & Security:
- Work extensively with AWS (Preferred) or other cloud platforms to build and maintain cloud infrastructure.
- Optimize cloud costs and ensure security best practices are in place.
🔹 Monitoring & Troubleshooting:
- Set up and manage monitoring tools like CloudWatch, Prometheus, Datadog, New Relic, or Grafana to track system performance and uptime.
- Proactively identify and resolve infrastructure-related issues.
🔹 Scripting & Automation:
- Use Python or Bash scripting to automate repetitive DevOps tasks.
- Build internal tools for system health monitoring, logging, and debugging.
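The kind of internal health-monitoring tool described above can be sketched briefly. This is a hypothetical example, not AiSensy's actual tooling; the endpoint URL is made up, and the HTTP fetcher is injectable so the classification logic works offline:

```python
# Minimal health-check sketch (hypothetical endpoints). The fetcher is
# injectable so the up/down classification can run without network access;
# the default fetcher uses only the standard library.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def default_fetch(url: str) -> int:
    """Return the HTTP status code for url, or 0 if unreachable."""
    try:
        with urlopen(url, timeout=5) as resp:
            return resp.status
    except HTTPError as err:
        return err.code
    except URLError:
        return 0

def check_endpoints(urls, fetch=default_fetch):
    """Map each URL to 'up' (2xx/3xx) or 'down'."""
    return {url: "up" if 200 <= fetch(url) < 400 else "down" for url in urls}

if __name__ == "__main__":
    # Offline demo with a stubbed fetcher; swap in default_fetch for real checks.
    print(check_endpoints(["https://api.example.com/health"], fetch=lambda u: 200))
```

In production this would typically feed an alerting channel rather than print, but the separation of fetching from classification is the useful pattern.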
What We’re Looking For (Must-Have Skills)
✅ Version Control: Proficiency in Git (GitLab / GitHub / Bitbucket)
✅ CI/CD Tools: Hands-on experience with AWS CodePipeline, GitHub Actions, or Jenkins
✅ Infrastructure as Code: Strong knowledge of Terraform, Ansible, Chef, or Pulumi
✅ Containerization & Orchestration: Experience with Docker & Kubernetes
✅ Cloud Expertise: Hands-on experience with AWS (Preferred) or other cloud providers
✅ Monitoring & Alerting: Familiarity with CloudWatch, Prometheus, Datadog, or Grafana
✅ Scripting Knowledge: Python or Bash for automation
Bonus Skills (Good to Have, Not Mandatory)
➕ AWS Certifications: Solutions Architect, DevOps Engineer, Security, Networking
➕ Experience with Microsoft/Linux/F5 Technologies
➕ Hands-on knowledge of Database servers
Job Requirements
Required Experience
- 5–6 years of hands-on experience in Salesforce DevOps, release engineering, or deployment management.
- Strong expertise in Salesforce deployment processes, including CI/CD pipelines.
- Significant hands-on experience with at least two of the following tools: Gearset, Copado, Flosum.
- Solid understanding of Salesforce architecture, metadata, and development lifecycle.
- Familiarity with version control systems (e.g., Git) and agile methodologies.
Key Responsibilities
- Design, implement, and manage CI/CD pipelines for Salesforce deployments using Gearset, Copado, or Flosum.
- Automate and optimize deployment processes to ensure efficient, reliable, and repeatable releases across Salesforce environments.
- Collaborate with development, QA, and operations teams to gather requirements and ensure alignment of deployment strategies.
- Monitor, troubleshoot, and resolve deployment and release issues.
- Maintain documentation for deployment processes and provide training on best practices.
- Stay updated on the latest Salesforce DevOps tools, features, and best practices.
Technical Skills
- Deployment Tools: Hands-on with Gearset, Copado, Flosum for Salesforce deployments
- CI/CD: Building and maintaining pipelines, automation, and release management
- Version Control: Proficiency with Git and related workflows
- Salesforce Platform: Understanding of metadata, SFDX, and environment management
- Scripting: Familiarity with scripting (e.g., Shell, Python) for automation (preferred)
- Communication: Strong written and verbal communication skills
Preferred Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
Certifications
- Salesforce certifications (e.g., Salesforce Administrator, Platform Developer I/II) are a plus.
- Experience with additional DevOps tools (Jenkins, GitLab, Azure DevOps) is beneficial.
- Experience with Salesforce DX and deployment strategies for large-scale orgs.
What does a successful Senior DevOps Engineer do at Fiserv?
This role focuses on contributing to and enhancing our DevOps environment within the Issuer Solutions group, where our cross-functional Scrum teams deliver solutions built on cutting-edge mobile technology and products. You will be expected to provide support across the wider business unit, leading DevOps practices and initiatives.
What will you do:
• Build, manage, and deploy CI/CD pipelines.
• Work with Helm charts, Rundeck, and OpenShift.
• Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines.
• Implement development, testing, and automation tools, along with the supporting IT infrastructure.
• Optimize and automate release/development cycles and processes.
• Be part of and help promote our DevOps culture.
• Identify and implement continuous improvements to the development practice
What you must have:
• 3+ years of experience in DevOps with hands-on experience in the following:
- Writing automation scripts for deployments and housekeeping using shell scripts (Bash) and Ansible playbooks
- Building Docker images and running/managing Docker instances
- Building Jenkins pipelines using Groovy scripts
- Working knowledge of Kubernetes, including application deployments, managing application configurations, and persistent volumes
• Good understanding of infrastructure as code
• Ability to write and update documentation
• A logical, process-oriented approach to problems and troubleshooting
• Ability to collaborate with multiple development teams
What you are preferred to have:
• 8+ years of development experience
• Jenkins administration experience
• Hands-on experience in building and deploying helm charts
Process Skills:
• Experience working on Agile projects
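The deployment and housekeeping automation mentioned in the requirements above can be as simple as a scheduled cleanup script. A minimal sketch (the log directory and retention window are hypothetical illustration values, not part of the role):

```python
# Hypothetical housekeeping sketch: delete *.log files older than a
# retention window. Directory and retention period are illustration values.
import time
from pathlib import Path

def prune_old_logs(log_dir: Path, max_age_days: int = 14) -> list:
    """Delete .log files older than max_age_days; return what was removed."""
    if not log_dir.is_dir():
        return []
    cutoff = time.time() - max_age_days * 86_400
    removed = []
    for path in log_dir.glob("*.log"):
        if path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path)
    return removed

if __name__ == "__main__":
    removed = prune_old_logs(Path("/var/log/myapp"), max_age_days=14)
    print(f"removed {len(removed)} stale log files")
```

In practice a script like this would run from cron or a systemd timer; the same logic is often expressed as an Ansible playbook task instead.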
We are looking for very hands-on DevOps engineers with 3 to 6 years of experience. The person will be part of a team responsible for designing and implementing automation from scratch for medium- to large-scale cloud infrastructure and providing 24x7 services to our North American and European customers. This also includes ensuring near-100% uptime for 50+ internal sites. The person is expected to deliver with both high speed and high quality, and to work 40 hours per week (~6.5 hours per day, 6 days per week) in shifts that rotate every month.
This person MUST have:
- B.E. in Computer Science or equivalent
- 2+ years of hands-on experience troubleshooting and setting up Linux environments, with the ability to write shell scripts for any given requirement
- 1+ years of hands-on experience setting up and configuring AWS or GCP services from SCRATCH and maintaining them
- 1+ years of hands-on experience setting up and configuring Kubernetes & EKS and ensuring high availability of container orchestration
- 1+ years of hands-on experience setting up CI/CD from SCRATCH in Jenkins & GitLab
- Experience configuring and maintaining at least one monitoring tool
- Excellent verbal & written communication skills
- Candidates with certifications (AWS, GCP, CKA, etc.) will be preferred
- Hands-on experience with databases (Cassandra, MongoDB, MySQL, RDS)
Experience:
- Minimum 3 years of experience as a DevOps automation engineer building, running, and maintaining production sites
- Not looking for candidates with experience only in L1/L2 support or Build & Deploy roles
Location: Remote, anywhere in India.
Timings:
- Rotating monthly shifts; 40 hours per week (~6.5 hours per day, 6 days per week), as noted above
Position:
- Full-time/Direct
- Great benefits: PF, medical insurance, 12 annual company holidays, 12 PTO days per year, annual increments, Diwali bonus, spot bonuses, and other incentives
- We don't believe in locking people in with long notice periods. You will stay here because you love the company. Our notice period is only 15 days.
We're Hiring: DevOps Tech Lead with 7-9 Years of Experience! 🚀
Are you a seasoned DevOps professional with a passion for cloud technologies and automation? We have an exciting opportunity for a DevOps Tech Lead to join our dynamic team at our Gurgaon office.
🏢 ZoomOps Technology Solutions Private Limited
📍 Location: Gurgaon
💼 Full-time position
🔧 Key Skills & Requirements:
✔ 7-9 years of hands-on experience in DevOps roles
✔ Proficiency in Cloud Platforms like AWS, GCP, and Azure
✔ Strong background in Solution Architecture
✔ Expertise in writing Automation Scripts using Python and Bash
✔ Ability to manage IaC and configuration management (CM) tools such as Terraform, Ansible, and Pulumi
Responsibilities:
🔹 Lead and mentor the DevOps team, driving innovation and best practices
🔹 Design and implement robust CI/CD pipelines for seamless software delivery
🔹 Architect and optimize cloud infrastructure for scalability and efficiency
🔹 Automate manual processes to enhance system reliability and performance
🔹 Collaborate with cross-functional teams to drive continuous improvement
Join us to work on exciting projects and make a significant impact in the tech space!
Apply now and take the next step in your DevOps career!
About Hive
Hive is the leading provider of cloud-based AI solutions for content understanding, trusted by the world’s largest, fastest growing, and most innovative organizations. The company empowers developers with a portfolio of best-in-class, pre-trained AI models, serving billions of customer API requests every month. Hive also offers turnkey software applications powered by proprietary AI models and datasets, enabling breakthrough use cases across industries. Together, Hive’s solutions are transforming content moderation, brand protection, sponsorship measurement, context-based ad targeting, and more.
Hive has raised over $120M in capital from leading investors, including General Catalyst, 8VC, Glynn Capital, Bain & Company, Visa Ventures, and others. We have over 250 employees globally in our San Francisco, Seattle, and Delhi offices. Please reach out if you are interested in joining the future of AI!
About Role
Our unique machine learning needs led us to open our own data centers, with an emphasis on distributed high-performance computing integrating GPUs. Even with these data centers, we maintain a hybrid infrastructure, using public clouds when they are the right fit. As we continue to commercialize our machine learning models, we also need to grow our DevOps and Site Reliability team to maintain the reliability of our enterprise SaaS offering for our customers. Our ideal candidate is someone who is able to thrive in an unstructured environment and takes automation seriously. You believe there is no task that can’t be automated and no server scale too large. You take pride in optimizing performance at scale in every part of the stack and never manually performing the same task twice.
Responsibilities
● Create tools and processes for deploying and managing hardware for Private Cloud Infrastructure.
● Improve workflows of developer, data, and machine learning teams
● Manage integration and deployment tooling
● Create and maintain monitoring and alerting tools and dashboards for various services, and audit infrastructure
● Manage a diverse array of technology platforms, following best practices and procedures
● Participate in on-call rotation and root cause analysis
Requirements
● Minimum 5-10 years of previous experience working directly with Software Engineering teams as a developer, DevOps Engineer, or Site Reliability Engineer
● Experience with infrastructure as a service, distributed systems, and software design at a high level
● Comfortable working on Linux infrastructures (Debian) via the CLI
● Able to learn quickly in a fast-paced environment
● Able to debug, optimize, and automate routine tasks
● Able to multitask, prioritize, and manage time efficiently independently
● Can communicate effectively across teams and management levels
● Degree in computer science, or similar, is an added plus!
Technology Stack
● Operating Systems - Linux/Debian Family/Ubuntu
● Configuration Management - Chef
● Containerization - Docker
● Container Orchestrators - Mesosphere/Kubernetes
● Scripting Languages - Python/Ruby/Node/Bash
● CI/CD Tools - Jenkins
● Network hardware - Arista/Cisco/Fortinet
● Hardware - HP/SuperMicro
● Storage - Ceph, S3
● Database - Scylla, Postgres, Pivotal GreenPlum
● Message Brokers: RabbitMQ
● Logging/Search - ELK Stack
● AWS: VPC/EC2/IAM/S3
● Networking: TCP/IP, ICMP, SSH, DNS, HTTP, SSL/TLS; storage systems: RAID, distributed file systems, NFS/iSCSI/CIFS
Who we are
We are a group of ambitious individuals who are passionate about creating a revolutionary AI company. At Hive, you will have a steep learning curve and an opportunity to contribute to one of the fastest-growing AI start-ups in San Francisco. The work you do here will have a noticeable and direct impact on the development of the company.
Thank you for your interest in Hive, and we hope to meet you soon.
Job Summary
You'll meticulously analyze project requirements and drive the development of highly robust, scalable, and easily maintainable backend applications. You'll work independently, and you'll have the support and opportunity to thrive in a fast-paced environment.
Responsibilities and Duties:
- building and setting up new development tools and infrastructure
- understanding the needs of stakeholders and conveying this to developers
- working on ways to automate and improve development and release processes
- testing and examining code written by others and analysing results
- ensuring that systems are safe and secure against cybersecurity threats
- identifying technical problems and developing software updates and ‘fixes’
- working with software developers and software engineers to ensure that development follows established processes and works as intended
- planning out projects and being involved in project management decisions
Skill Requirements:
- Managing GitHub (e.g., creating branches for test, QA, development, and production; creating release tags; resolving merge conflicts)
- Setting up servers for projects in either AWS or Azure (test, development, QA, staging, and production)
- Configuring AWS S3 and S3 static website hosting; archiving data from S3 to S3 Glacier
- Deploying builds (applications) to servers using AWS CI/CD and Jenkins (automated and manual)
- AWS networking and content delivery (VPC, Route 53, and CloudFront)
- Managing databases and data stores like RDS, Snowflake, Athena, Redis, and Elasticsearch
- Managing IAM roles and policies for services like Lambda, SNS, Amazon Cognito, Secrets Manager, Certificate Manager, GuardDuty, Inspector, EC2, and S3
- AWS analytics (Elasticsearch, Athena, Glue, and Kinesis)
- AWS containers (Elastic Container Registry, Elastic Container Service, Elastic Kubernetes Service, Docker Hub, and Docker Compose)
- AWS Auto Scaling groups (launch configurations, launch templates) and load balancers
- EBS (snapshots, volumes, and AMIs)
- AWS CI/CD buildspec scripting, Jenkins Groovy scripting, shell scripting, and Python scripting
- SageMaker, Textract, Forecast, Lightsail
- Android and iOS automation builds
- Monitoring tools like CloudWatch, CloudWatch log groups, alarms, metric dashboards, SNS (Simple Notification Service), SES (Simple Email Service)
- Amazon MQ
- Operating systems: Linux and Windows
- X-Ray, Cloud9, CodeStar
- Fluent shell scripting
- Soft skills
- Scripting skills; good-to-have knowledge of Python, JavaScript, Java, Node.js
- Knowledge of various DevOps tools and technologies
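The S3-to-Glacier archiving mentioned in the skills above is normally done with a bucket lifecycle rule rather than by copying objects by hand. A sketch of building that configuration document (the prefix and 90-day threshold are hypothetical illustration values; the resulting JSON can be applied with `aws s3api put-bucket-lifecycle-configuration`):

```python
# Sketch of an S3 lifecycle configuration that transitions objects to
# Glacier after a retention period. Prefix and day count are hypothetical.
import json

def glacier_lifecycle_config(prefix: str, transition_days: int) -> dict:
    """Build an S3 lifecycle configuration document as a plain dict."""
    return {
        "Rules": [
            {
                "ID": f"archive-{prefix.strip('/') or 'all'}-to-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": prefix},
                "Transitions": [
                    {"Days": transition_days, "StorageClass": "GLACIER"}
                ],
            }
        ]
    }

config = glacier_lifecycle_config(prefix="logs/", transition_days=90)
print(json.dumps(config, indent=2))
# Apply with, e.g.:
#   aws s3api put-bucket-lifecycle-configuration \
#     --bucket <bucket> --lifecycle-configuration file://lifecycle.json
```

Using a lifecycle rule keeps the archiving server-side and idempotent, instead of a scheduled copy job that has to track what was already moved.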
Qualifications and Skills
Job Type: Full-time
Experience: 4 - 7 yrs
Qualification: BE/BTech/MCA
Location: Bengaluru, Karnataka
