
Site Reliability / DevOps Engineer
at Altimetrik
Platform Services Engineer
DevSecOps Engineer
- Strong systems experience: Linux, networking, cloud, APIs
- Scripting and programming: Shell, Python
- Strong debugging capability
- AWS platform: IAM, networking, EC2, Lambda, S3, CloudWatch (see the sketch after this list)
- Knowledge of Terraform, Packer, Ansible, Jenkins
- Observability: Prometheus, InfluxDB, Dynatrace, Grafana, Splunk
- DevSecOps / CI/CD: Jenkins
- Microservices
- Security & Access Management
- Container orchestration a plus: Kubernetes, Docker, etc.
- Big data platform knowledge: EMR, Databricks; Cloudera a plus
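For illustration, a minimal Python sketch of the kind of AWS scripting this role describes. The use of boto3, the region, and the metric namespace are assumptions for the example, not requirements stated in the posting.

```python
# Hypothetical sketch: count running EC2 instances and publish the count as a
# custom CloudWatch metric. Region and namespace are illustrative only.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]
running = sum(len(r["Instances"]) for r in reservations)

cloudwatch.put_metric_data(
    Namespace="Custom/Fleet",  # assumed namespace
    MetricData=[{"MetricName": "RunningInstances", "Value": running, "Unit": "Count"}],
)
print(f"Running instances: {running}")
```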

Role Purpose: Maintain and enhance the IaC-driven cloud infrastructure, pipelines, and environments after the current team's exit.
Key Skills:
- Azure DevOps Services (Repos, Pipelines, Artifacts)
- Terraform (advanced) – infrastructure provisioning
- CI/CD pipeline design and automation
- ARM templates, Bicep, or equivalent (if used alongside Terraform)
- Monitoring & Logging – Azure Monitor, Log Analytics
- Security & Compliance – Azure Policies, RBAC, NSGs, etc.
- Networking Basics – VNets, subnets, peering, firewalls
Experience Level:
- 5+ years total experience in cloud infrastructure
- 3+ years hands-on in Azure and Terraform
- Should have delivered or supported production-grade, IaC-managed platforms
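As a hedged illustration of this Azure-focused profile, a short Python sketch that lists resource groups and flags any missing an owner tag. The azure-identity/azure-mgmt-resource libraries, the tag name, and the placeholder subscription ID are assumptions; a team like this would more likely enforce tagging through Terraform or Azure Policy.

```python
# Illustrative sketch: flag resource groups without an "owner" tag.
# Subscription ID and tag name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, "<subscription-id>")

for group in client.resource_groups.list():
    tags = group.tags or {}
    if "owner" not in tags:
        print(f"Resource group {group.name} has no 'owner' tag")
```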
• Bachelor’s or master’s degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. At least 3 years of relevant experience in lieu of the above if from a different stream of education.
• Well-versed in DevOps principles & practices, with hands-on DevOps tool-chain integration experience: Release Orchestration & Automation, Source Code & Build Management, Code Quality & Security Management, Behavior Driven Development, Test Driven Development, Continuous Integration, Continuous Delivery, Continuous Deployment, and Operational Monitoring & Management; extra points if you can demonstrate your knowledge with working examples (see the sketch after these bullets).
• Hands-on, demonstrable working experience with DevOps tools and platforms such as Slack, Jira, Git, Jenkins, code quality & security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, Tekton, StackStorm, Prometheus, Grafana, ELK, PagerDuty, VictorOps, etc.
• Well-versed in Virtualization & Containerization; must demonstrate experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
• Well-versed in AWS and/or Azure and/or Google Cloud; must demonstrate experience in at least FIVE (5) services offered under AWS and/or Azure and/or Google Cloud across any of the categories Compute, Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, or Security, Identity, & Compliance; or equivalent demonstrable Cloud Platform experience.
• Well-versed, with demonstrable working experience, in API Management, API Gateway, Service Mesh, Identity & Access Management, and Data Protection & Encryption tools & platforms.
• Hands-on programming experience in core Java and/or Python and/or JavaScript and/or Scala; fresh graduates and lateral movers into IT must be able to code in the languages they have studied.
• Well-versed with Storage, Networks, and Storage Networking basics, which will enable you to work in a Cloud environment.
• Well-versed with Network, Data, and Application Security basics, which will enable you to work in a Cloud as well as Business Applications / API services environment.
• Extra points if you are certified in AWS and/or Azure and/or Google Cloud.
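As a hedged working example of the test-driven style named above, a minimal pytest sketch; `parse_semver` is a hypothetical helper written only for this illustration, not part of any library or of the role's stack.

```python
# Minimal test-first sketch. `parse_semver` is a hypothetical helper.
import pytest

def parse_semver(version: str) -> tuple[int, int, int]:
    """Split a 'major.minor.patch' string into integers."""
    major, minor, patch = version.split(".")
    return int(major), int(minor), int(patch)

def test_parse_semver_happy_path():
    assert parse_semver("1.4.2") == (1, 4, 2)

def test_parse_semver_rejects_garbage():
    # Unpacking a single-element split raises ValueError.
    with pytest.raises(ValueError):
        parse_semver("not-a-version")
```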
As a DevOps Engineer, you’ll play a key role in managing our cloud infrastructure, automating deployments, and ensuring high availability across our global server network. You’ll work closely with our technical team to optimize performance and scalability.
Responsibilities
✅ Design, implement, and manage cloud infrastructure (primarily Azure)
✅ Automate deployments using CI/CD pipelines (GitHub Actions, Jenkins, or equivalent)
✅ Monitor and optimize server performance & uptime (100% uptime goal)
✅ Work with cPanel-based hosting environments and ensure seamless operation
✅ Implement security best practices & compliance measures
✅ Troubleshoot system issues, scale infrastructure, and enhance reliability
Requirements
🔹 3-7 years of DevOps experience in cloud environments (Azure preferred)
🔹 Hands-on expertise in CI/CD tools (GitHub Actions, Jenkins, etc.)
🔹 Proficiency in Terraform, Ansible, Docker, Kubernetes
🔹 Strong knowledge of Linux system administration & networking
🔹 Experience with monitoring tools (Prometheus, Grafana, ELK, etc.)
🔹 Security-first mindset & automation-driven approach
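For illustration of the monitoring requirement above, a small Python sketch that exposes a custom gauge for Prometheus to scrape using the prometheus_client library; the port, metric name, and the stubbed health check are assumptions.

```python
# Illustrative only: expose a health gauge on a /metrics endpoint.
import time
import random
from prometheus_client import Gauge, start_http_server

service_up = Gauge("service_up", "1 if the service passed its health check")

def check_health() -> bool:
    # Stand-in for a real HTTP health probe.
    return random.random() > 0.05

if __name__ == "__main__":
    start_http_server(9100)  # Prometheus scrapes http://host:9100/metrics
    while True:
        service_up.set(1 if check_health() else 0)
        time.sleep(15)
```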
Why Join Us?
🚀 Work at a fast-growing startup backed by Microsoft
💡 Lead high-impact DevOps projects in a cloud-native environment
🌍 Hybrid work model with flexibility in Bangalore, Delhi, or Mumbai
💰 Competitive salary ₹12-30 LPA based on experience
How to Apply?
📩 Apply now & follow us for future updates:
🔗 X (Twitter): https://x.com/CygenHost
🔗 LinkedIn: https://www.linkedin.com/company/cygen-host/
🔗 Instagram: https://www.instagram.com/cygenhost
Experience: 8 to 10 years; notice period: 0 to 20 days
Job Description :
- Provision GCP resources based on the architecture design and features aligned with business objectives
- Monitor resource availability and usage metrics, and provide guidelines for cost and performance optimization
- Assist IT/business users in resolving GCP service-related issues
- Provide guidelines for cluster automation and migration approaches and techniques, including how to ingest, store, process, analyse, and explore/visualise data
- Provision GCP resources for data engineering and data science projects
- Assist with automated data ingestion, data migration, and transformation (good to have)
- Assist with deployment and troubleshooting of applications in Kubernetes
- Establish connections and credibility in addressing business needs through the design and operation of cloud-based data solutions
Key Responsibilities / Tasks:
- Building complex CI/CD pipelines for cloud-native PaaS services such as databases, messaging, storage, and compute in Google Cloud Platform
- Building deployment pipelines with GitHub Actions
- Writing Terraform code to deploy infrastructure as code
- Working with deployment and troubleshooting of Docker, GKE, OpenShift, and Cloud Run
- Working with Cloud Build, Cloud Composer, and Dataflow
- Configuring software to be monitored by AppDynamics
- Configuring Stackdriver logging and monitoring in GCP
- Working with Splunk, Kibana, Prometheus, and Grafana to set up dashboards (see the sketch after this list)
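A hedged Python sketch of the kind of GCP provisioning this role implies, using the google-cloud-storage client; the bucket name and location are placeholders, and in practice such resources would more likely be managed through the Terraform code mentioned above.

```python
# Illustrative sketch: create a Cloud Storage bucket with a placeholder name.
from google.cloud import storage

client = storage.Client()  # relies on Application Default Credentials

bucket = client.bucket("example-ingest-bucket")  # hypothetical name
bucket.storage_class = "STANDARD"

new_bucket = client.create_bucket(bucket, location="europe-west1")
print(f"Created {new_bucket.name} in {new_bucket.location}")
```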
Your skills, experience, and qualifications:
- Total experience of 5+ years in DevOps; should have at least 4 years of experience in Google Cloud and GitHub CI.
- Should have strong experience in microservices/APIs.
- Should have strong experience in DevOps tools like GitHub CI, TeamCity, Jenkins, and Helm.
- Should know application deployment and testing strategies in Google Cloud Platform.
- Defining and setting development, test, release, update, and support processes for DevOps operations
- Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment (CI/CD) pipelines
- Excellent understanding of Java
- Knowledge of Kafka, ZooKeeper, Hazelcast, and Pub/Sub is nice to have.
- Understanding of cloud networking and security, such as software-defined networking/firewalls, virtual networks, and load balancers
- Understanding of cloud identity and access management
- Understanding of the compute runtime and the differences between native compute, virtual machines, and containers
- Configuring and managing databases such as Oracle, Cloud SQL, and Cloud Spanner
- Excellent troubleshooting skills
- Working knowledge of various tools and open-source technologies
- Awareness of critical concepts of Agile principles
- Certification as a Google Professional Cloud DevOps Engineer is desirable.
- Experience with Agile/SCRUM environment.
- Familiar with Agile Team management tools (JIRA, Confluence)
- Understand and promote Agile values: FROCC (Focus, Respect, Openness, Commitment, Courage)
- Good communication skills
- Pro-active team player
- Comfortable working in multi-disciplinary, self-organized teams
- Professional knowledge of English
- Differentiators: knowledge/experience about
As a MLOps Engineer in QuantumBlack you will:
Develop and deploy technology that enables data scientists and data engineers to build, productionize, and deploy machine learning models following best practices. Work to set the standards for SWE and DevOps practices within multi-disciplinary delivery teams.
Choose and use the right cloud services, DevOps tooling, and ML tooling for the team to be able to produce high-quality code that allows your team to release to production.
Build modern, scalable, and secure CI/CD pipelines to automate development and deployment workflows used by data scientists (ML pipelines) and data engineers (data pipelines).
Shape and support next-generation technology that enables scaling ML products and platforms. Bring expertise in cloud to enable ML use-case development, including MLOps.
Our Tech Stack:
We leverage AWS, Google Cloud, Azure, Databricks, Docker, Kubernetes, Argo, Airflow, Kedro, Python, Terraform, GitHub Actions, MLflow, Node.js, React, and TypeScript, amongst others, in our projects.
Key Skills:
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store); see the sketch after this list
• Practical knowledge delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or more preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
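As a hedged example of the experiment-tracking tooling mentioned above, a minimal MLflow sketch; the experiment name, parameters, and metric value are illustrative, and MLflow is only one of the tools the posting names.

```python
# Illustrative sketch: basic MLflow experiment tracking.
import mlflow

mlflow.set_experiment("demo-churn-model")  # hypothetical experiment name

with mlflow.start_run():
    mlflow.log_param("n_estimators", 200)
    mlflow.log_param("max_depth", 8)
    # ... model training would happen here ...
    mlflow.log_metric("val_auc", 0.91)  # placeholder metric value
```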
This person MUST have:
- B.E Computer Science or equivalent
- 2+ Years of hands-on experience troubleshooting/setting up of the Linux environment, who can write shell scripts for any given requirement.
- 1+ Years of hands-on experience setting up/configuring AWS or GCP services from SCRATCH and maintaining them.
- 1+ Years of hands-on experience setting up/configuring Kubernetes & EKS and ensuring high availability of container orchestration.
- 1+ Years of hands-on experience setting up CI/CD from SCRATCH in Jenkins & GitLab.
- Experience configuring/maintaining one monitoring tool.
- Excellent verbal & written communication skills.
- Candidates with certifications (AWS, GCP, CKA, etc.) will be preferred
- Hands-on experience with databases (Cassandra, MongoDB, MySQL, RDS).
Experience:
- Min 3 years of experience as an SRE/automation engineer building, running, and maintaining production sites. Not looking for candidates who have experience only as L1/L2.
Location:
- Remotely, anywhere in India
Timings:
- The person is expected to deliver with both high speed and high quality as well as work for 40 Hours per week (~6.5 hours per day, 6 days per week) in shifts which will rotate every month.
Position:
- Full time/Direct
- We have great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO leaves per year, annual increments, Diwali bonus, spot bonuses, and other incentives.
- We don't believe in locking in people with large notice periods. You will stay here because you love the company. We have only a 15-day notice period.
Requirements:
● Knowledge of building microservices.
● Experience in managing cloud infrastructure with disaster recovery and security in mind (AWS, GCP, Azure).
● Experience with High Availability cluster setup.
● Experience in creating alerting and monitoring strategies.
● Strong debugging skills.
● Experience with zero-downtime Continuous Delivery setups (Jenkins, AWS CodeDeploy, TeamCity, GoCD, etc.).
● Experience with Infrastructure as Code & automation tools (Bash, Ansible, Puppet, Chef, Terraform, etc.).
● Mastery of *nix systems, including working with Docker and process & network monitoring tools.
● Knowledge of monitoring tools like New Relic, AppDynamics, etc.
● Experience with messaging systems (RMQ, Kafka, etc.); see the sketch after this list.
● Knowledge of DevOps intelligence.
● Experience in setting up & driving DevOps initiatives inside the org.
● Good team player.
● Good to have: experience in Kubernetes cluster management.
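For the messaging-systems item above, a hedged Python sketch using the kafka-python client to publish a small JSON event; the broker address, topic, and payload are placeholders.

```python
# Illustrative only: publish a JSON event to a Kafka topic.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

producer.send("deploy-events", {"service": "api", "status": "deployed"})
producer.flush()
```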
Specific responsibilities are commensurate with experience and include:
- Ability to react quickly and effectively to identify and resolve issues that heavily impact CI/CD system (immediate mitigation of impact, long-term resolution including strategies for risk mitigation/monitoring/alert for proactive resolution of potential future occurrences)
- Design, develop, unit test, and implement build automation scripts including environment configuration validation processes
- Automate and improve development process by evaluation and introduction of new tools and scripts, and manage their life cycle and validation
- Determine branching strategy and maintain branches for various components, products, and product lines
- Come up with solutions to open-ended problems that focus on workflow improvements for the Software department
- Address issues with well-defined requirements efficiently; come up with short-term and long-term solutions and staged deployment strategies
- Self-driven: takes action to move tickets from start to completion with minimal oversight
- Ability to communicate with and consider perspectives of stakeholders including but not limited to: IT, software development, verification
- Ability to break down a problem into smaller components and solve them in a logical, controlled, clearly explainable approach
- Lead the creation and maintenance of a pre-production environment as a testbed for build process improvements and changes before deployment to the production environment
- Gather metrics via direct input and analysis of developer working habits and pain points to assess the current state and areas requiring further improvement
- Define chain of communication and immediate paths of action in the case of a build fault state
- Ability to work within constraints of the internal network without access to commercial cloud solutions
- Create metrics that define ‘efficiency’ and ‘reliability’ in measurable terms, and track them
- Perform static code and security analysis
- Design and execute unit tests and perform code coverage analysis
- Able to work in Agile development team environment
Key Requirement & Qualifications:
- Bachelor’s degree (or higher) in Electrical Engineering, Computer Engineering, Computer Science or equivalent
- 6+ years (minimum) experience handling Build, Release, and Deployment of software on Windows and/or Linux environments (on-premise)
- Experience with the development and deployment of CM processes and tools
- Build automation for .NET using TeamCity (Jenkins is an asset)
- Scripting languages: Windows batch scripting, PowerShell, Ant/NAnt
- Source control systems usage, branching strategies, and workflow (Git preferred, Subversion)
- 6+ years of hands-on programming experience with C# and .NET (both Framework and Core)
- Troubleshooting and debugging: what information to gather when there are issues with the CI/CD system, and how to gather it (e.g., analyzing network communication, Windows crash dumps, Java logs, etc.)
- 6+ years (minimum) in web/desktop application software development experience
- Excellent problem solving, critical and analytical thinking
- Strong team player who understands SDLC and QA methodologies
- A professional, results-oriented individual with a high degree of self-motivation
- Excellent written and verbal communication skills and the ability to coordinate work/activities with multiple software/IT teams
- Working with virtual machines and build management on virtual machines (VMware preferred).
- Managing configurations for multiple build environments
- OS administration and scripting experience (Windows is a must, Linux desired)
- Experience with test automation tools (NUnit, custom in-house frameworks) and strategies is an asset
- Creation and maintenance of monitoring and alert systems (Zabbix)
- Familiarity with databases (SQL-based) - create, modify, optimize (via script)
- Data and metrics gathering, aggregation, and reporting
- Experience with work management and documentation tools: JIRA and Confluence
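To illustrate the metrics-gathering and reporting items above, a hedged sketch that aggregates build results from a CSV export; the file name, column names, and the choice of Python (rather than the PowerShell this role favours) are assumptions for the example.

```python
# Illustrative sketch: summarize build durations and failure rate from a CSV
# export with assumed columns "duration_seconds" and "status".
import csv
import statistics

with open("builds.csv", newline="") as f:
    rows = list(csv.DictReader(f))  # assumed non-empty export

durations = sorted(float(r["duration_seconds"]) for r in rows)
failures = sum(1 for r in rows if r["status"] != "SUCCESS")

print(f"builds: {len(rows)}")
print(f"median duration: {statistics.median(durations):.1f}s")
print(f"p95 duration: {durations[int(0.95 * (len(durations) - 1))]:.1f}s")
print(f"failure rate: {failures / len(rows):.1%}")
```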
Who we are?
Searce is a niche Cloud Consulting business with a futuristic tech DNA. We do new-age tech to realise the “Next” in the “Now” for our Clients. We specialise in Cloud Data Engineering, AI/Machine Learning and Advanced Cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.
What we believe?
- Best practices are overrated
- Implementing best practices can only make one an ‘average’.
- Honesty and Transparency
- We believe in naked truth. We do what we tell and tell what we do.
- Client Partnership
- Client - Vendor relationship: No. We partner with clients instead.
- And our sales team comprises 100% of our clients.
How we work?
It’s all about being Happier first. And rest follows. Searce work culture is defined by HAPPIER.
- Humble: Happy people don’t carry ego around. We listen to understand; not to respond.
- Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
- Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
- Passionate: We are as passionate about the great street-food vendor across the street as about Tesla’s new model and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
- Innovative: Innovate or Die. We love to challenge the status quo.
- Experimental: We encourage curiosity & making mistakes.
- Responsible: Driven. Self motivated. Self governing teams. We own it.
Are you the one? Quick self-discovery test:
- Love for cloud: When was the last time your dinner entailed an act on “How would ‘Jerry Seinfeld’ pitch Cloud platform & products to this prospect” and your friend did the ‘Sheldon’ version of the same thing.
- Passion for sales: When was the last time you went to a remote gas station while on vacation and ended up helping the gas station owner SaaSify his 7 gas stations across other geographies?
- Compassion for customers: You listen more than you speak. When you do speak, people feel the need to listen.
- Humor for life: When was the last time you told a concerned CEO, ‘If Elon Musk can attempt to take humanity to Mars, why can’t we take your business to run on cloud?’
Your bucket of undertakings:
- This position is responsible for consulting with clients and proposing architectural solutions to help move and improve infrastructure from on-premises to cloud, or to help optimize cloud spend when moving from one public cloud to another
- Be the first one to experiment with new-age cloud offerings, help define best practices as a thought leader for cloud, automation & DevOps, and be a solution visionary and technology expert across multiple channels.
- Continually augment skills and learn new tech as the technology and client needs evolve
- Demonstrate knowledge of cloud architecture and implementation features (OS, multi-tenancy, virtualization, orchestration, elastic scalability)
- Use your experience in Google cloud platform, AWS or Microsoft Azure to build hybrid-cloud solutions for customers.
- Provide leadership to project teams, and facilitate the definition of project deliverables around core Cloud based technology and methods.
- Define tracking mechanisms and ensure IT standards and methodology are met; deliver quality results.
- Define optimal design patterns and solutions for high availability and disaster recovery for applications
- Participate in technical reviews of requirements, designs, code, and other artifacts; identify and keep abreast of new technical concepts in AWS
- Security, Risk and Compliance - Advise customers on best practices around access management, network setup, regulatory compliance and related areas
- Develop solutions architecture and evaluate architectural alternatives for private, public and hybrid cloud models, including IaaS, PaaS, and other cloud services
- Demonstrate leadership ability to back decisions with research and the “why,” and articulate several options, the pros and cons of each, and a recommendation
- Maintain overall industry knowledge on the latest trends, technology, etc.
- Contribute to DevOps development activities and complex development tasks
- Act as a Subject Matter Expert on cloud end-to-end architecture, including AWS and future providers, networking, provisioning, and management
Accomplishment Set
- Passionate, persuasive, articulate Cloud professional capable of quickly establishing interest and credibility
- Good business judgment, a comfortable, open communication style, and a willingness and ability to work with customers and teams.
- Strong service attitude and a commitment to quality. Highly organised and efficient. Confident working with others to inspire a high-quality standard.
Education, Experience, etc.
- To reiterate: Passion for tech, an insatiable desire to learn the latest new-age cloud tech, a highly analytical aptitude, and a strong ‘desire to deliver’ outlive those fancy degrees!
- 6-10 years of experience, with at least 5-6 years of hands-on experience in Cloud Computing (AWS/GCP/Azure) and IT operational experience in a global enterprise environment.
- Good analytical, communication, problem solving, and learning skills.
- Knowledge of programming against cloud platforms such as AWS, and of lean development methodologies.
Must-Haves:
- Hands-on DevOps (Git, Ansible, Terraform, Jenkins, Python/Ruby)
Job Description:
- Knowledge of what a DevOps CI/CD pipeline is
- Understanding of version control systems like Git, including branching and merging strategies
- Knowledge of continuous delivery and integration tools like Jenkins and GitHub
- Knowledge of developing code using Ruby or Python, and Java or PHP
- Knowledge of writing Unix shell (bash, ksh) scripts
- Knowledge of automation/configuration management using Ansible, Terraform, Chef, or Puppet
- Experience and willingness to keep learning in a Linux environment
- Ability to provide after-hours support as needed for emergency or urgent situations (see the sketch after this list)
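A hedged Python sketch of the kind of small automation this support work implies: poll a service health endpoint with a few retries before escalating. The URL, retry count, and interval are placeholders.

```python
# Illustrative sketch: retry a health endpoint before declaring the service down.
import sys
import time
import urllib.request

URL = "https://example.internal/healthz"  # placeholder endpoint

def healthy(url: str, timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

for attempt in range(3):
    if healthy(URL):
        print("service healthy")
        sys.exit(0)
    time.sleep(10)

print("service unhealthy after 3 attempts", file=sys.stderr)
sys.exit(1)
```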
Nice to haves:
- Proficient with container-based products like Docker and Kubernetes
- Excellent communication skills (verbal and written)
- Able to work in a team and be a team player
- Knowledge of PHP, MySQL, Apache and other open source software
- BA/BS in computer science or similar









