

We are now seeking a talented and motivated individual to contribute to our product in the cloud data
protection space. A clear understanding of customer needs in a cloud environment, excellent
troubleshooting skills, and the ability to drive problems to resolution are required.
Responsibilities Include:
Review proposed feature requirements
Create test plans and test cases
Analyze performance; diagnose and troubleshoot issues
Enter and track defects
Interact with customers, partners, and development teams
Research customer issues and product initiatives
Provide input for service documentation
Required Skills:
Bachelor's degree in Computer Science, Information Systems or related discipline
3+ years' experience, including Software as a Service and/or DevOps engineering experience
Experience with AWS services like VPC, EC2, RDS, SES, ECS, Lambda, S3, ELB
Experience with technologies such as REST, Angular, Messaging, Databases, etc.
Strong troubleshooting skills and issue isolation skills
Possess excellent communication skills (written and verbal English)
Must be able to work as an individual contributor within a team
Ability to think outside the box
Experience in configuring infrastructure
Knowledge of CI / CD
Desirable skills:
Programming skills in scripting languages (e.g., python, bash)
Knowledge of Linux administration
Knowledge of testing tools/frameworks: TestNG, Selenium, etc
Knowledge of Identity and Security
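The posting names TestNG and Selenium; as a rough illustration of the test-case structure those frameworks encourage (fixture, action, assertion), here is a minimal sketch using Python's built-in unittest. The `restore_snapshot` helper and the snapshot data are hypothetical stand-ins for a real data-protection API:

```python
import unittest

def restore_snapshot(snapshots, snapshot_id):
    """Hypothetical helper standing in for a real backup/restore API."""
    return snapshots.get(snapshot_id)

class RestoreSnapshotTest(unittest.TestCase):
    # Fixture: a fake snapshot store, so the test needs no live service.
    def setUp(self):
        self.snapshots = {"snap-1": {"size_gb": 20, "status": "completed"}}

    def test_known_snapshot_is_returned(self):
        result = restore_snapshot(self.snapshots, "snap-1")
        self.assertEqual(result["status"], "completed")

    def test_unknown_snapshot_returns_none(self):
        self.assertIsNone(restore_snapshot(self.snapshots, "snap-404"))

# Run the suite programmatically (the rough equivalent of a TestNG runner).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RestoreSnapshotTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same setup/act/assert shape carries over directly to TestNG test classes or Selenium page-object tests.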

Managing cloud-based serverless infrastructure on AWS, GCP(firebase) with IaC
(Terraform, CloudFormation etc.,)
Deploying and maintaining products, services, and network components with a focus
on security, reliability, and zero downtime
Automating and streamlining existing processes to aid the development team
Working with the development team to create ephemeral environments, simplifying
the development lifecycle
Driving forward our blockchain infrastructure by creating and managing validators for
a wide variety of new and existing blockchains
Requirements:
1-3+ years in an SRE / DevOps / DevSecOps or Infrastructure Engineering role
Strong working knowledge of Amazon Web Services (AWS) or GCP or similar cloud
ecosystem
Experience working with declarative Infrastructure-as-Code frameworks (Terraform,
CloudFormation)
Experience with containerization technologies and tools (Docker, Kubernetes), CI/CD
pipelines and Linux/Unix administration
Bonus points if you know about crypto, staking, DeFi, proof-of-stake,
validators, and delegations
Benefits:
Competitive CTC on par with the market, along with ESOPs/Tokens
As a DevOps Engineer, you’ll play a key role in managing our cloud infrastructure, automating deployments, and ensuring high availability across our global server network. You’ll work closely with our technical team to optimize performance and scalability.
Responsibilities
✅ Design, implement, and manage cloud infrastructure (primarily Azure)
✅ Automate deployments using CI/CD pipelines (GitHub Actions, Jenkins, or equivalent)
✅ Monitor and optimize server performance & uptime (100% uptime goal)
✅ Work with cPanel-based hosting environments and ensure seamless operation
✅ Implement security best practices & compliance measures
✅ Troubleshoot system issues, scale infrastructure, and enhance reliability
Requirements
🔹 3-7 years of DevOps experience in cloud environments (Azure preferred)
🔹 Hands-on expertise in CI/CD tools (GitHub Actions, Jenkins, etc.)
🔹 Proficiency in Terraform, Ansible, Docker, Kubernetes
🔹 Strong knowledge of Linux system administration & networking
🔹 Experience with monitoring tools (Prometheus, Grafana, ELK, etc.)
🔹 Security-first mindset & automation-driven approach
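The "100% uptime goal" above is, in SRE practice, usually restated as an availability SLO with an error budget, since a literal 100% target leaves zero room for deploys or maintenance. A minimal sketch of the arithmetic:

```python
def error_budget_minutes(slo_percent: float, period_days: int = 30) -> float:
    """Downtime allowed per period while still meeting the availability SLO."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - slo_percent / 100)

# A literal 100% SLO leaves zero budget; common targets look like this:
for slo in (99.0, 99.9, 99.99):
    print(f"{slo}% over 30 days -> {error_budget_minutes(slo):.1f} min of downtime")
```

For example, a 99.9% SLO over a 30-day month (43,200 minutes) leaves roughly 43.2 minutes of allowed downtime.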
Why Join Us?
🚀 Work at a fast-growing startup backed by Microsoft
💡 Lead high-impact DevOps projects in a cloud-native environment
🌍 Hybrid work model with flexibility in Bangalore, Delhi, or Mumbai
💰 Competitive salary ₹12-30 LPA based on experience
How to Apply?
📩 Apply now & follow us for future updates:
🔗 X (Twitter): https://x.com/CygenHost
🔗 LinkedIn: https://www.linkedin.com/company/cygen-host/
🔗 Instagram: https://www.instagram.com/cygenhost
Main tasks
- Supervision of the CI/CD process for automated builds and deployments of web services, web applications, and desktop tools in cloud and container environments
- Responsibility for the operations side of a DevOps organization, especially for development with container technology and orchestration, e.g. Kubernetes
- Installation, operation, and monitoring of web applications in cloud data centers, both for development and test purposes and for operating our own production cloud
- Implementation of installations of the solution, especially in the container context
- Introduction, maintenance, and improvement of installation solutions for development in desktop and server environments as well as in the cloud and with on-premise Kubernetes
- Maintenance of the system installation documentation and delivery of trainings
- Execution of internal software tests and support of involved teams and stakeholders
- Hands-on experience with Azure DevOps
Qualification profile
- Bachelor’s or master’s degree in communications engineering, electrical engineering, physics, or a comparable qualification
- Experience in software development
- Installation and administration of Linux and Windows systems, including networking and firewalling aspects
- Experience with build and deployment automation using tools such as Jenkins, Gradle, Argo, AnangoDB, or similar, as well as system scripting (Bash, PowerShell, etc.)
- Interest in the operation and monitoring of applications in virtualized and containerized environments, both in the cloud and on-premise
- Server environments, especially application, web, and database servers
- Knowledge of VMware/K3D/Rancher is an advantage
- Good spoken and written English
Experience: 8 to 10 years; notice period: 0 to 20 days
Job Description :
- Provision GCP resources based on the architecture design and features aligned with business objectives
- Monitor resource availability and usage metrics, and provide guidelines for cost and performance optimization
- Assist IT/business users in resolving GCP service-related issues
- Provide guidelines for cluster automation and migration approaches and techniques, including how to ingest, store, process, analyse, and explore/visualise data
- Provision GCP resources for data engineering and data science projects
- Assist with automated data ingestion, data migration, and transformation (good to have)
- Assist with deployment and troubleshooting of applications in Kubernetes
- Establish connections and credibility in addressing business needs through the design and operation of cloud-based data solutions
Key Responsibilities / Tasks:
- Building complex CI/CD pipelines for cloud-native PaaS services such as databases, messaging, storage, and compute in Google Cloud Platform
- Building deployment pipelines with GitHub CI (Actions)
- Writing Terraform code to deploy infrastructure as code
- Working with deployment and troubleshooting of Docker, GKE, OpenShift, and Cloud Run
- Working with Cloud Build, Cloud Composer, and Dataflow
- Configuring software to be monitored by AppDynamics
- Configuring Stackdriver logging and monitoring in GCP
- Working with Splunk, Kibana, Prometheus, and Grafana to set up dashboards
Your skills, experience, and qualification:
- Total experience of 5+ years in DevOps, with at least 4 years of experience in Google Cloud and GitHub CI.
- Should have strong experience in microservices/APIs.
- Should have strong experience in DevOps tools such as GitHub CI, TeamCity, Jenkins, and Helm.
- Should know Application deployment and testing strategies in Google cloud platform.
- Defining and setting development, test, release, update, and support processes for DevOps operation
- Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD pipeline)
- Excellent understanding of Java
- Knowledge on Kafka, ZooKeeper, Hazelcast, Pub/Sub is nice to have.
- Understanding of cloud networking, security such as software defined networking/firewalls, virtual networks and load balancers.
- Understanding of cloud identity and access
- Understanding of the compute runtime and the differences between native compute, virtual and containers
- Configuration and management of databases such as Oracle, Cloud SQL, and Cloud Spanner.
- Excellent troubleshooting skills
- Working knowledge of various tools, open-source technologies
- Awareness of critical concepts of Agile principles
- Certification in Google professional Cloud DevOps Engineer is desirable.
- Experience with Agile/SCRUM environment.
- Familiar with Agile Team management tools (JIRA, Confluence)
- Understand and promote Agile values: FROCC (Focus, Respect, Openness, Commitment, Courage)
- Good communication skills
- Pro-active team player
- Comfortable working in multi-disciplinary, self-organized teams
- Professional knowledge of English
- Differentiators: knowledge/experience about
Kutumb is the first and largest communities platform for Bharat. We are growing at an exponential trajectory. More than 1 Crore users use Kutumb to connect with their community. We are backed by world-class VCs and angel investors. We are growing and looking for exceptional Infrastructure Engineers to join our Engineering team.
More on this here - https://kutumbapp.com/why-join-us.html
We’re excited if you have:
- Recent experience designing and building unified observability platforms that enable companies to use the sometimes-overwhelming amount of available data (metrics, logs, and traces) to determine quickly if their application or service is operating as desired
- Expertise in deploying and using open-source observability tools in large-scale environments, including Prometheus, Grafana, ELK (ElasticSearch + Logstash + Kibana), Jaeger, Kiali, and/or Loki
- Familiarity with open standards like OpenTelemetry, OpenTracing, and OpenMetrics
- Familiarity with Kubernetes and Istio as the architecture on which the observability platform runs, and how they integrate and scale. Additionally, the ability to contribute improvements back to the joint platform for the benefit of all teams
- Demonstrated customer engagement and collaboration skills to curate custom dashboards and views, and identify and deploy new tools, to meet their requirements
- The drive and self-motivation to understand the intricate details of a complex infrastructure environment
- Using CICD tools to automatically perform canary analysis and roll out changes after passing automated gates (think Argo & keptn)
- Hands-on experience working with AWS
- Bonus points for knowledge of ETL pipelines and Big data architecture
- Great problem-solving skills & takes pride in your work
- Enjoys building scalable and resilient systems, with a focus on systems that are robust by design and suitably monitored
- Abstracting all of the above into as simple of an interface as possible (like Knative) so developers don't need to know about it unless they choose to open the escape hatch
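The automated canary gates mentioned above (which tools like Argo Rollouts and Keptn evaluate against live metrics) reduce to a simple decision rule: promote only if the canary's error rate is not meaningfully worse than the baseline's. A minimal sketch, with a made-up absolute tolerance:

```python
def canary_gate(baseline_errors: int, baseline_total: int,
                canary_errors: int, canary_total: int,
                tolerance: float = 0.01) -> str:
    """Promote the canary only if its error rate does not exceed the
    baseline's by more than `tolerance` (an absolute margin)."""
    baseline_rate = baseline_errors / baseline_total
    canary_rate = canary_errors / canary_total
    return "promote" if canary_rate <= baseline_rate + tolerance else "rollback"

print(canary_gate(5, 1000, 6, 1000))   # similar error rates -> promote
print(canary_gate(5, 1000, 40, 1000))  # canary clearly worse -> rollback
```

Real canary analysis typically compares many metrics (latency percentiles, saturation) and uses statistical tests rather than a fixed margin, but the gate-then-promote structure is the same.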
What you’ll be doing:
- Design and build automation around the chosen tools to make onboarding new services easy for developers (dashboards, alerts, traces, etc)
- Demonstrate great communication skills in working with technical and non-technical audiences
- Contribute new open-source tools and/or improvements to existing open-source tools back to the CNCF ecosystem
Tools we use:
Kops, Argo, Prometheus/ Loki/ Grafana, Kubernetes, AWS, MySQL/ PostgreSQL, Apache Druid, Cassandra, Fluentd, Redis, OpenVPN, MongoDB, ELK
What we offer:
- High pace of learning
- Opportunity to build the product from scratch
- High autonomy and ownership
- A great and ambitious team to work with
- Opportunity to work on something that really matters
- Top of the class market salary and meaningful ESOP ownership
CI/CD tools (Jenkins/Bamboo/TeamCity/CircleCI), DevSecOps pipelines, cloud services (AWS/Azure/GCP), Ansible, Terraform, Docker, Helm, CloudFormation templates, web server deployment & configuration, database (SQL/NoSQL) deployment & configuration, Git, Artifactory, monitoring tools (Nagios, Grafana, Prometheus, etc.), application logs (ELK/EFK, Splunk, etc.), API gateways, security tools, Vault.
We are hiring a #DevOps Engineer for a reputed #MNC
Job Description:
Total experience: 6+ years
Must have:
Minimum 3-4 years of hands-on experience in #Kubernetes and #Docker
Proficiency in #AWS Cloud
Kubernetes administrator certification is good to have
Job Responsibilities:
Responsible for managing Kubernetes cluster
Deploying infrastructure for the project
Build #CICD pipelines
Looking for #Immediate joiners only
Location: Pune
Salary: As per market standards
Mode: #Work from office
Implementing various development, testing, automation tools, and IT infrastructure
Planning the team structure and activities, and involvement in project management
Managing stakeholders and external interfaces
Setting up tools and required infrastructure
Defining and setting development, test, release, update, and support processes for DevOps operation
Have the technical skill to review, verify, and validate the software code developed in the project.
Troubleshooting techniques and fixing the code bugs
Monitoring processes throughout the entire lifecycle for adherence, and updating or creating new processes for improvement and to minimize waste
Encouraging and building automated processes wherever possible
Identifying and deploying cybersecurity measures by continuously performing vulnerability assessment and risk management
Incident management and root cause analysis
Coordination and communication within the team and with customers
Selecting and deploying appropriate CI/CD tools
Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD pipeline)
Mentoring and guiding the team members
Monitoring and measuring customer experience and KPIs
Managing periodic reporting on the progress to the management and the customer

We are looking for people with programming skills in Python, SQL, and cloud computing. Candidates should have experience in at least one of the major cloud computing platforms (AWS/Azure/GCP), professional experience in handling applications and databases in the cloud using VMs and Docker images, and the ability to design and develop applications for the cloud.
You will be responsible for
- Leading the DevOps strategy and development of SaaS product deployments
- Leading and mentoring other computer programmers.
- Evaluating student work and providing guidance in the online courses in programming and cloud computing.
Desired experience/skills
Qualifications: Graduate degree in Computer Science or related field, or equivalent experience.
Skills:
- Strong programming skills in Python and SQL
- Cloud computing
Experience:
2+ years of programming experience including Python, SQL, and Cloud Computing. Familiarity with command line working environment.
Note: A strong programming background in any language, together with cloud computing platform experience, is required. We are flexible about the degree of familiarity needed for the specific environments (Python, SQL). If you have extensive experience in one of the cloud computing platforms and less in the others, you should still consider applying.
Soft Skills:
- Good interpersonal, written, and verbal communication skills, including the ability to explain concepts to others.
- A strong understanding of algorithms and data structures, and their performance characteristics.
- Awareness of and sensitivity to the educational goals of a multicultural population would also be desirable.
- Detail oriented and well organized.
Opening for a Java Developer with DevOps experience
Experience required: 5 to 10 years
Essential Required Skills:
Familiarity with Version Control such as GitHub, BitBucket
- Java programming (Liferay, Alfresco experience a plus)
- AWS
- Ops (Ansible, Apache, Python, Terraform)
- Effective communication skills
- An analytical bent of mind and problem-solving aptitude
- Good time management skills
- Curiosity for learning
- Patience
Roles & Responsibilities:
- Candidates with good hands-on exposure to AWS, cloud, DevOps, Ansible, Docker, and Jenkins.
- Strong proficiency in Linux, Open Source, Web based and Cloud based environments (ability to use open source technologies and tools)
- Strong scripting and automation (bash, Perl, common Linux utils), strong grasp of automation tools a plus.
- Strong debugging skills (OS, scripting, Web based technologies), SQL, Java and Database concepts are a plus
- Apache, nginx, Git, SVN, GNU tools
- Must have exposure to grep, awk, sed, Git, and SVN
- Scripting (bash, python)
- API related skills (REST, and any other like google, aws, atlassian)
- Web based technology
- Strong Unix Skills
- Java programming and coding (Spring Boot, microservices; Liferay, Alfresco experience a plus)
- Proficient in AWS
- Ops (Ansible, Apache, Python, Terraform)
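Since the list above leans on grep/awk/sed-style text processing alongside Python scripting, here is a small sketch showing the same three operations (filter, field extraction, substitution) in Python. The log lines and the `db=` field are invented for illustration:

```python
import re

LOG = """\
2024-05-01 10:00:01 INFO  starting worker
2024-05-01 10:00:02 ERROR connection refused db=orders
2024-05-01 10:00:03 ERROR connection refused db=users
2024-05-01 10:00:04 INFO  retry succeeded
"""

# grep-style: keep only the ERROR lines
errors = [line for line in LOG.splitlines() if "ERROR" in line]

# awk-style: extract a field (the db= value) from each matching line
dbs = [m.group(1) for line in errors
       for m in [re.search(r"db=(\w+)", line)] if m]

# sed-style: substitute in place to redact the field
redacted = re.sub(r"db=\w+", "db=<redacted>", LOG)

print(len(errors), dbs)
```

The shell one-liners (`grep ERROR`, `awk`/`grep -o 'db=[a-z]*'`, `sed 's/db=[a-z]*/db=<redacted>/'`) do the same jobs; the Python form is easier to extend once the logic grows beyond one pipe.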
Benefits
- Cash Rewards & Recognition on Monthly Basis
- Work-Life Balance (Flexible Working Hours)
- Five-Day Work Week

