50+ Terraform Jobs in India

Job Specification:
- Job Location - Noida
- Experience - 2-5 Years
- Qualification - B.Tech, BE, MCA (Technical background required)
- Working Days - 5
- Job nature - Permanent
- Role - IT Cloud Engineer
- Proficient in Linux.
- Hands-on experience with AWS or Google Cloud.
- Knowledge of container technologies like Docker.
- Expertise in scripting languages (shell or Python).
- Working knowledge of the LAMP/LEMP stack, networking, and version control systems like GitLab or GitHub.
Job Description:
The incumbent would be responsible for:
- Deployment of various infrastructures on Cloud platforms like AWS, GCP, Azure, OVH etc.
- Server monitoring, analysis and troubleshooting.
- Deploying multi-tier architectures using microservices.
- Integration of Container technologies like Docker, Kubernetes etc as per application requirement.
- Automating workflow with python or shell scripting.
- CI and CD integration for application lifecycle management.
- Hosting and managing websites on Linux machines.
- Frontend, backend and database optimization.
- Protecting operations by keeping information confidential.
- Providing information by collecting, analyzing, and summarizing development and service issues.
- Preparing and installing solutions by determining and designing system specifications, standards, and programming.
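The server-monitoring and workflow-automation duties above often begin with small scripted checks. Below is a minimal Python sketch of a disk-usage alert check; the threshold and function names are illustrative, not part of this posting:

```python
import shutil

def disk_usage_percent(path="/"):
    """Return the used-disk percentage for the filesystem containing `path`."""
    total, used, _free = shutil.disk_usage(path)
    return round(used / total * 100, 1)

def needs_alert(usage_percent, threshold=85.0):
    """True when usage crosses the (hypothetical) alerting threshold."""
    return usage_percent >= threshold
```

In practice a cron job or monitoring agent would call `needs_alert(disk_usage_percent())` and page on-call if it returns True.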
Are you looking to explore what is possible in a collaborative and innovative work environment? Is your goal to work with a team of talented professionals who are keenly focused on solving complex business problems and supporting product innovation with technology?
If so, you might be our next Senior DevOps Engineer, where you will be involved in building out systems for our rapidly expanding team, enabling the whole group to operate more effectively and iterate at top speed in an open, collaborative environment.
Systems management and automation are the name of the game here – in development, testing, staging, and production. If you are passionate about building innovative and complex software, are comfortable in an “all hands on deck” environment, and can thrive in an Insurtech culture, we want to meet you!
What We’re Looking For
- You will collaborate with our development team to support ongoing projects, manage software releases, and ensure smooth updates to QA and production environments. This includes handling configuration updates and meeting all release requirements.
- You will work closely with your team members to enhance the company’s engineering tools, systems, procedures, and data security practices.
- Provide technical guidance and educate team members and coworkers on development and operations.
- Monitor metrics and develop ways to improve.
- Conduct systems tests for security, performance, and availability.
What You’ll Be Doing:
- You have a working knowledge of technologies like:
- Docker, Kubernetes
- Jenkins (or Bamboo, TeamCity, Travis CI, BuildMaster)
- Ansible, Terraform, Pulumi
- Python
- You have experience with GitHub Actions, Version Control, CI/CD/CT, shell scripting, and database change management
- You have working experience with Microsoft Azure, Amazon AWS, Google Cloud, or other cloud providers
- You have experience with cloud security management
- You can configure assigned applications and troubleshoot most configuration issues without assistance
- You can write accurate, concise, and formatted documentation that can be reused and read by others
- You know scripting tools like bash or PowerShell
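To give a flavor of the scripting this role expects, here is a hedged sketch of a retry-with-backoff helper of the kind used in deployment and health-check automation; the function name and defaults are hypothetical:

```python
import time

def retry_with_backoff(check, attempts=4, base_delay=0.1):
    """Call `check()` until it returns truthy, doubling the delay
    between attempts. Returns True on success, False if exhausted."""
    delay = base_delay
    for _ in range(attempts):
        if check():
            return True
        time.sleep(delay)
        delay *= 2  # exponential backoff between probes
    return False
```

The same pattern shows up when waiting for a service to become healthy after a release before routing traffic to it.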
Profile: Sr. DevOps Engineer
Location: Gurugram
Experience: 5+ Years
Company: Watsoo
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 5+ years of proven hands-on DevOps experience.
- Strong experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
- Expertise in containerization & orchestration (Docker, Kubernetes, Helm).
- Hands-on experience with cloud platforms (AWS, Azure, or GCP).
- Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
- Experience with monitoring and logging solutions (Prometheus, Grafana, ELK, CloudWatch, etc.).
- Proficiency in scripting languages (Python, Bash, or Shell).
- Knowledge of networking, security, and system administration.
- Strong problem-solving skills and ability to work in fast-paced environments.
- Troubleshoot production issues, perform root cause analysis, and implement preventive measures.
- Advocate DevOps best practices, automation, and continuous improvement
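For the monitoring and logging bullet (Prometheus, Grafana, ELK), a minimal sketch of reading Prometheus-style text metrics in Python; it handles only the unlabeled subset of the exposition format and is an illustration, not a production parser:

```python
def parse_metrics(text):
    """Parse a minimal subset of the Prometheus text exposition format
    into {metric_name: float}. Comments are skipped; labels are not handled."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # blank line or HELP/TYPE comment
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics
```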

About Us:
MyOperator and Heyo are India’s leading conversational platforms, empowering 40,000+ businesses with Call and WhatsApp-based engagement. We’re a product-led SaaS company scaling rapidly, and we’re looking for a skilled Software Developer to help build the next generation of scalable backend systems.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI
- Design and maintain a scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
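The "clean, tested, and maintainable code with pytest" responsibility can be as simple as pairing each helper with pytest-style tests. The `slugify` helper below is a hypothetical example, not part of the MyOperator stack:

```python
import re

def slugify(title):
    """Lowercase the title and collapse runs of non-alphanumerics
    into single hyphens (a common URL-slug helper)."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# pytest discovers and runs functions named test_*
def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_collapses_runs():
    assert slugify("  A --- B  ") == "a-b"
```

Keeping tests next to small pure functions like this is what makes refactoring a microservice codebase safe.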
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT and Gemini)
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one
Must have skillsets:
- Terraform HCL module development.
- AzureRM provider & Azure infrastructure services.
- IaC modernization, handling deprecated arguments, upgrading provider versions.
- Git, CI/CD workflows.
Good to have skillsets:
- Jsonnet or similar templating frameworks.
- Terragrunt.
- Azure security and governance best practices.
- Experience in consulting/migration projects.
Must-have skills (POD member):
- 5+ years of experience with Terraform.
- Hands-on module development experience.
- 3+ years of experience with Azure.
- Familiarity with Jsonnet, or the ability to learn it quickly.
- Strong testing and validation mindset for IaC.
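On the Terraform/Jsonnet templating side, it may help to know that Terraform also accepts JSON syntax (`.tf.json`) alongside HCL, so configuration can be generated and unit-tested from Python. The resource below is a hypothetical sketch of that idea, not a working module:

```python
import json

def resource_group_tf_json(name, location="eastus"):
    """Emit a Terraform JSON-syntax (.tf.json) document declaring a
    hypothetical azurerm_resource_group. Because it is plain JSON,
    the generator can be validated with ordinary unit tests."""
    doc = {
        "resource": {
            "azurerm_resource_group": {
                name: {"name": name, "location": location}
            }
        }
    }
    return json.dumps(doc, indent=2)
```

A testing-minded IaC workflow would assert on the parsed output of such generators before ever running `terraform plan`.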
Job Summary:
We are seeking passionate Developers with experience in Microservices architecture to join our team in Noida. The ideal candidate should have hands-on expertise in Java, Spring Boot, Hibernate, and front-end technologies like Angular, JavaScript, and Bootstrap. You will be responsible for developing enterprise-grade software applications that enhance patient safety worldwide.
Key Responsibilities:
- Develop and maintain applications using Microservices architecture.
- Work with modern technologies like Java, Spring Boot, Hibernate, Angular, Kafka, Redis, and Hazelcast.
- Utilize AWS, Git, Nginx, Tomcat, Oracle, Jira, Confluence, and Jenkins for development and deployment.
- Collaborate with cross-functional teams to design and build scalable enterprise applications.
- Develop intuitive UI/UX components using Bootstrap, jQuery, and JavaScript.
- Ensure high-performance, scalable, and secure applications for Fortune 100 pharmaceutical companies.
- Participate in Agile development, managing changing priorities effectively.
- Conduct code reviews, troubleshoot issues, and optimize application performance.
Required Skills & Qualifications:
- 5+ years of hands-on experience in Java 7/8, Spring Boot, and Hibernate.
- Strong knowledge of OOP concepts and Design Patterns.
- Experience working with relational databases (Oracle/MySQL).
- Proficiency in Bootstrap, JavaScript, jQuery, HTML, and Angular.
- Hands-on experience in Microservices-based application development.
- Strong problem-solving, debugging, and analytical skills.
- Excellent communication and collaboration abilities.
- Ability to adapt to new technologies and manage multiple priorities.
- Experience in developing high-quality web applications.
Good to Have:
- Exposure to Kafka, Redis, and Hazelcast.
- Experience working with cloud-based solutions (AWS preferred).
- Familiarity with DevOps tools like Jenkins, Docker, and Kubernetes.

Overview:
As a global leader in software consultancy, we are hiring a Senior Consultant skilled in backend technologies (Node.js / Java) with strong experience in Temporal.io (BPM/workflow platform). This role offers an exciting opportunity to work remotely on cutting-edge, scalable enterprise solutions involving modern microservices, BPM workflows, and cloud-native technologies.
As part of our high-performance engineering team, you will contribute to designing, building, and maintaining distributed systems that drive critical client projects. The ideal candidate is hands-on with Temporal.io, backend development (Node.js or Java), and familiar with modern cloud, DevOps, and automation practices.
Key Roles and Responsibilities:
- Design, develop, and maintain backend services using Node.js (NestJS) or Java integrated with Temporal.io workflow engine.
- Model and implement business processes using BPMN workflows, including managing external task workers, Operate, and Tasklist in Temporal.
- Build and manage RESTful APIs and microservices that are scalable, secure, and maintainable.
- Collaborate closely with cross-functional teams including frontend engineers, DevOps, cloud architects, and QA.
- Implement secure authentication and authorization flows using Keycloak IAM.
- Develop and optimize database interactions with PostgreSQL in a process-driven architecture.
- Utilize Azure services such as Blob Storage, API Gateway, and AKS for cloud infrastructure.
- Automate cloud infrastructure provisioning using Terraform and manage containerized deployments with Kubernetes.
- Ensure high-quality deliverables by implementing unit and integration tests using Jest, and documenting APIs with Swagger/Postman/Insomnia.
- Participate in peer code reviews, technical discussions, and contribute to architectural decisions.
- Maintain and improve CI/CD pipelines (Azure DevOps/GitHub Actions desirable).
Technical Requirements:
Must-Have Skills:
- Temporal.io BPM Platform:
- BPMN modeling, external task workers, Operate, Tasklist (Hands-on experience mandatory)
- Backend Development:
- Node.js (with TypeScript, NestJS framework) OR Java (Strong proficiency)
- Frontend Exposure (Nice-to-Have):
- Modern React.js (v17+) with TypeScript (component-driven design)
- Cloud & Infrastructure:
- Experience with Azure Services: Blob Storage, API Gateway, AKS
- Infrastructure automation using Terraform
- Container orchestration via Kubernetes
- Database:
- Strong understanding of PostgreSQL and its role in process-driven applications
- Authentication & Authorization:
- Experience integrating Keycloak IAM for user role and token-based authorization
- Testing & API Management:
- Testing with Jest
- API documentation and testing using Swagger / Postman / Insomnia (OpenAPI)
- Version Control:
- Git and GitFlow branching strategy
Nice-to-Have / Bonus Skills:
- Blockchain integration for secure KYC/identity flows
- Building custom Camunda Connectors or writing exporter plugins
- Experience with Azure DevOps or GitHub Actions for CI/CD automation
- Authorization enforcement using identity-based access patterns
Additional Information:
- Work Mode: Remote (occasional office visits for team meetings)

Senior SRE Developer
The Site Reliability Engineer (SRE) position is a software development-oriented role, focusing heavily on coding, automation, and ensuring the stability and reliability of our global platform. The ideal candidate will primarily be a skilled software developer capable of participating in on-call rotations. The SRE team develops sophisticated telemetry and automation tools, proactively monitoring platform health and executing automated corrective actions. As guardians of the production environment, the SRE team leverages advanced telemetry to anticipate and mitigate issues, ensuring continuous platform stability.
Responsibilities:
- Develop and maintain advanced telemetry and automation tools for monitoring and managing global platform health.
- Actively participate in on-call rotations, swiftly diagnosing and resolving system issues and escalations from the customer support team (this is not a customer-facing role).
- Implement automated solutions for incident response, system optimization, and reliability improvement.
Requirements:
Software Development:
- 3+ years of professional Python development experience.
- Strong grasp of Python object-oriented programming concepts and inheritance.
- Experience developing multi-threaded Python applications.
- 2+ years of experience using Terraform, with proficiency in creating modules and submodules from scratch.
- Proficiency in, or willingness to learn, Golang.
Operating Systems:
- Experience with Linux operating systems.
- Strong understanding of monitoring critical system health parameters.
Cloud:
- 3+ years of hands-on experience with AWS services including EC2, Lambda, CloudWatch, EKS, ELB, RDS, DynamoDB, and SQS.
- AWS Associate-level certification or higher preferred.
Networking:
- Basic understanding of network protocols: TCP/IP, DNS, HTTP, and load-balancing concepts.
Additional Qualifications (Preferred):
- Familiarity with trading systems and low-latency environments is advantageous but not required.
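The multi-threaded Python requirement in this posting might look like the following minimal fan-out sketch. The probe function is injected, so nothing here touches a real network; names are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def check_endpoints(endpoints, probe):
    """Probe endpoints concurrently and return {endpoint: bool}.
    `probe` is any callable taking one endpoint and returning a
    health verdict; injecting it keeps the fan-out logic testable."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = pool.map(probe, endpoints)  # preserves input order
    return dict(zip(endpoints, results))
```

In an SRE telemetry tool, `probe` would wrap an HTTP health check with a timeout, and the resulting map would feed alerting.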

Role: GenAI Full Stack Engineer
Fulltime
Work Location: Remote
Job Description:
• Proficiency in Python and familiarity with AI/GenAI frameworks; experience with data manipulation libraries like Pandas and NumPy is crucial.
• Specific expertise in implementing and managing large language models (LLMs) is a must.
• FastAPI experience for API development.
• A solid grasp of software engineering principles, including version control (Git), continuous integration and continuous deployment (CI/CD) practices, and automated testing, is required. Experience in MLOps, ML engineering, and Data Science, with a proven track record of developing and maintaining AI solutions, is essential.
• We also need proficiency in DevOps tools such as Docker, Kubernetes, Jenkins, and Terraform, along with advanced CI/CD practices.
Job Position: DevOps Engineer
Experience Range: 2 - 3 years
Type: Full Time
Location: India (Remote)
Desired Skills: DevOps, Kubernetes (EKS), Docker, Kafka, HAProxy, MQTT brokers, Redis, PostgreSQL, TimescaleDB, Shell Scripting, Terraform, AWS (API Gateway, ALB, ECS, EKS, SNS, SES, CloudWatch Logs), Prometheus, Grafana, Jenkins, GitHub
Your key responsibilities:
- Collaborate with developers to design and implement scalable, secure, and reliable infrastructure.
- Manage and automate CI/CD pipelines (Jenkins - Groovy Scripts, GitHub Actions), ensuring smooth deployments.
- Containerise applications using Docker and manage workloads on Kubernetes (EKS).
- Work with AWS services (ECS, EKS, API Gateway, SNS, SES, CloudWatch Logs) to provision and maintain infrastructure.
- Implement infrastructure as code using Terraform.
- Set up and manage monitoring and alerting using Prometheus and Grafana.
- Manage and optimize Kafka, Redis, PostgreSQL, TimescaleDB deployments.
- Troubleshoot issues in distributed systems and ensure high availability using HAProxy, load balancing, and failover strategies.
- Drive automation initiatives across development, testing, and production environments.
What you’ll bring
Required:
- 2–3 years of hands-on DevOps experience.
- Strong proficiency in Shell Scripting.
- Practical experience with Docker and Kubernetes (EKS).
- Knowledge of Terraform or other IaC tools.
- Experience with Jenkins pipelines (Groovy scripting preferred).
- Exposure to AWS cloud services (ECS, EKS, API Gateway, SNS, SES, CloudWatch).
- Understanding of microservices deployment and orchestration.
- Familiarity with monitoring/observability tools (Prometheus, Grafana).
- Good communication and collaboration skills.
Nice to have:
- Experience with Kafka, HAProxy, MQTT brokers.
- Knowledge of Redis, PostgreSQL, TimescaleDB.
- Exposure to DevOps best practices in agile environments.
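For the Kafka exposure listed as nice-to-have, consumer lag is a typical quantity a DevOps engineer monitors. A toy calculation follows, with offsets supplied as plain dicts rather than fetched from a real cluster:

```python
def total_consumer_lag(end_offsets, committed):
    """Total Kafka-style consumer lag across partitions: the sum of
    (log-end offset - committed offset), floored at zero per partition.
    Partitions with no committed offset count as fully lagging."""
    return sum(
        max(end_offsets[p] - committed.get(p, 0), 0)
        for p in end_offsets
    )
```

A Prometheus exporter would publish this number per consumer group so Grafana can alert when consumers fall behind.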
Are you an experienced Infrastructure/DevOps Engineer looking for an exciting remote opportunity to design, automate, and scale modern cloud environments? We’re seeking a skilled engineer with strong expertise in Terraform and DevOps practices to join our growing team. If you’re passionate about automation, cloud infrastructure, and CI/CD pipelines, we’d love to hear from you!
Key Responsibilities:
- Design, implement, and manage cloud infrastructure using Terraform (IaC).
- Build and maintain CI/CD pipelines for seamless application deployment.
- Ensure scalability, reliability, and security of cloud-based systems.
- Collaborate with developers and QA to optimize environments and workflows.
- Automate infrastructure provisioning, monitoring, and scaling.
- Troubleshoot infrastructure and deployment issues quickly and effectively.
- Stay up to date with emerging DevOps tools, practices, and cloud technologies.
Requirements:
- Minimum 5+ years of professional experience in DevOps or Infrastructure Engineering.
- Strong expertise in Terraform and Infrastructure as Code (IaC).
- Hands-on experience with AWS / Azure / GCP (at least one cloud platform).
- Proficiency in CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD, etc.).
- Experience with Docker, Kubernetes, and container orchestration.
- Strong knowledge of Linux systems, networking, and security best practices.
- Familiarity with monitoring & logging tools (Prometheus, Grafana, ELK, etc.).
- Scripting experience (Bash, Python, or similar).
- Excellent problem-solving skills and ability to work in remote teams.
Perks and Benefits:
- Competitive salary with remote work flexibility.
- Opportunity to work with global clients on modern infrastructure.
- Growth and learning opportunities in cutting-edge DevOps practices.
- Collaborative team culture that values automation and innovation.
About CoverSelf: We are an InsurTech start-up based out of Bangalore, with a focus on Healthcare. CoverSelf empowers healthcare insurance companies with a truly NEXT-GEN cloud-native, holistic & customizable platform preventing and adapting to the ever-evolving claims & payment inaccuracies. Reduce complexity and administrative costs with a unified healthcare dedicated platform.
Overview about the role: We are looking for a Junior DevOps Engineer who will work on the bleeding edge of technologies. The role primarily involves maintaining, monitoring, securing, and automating our cloud infrastructure and applications. If you have a solid background in Kubernetes and Terraform, we’d love to speak with you.
Responsibilities:
➔ Implement and maintain application infrastructure, databases, and networks.
➔ Develop and implement automation scripts using Terraform for infrastructure deployment.
➔ Implement and maintain containerized applications using Docker and Kubernetes.
➔ Work with other DevOps Engineers in the team on deploying applications, provisioning infrastructure, Automation, routine audits, upgrading systems, capacity planning, and benchmarking.
➔ Work closely with our Engineering Team to ensure seamless integration of new tools and perform day-to-day activities which can help developers deploy and release their code seamlessly.
➔ Respond to service outages/incidents and ensure system uptime requirements are met.
➔ Ensure the security and compliance of our applications and infrastructure.
Requirements:
➔ Must have a B.Tech degree.
➔ Must have at least 2 years of experience as a DevOps engineer.
➔ Operating Systems: Good understanding of any UNIX/Linux platform; Windows is good to have.
➔ Source Code Management: Expertise in Git for version control and managing branching strategies.
➔ Networking: Basic understanding of network fundamentals like DNS, ports, routes, NAT gateways, and VPNs.
➔ Cloud Platforms: Minimum 2 years of experience working with AWS; understanding of other cloud platforms, such as Microsoft Azure and Google Cloud Platform, is good to have.
➔ Infrastructure Automation: Experience with Terraform to automate infrastructure provisioning and configuration.
➔ Container Orchestration: Must have at least 1 year of experience in managing Kubernetes clusters.
➔ Containerization: Experience in containerizing applications using Docker.
➔ CI/CD and Scripting: Experience with CI/CD concepts and tools (e.g., GitLab CI) and scripting languages like Python or Shell for automation.
➔ Monitoring and Observability: Familiarity with monitoring tools like Prometheus, Grafana, and CloudWatch, and troubleshooting using logs and metrics analysis.
➔ Security Practices: Basic understanding of security best practices in a DevOps environment and integration of security into the CI/CD pipeline (DevSecOps).
➔ Databases: Good to have knowledge of a database like MySQL, Postgres, or MongoDB.
➔ Problem Solving and Troubleshooting: Debugging and troubleshooting skills for resolving issues in development, testing, and production environments.
Work Location: Jayanagar - Bangalore.
Work Mode: Work from Office.
Benefits: Best in the Industry Compensation, Friendly & Flexible Leave Policy, Health Benefits, Certifications & Courses Reimbursements, Chance to be part of rapidly growing start-up & the next success story, and many more.
Additional Information: At CoverSelf, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion irrespective of their background, gender, race, sexual orientation, religion and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities and the business.

Scripting:
- A strong proficiency in at least one scripting language (e.g., Python, Bash, PowerShell) is required.
- Candidates must possess an in-depth ability to design, write, and implement complex automation logic, not just basic scripts.
- Proven experience in automating DevOps processes, environment provisioning, and configuration management is essential.
Cloud Platform (AWS Preferred):
- Extensive hands-on experience with Amazon Web Services (AWS) is highly preferred.
- Candidates must be able to demonstrate expert-level knowledge of core AWS services and articulate their use cases.
- Excellent debugging and problem-solving skills within the AWS ecosystem are mandatory. The ability to diagnose and resolve issues efficiently is a key requirement.
Infrastructure as Code (IaC - Terraform Preferred):
- Expert-level knowledge and practical experience with Terraform are required.
- Candidates must have a deep understanding of how to write scalable, modular, and reusable Terraform code.
Containerization and Orchestration (Kubernetes Preferred):
- Advanced, hands-on experience with Kubernetes is mandatory.
- Candidates must be proficient in solving complex, production-level issues related to deployments, networking, and cluster management.
- A solid foundational knowledge of Docker is required.
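"Complex automation logic" in configuration management usually hinges on idempotency: applying the same change twice must leave the system unchanged. A deliberately tiny sketch of that property, with invented names:

```python
def ensure_line(lines, line):
    """Idempotently ensure `line` is present in a config file's lines,
    the pattern behind tools like Ansible's lineinfile module: running
    the step a second time is a no-op."""
    if line in lines:
        return lines
    return lines + [line]
```

Real provisioning scripts extend this idea to packages, users, and cloud resources: check current state, change only what differs.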
Job Title: AWS DevOps Engineer
Experience Level: 5+ Years
Location: Bangalore, Pune, Hyderabad, Chennai and Gurgaon
Summary:
We are looking for a hands-on Platform Engineer with strong execution skills to provision and manage cloud infrastructure. The ideal candidate will have experience with Linux, AWS services, Kubernetes, and Terraform, and should be capable of troubleshooting complex issues in cloud and container environments.
Key Responsibilities:
- Provision AWS infrastructure using Terraform (IaC).
- Manage and troubleshoot Kubernetes clusters (EKS/ECS).
- Work with core AWS services: VPC, EC2, S3, RDS, Lambda, ALB, WAF, and CloudFront.
- Support CI/CD pipelines using Jenkins and GitHub.
- Collaborate with teams to resolve infrastructure and deployment issues.
- Maintain documentation of infrastructure and operational procedures.
Required Skills:
- 3+ years of hands-on experience in AWS infrastructure provisioning using Terraform.
- Strong Linux administration and troubleshooting skills.
- Experience managing Kubernetes clusters.
- Basic experience with CI/CD tools like Jenkins and GitHub.
- Good communication skills and a positive, team-oriented attitude.
Preferred:
- AWS Certification (e.g., Solutions Architect, DevOps Engineer).
- Exposure to Agile and DevOps practices.
- Experience with monitoring and logging tools.
Job Summary:
We are seeking a highly skilled and proactive DevOps Engineer with 4+ years of experience to join our dynamic team. This role requires strong technical expertise across cloud infrastructure, CI/CD pipelines, container orchestration, and infrastructure as code (IaC). The ideal candidate should also have direct client-facing experience and a proactive approach to managing both internal and external stakeholders.
Key Responsibilities:
- Collaborate with cross-functional teams and external clients to understand infrastructure requirements and implement DevOps best practices.
- Design, build, and maintain scalable cloud infrastructure on AWS (EC2, S3, RDS, ECS, etc.).
- Develop and manage infrastructure using Terraform or CloudFormation.
- Manage and orchestrate containers using Docker and Kubernetes (EKS).
- Implement and maintain CI/CD pipelines using Jenkins or GitHub Actions.
- Write robust automation scripts using Python and Shell scripting.
- Monitor system performance and availability, and ensure high uptime and reliability.
- Execute and optimize SQL queries for MSSQL and PostgreSQL databases.
- Maintain clear documentation and provide technical support to stakeholders and clients.
Required Skills:
- Minimum 4+ years of experience in a DevOps or related role.
- Proven experience in client-facing engagements and communication.
- Strong knowledge of AWS services – EC2, S3, RDS, ECS, etc.
- Proficiency in Infrastructure as Code using Terraform or CloudFormation.
- Hands-on experience with Docker and Kubernetes (EKS).
- Strong experience in setting up and maintaining CI/CD pipelines with Jenkins or GitHub.
- Solid understanding of SQL and working experience with MSSQL and PostgreSQL.
- Proficient in Python and Shell scripting.
Preferred Qualifications:
- AWS Certifications (e.g., AWS Certified DevOps Engineer) are a plus.
- Experience working in Agile/Scrum environments.
- Strong problem-solving and analytical skills.
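To illustrate the "execute and optimize SQL queries" requirement without an MSSQL/PostgreSQL server, the same GROUP BY / ORDER BY pattern can be prototyped against Python's built-in SQLite; the table and column names here are invented:

```python
import sqlite3

def top_clients(rows, limit=2):
    """Load (client, amount) rows into an in-memory SQLite table and
    return the highest-spending clients, a stand-in for the kind of
    reporting query this role involves."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE invoices (client TEXT, amount REAL)")
    conn.executemany("INSERT INTO invoices VALUES (?, ?)", rows)
    cur = conn.execute(
        "SELECT client, SUM(amount) AS total FROM invoices "
        "GROUP BY client ORDER BY total DESC LIMIT ?", (limit,)
    )
    return cur.fetchall()
```

Against a production database the optimization work would then be reading the query plan and indexing the grouped column.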

- Looking for an engineer to manage IaC modules.
- Terraform experience is a must.
- Terraform module development as part of a central platform team.
- Azure/GCP experience is a must.
- C#/Python/Java coding is good to have.

Quantalent AI is hiring for a fast-growing fintech firm.
Job Title: DevOps - 3
Roles and Responsibilities:
- Develop deep understanding of the end-to-end configurations, dependencies, customer requirements, and overall characteristics of the production services as the accountable owner for overall service operations
- Implement best practices, challenge the status quo, and keep tabs on industry and technical trends, changes, and developments to ensure the team is always striving for best-in-class work
- Lead incident response efforts, working closely with cross-functional teams to resolve issues quickly and minimize downtime. Implement effective incident management processes and post-incident reviews
- Participate in on-call rotation responsibilities, ensuring timely identification and resolution of infrastructure issues
- Possess expertise in designing and implementing capacity plans, accurately estimating costs and efforts for infrastructure needs.
- Systems and Infrastructure maintenance and ownership for production environments, with a continued focus on improving efficiencies, availability, and supportability through automation and well defined runbooks
- Provide mentorship and guidance to a team of DevOps engineers, fostering a collaborative and high-performing work environment. Mentor team members in best practices, technologies, and methodologies.
- Design for reliability - architect and implement solutions that keep the infrastructure running with always-on availability and ensure a high uptime SLA
- Manage individual project priorities, deadlines, and deliverables related to your technical expertise and assigned domains
- Collaborate with Product & Information Security teams to ensure the integrity and security of Infrastructure and applications. Implement security best practices and compliance standards.
Must Haves
- 5-8 years of experience as a DevOps / SRE / Platform Engineer.
- Strong expertise in automating Infrastructure provisioning and configuration using tools like Ansible, Packer, Terraform, Docker, Helm Charts etc.
- Strong skills in network services such as DNS, TLS/SSL, HTTP, etc
- Expertise in managing large-scale cloud infrastructure (preferably AWS and Oracle)
- Expertise in managing production grade Kubernetes clusters
- Experience in scripting using programming languages like Bash, Python, etc.
- Expertise in skill sets for centralized logging systems, metrics, and tooling frameworks such as ELK, Prometheus/VictoriaMetrics, and Grafana etc.
- Experience in Managing and building High scale API Gateway, Service Mesh, etc
- Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive
- Have a working knowledge of a backend programming language
- Deep knowledge of and experience with Unix/Linux operating system internals (e.g., filesystems, user management, etc.)
- A working knowledge and deep understanding of cloud security concepts
- Proven track record of driving results and delivering high-quality solutions in a fast-paced environment
- Demonstrated ability to communicate clearly with both technical and non-technical project stakeholders, with the ability to work effectively in a cross-functional team environment.

Key Responsibilities:
Kubernetes Management:
Deploy, configure, and maintain Kubernetes clusters on AKS, EKS, GKE, and OKE.
Troubleshoot and resolve issues related to cluster performance and availability.
Database Migration:
Plan and execute database migration strategies across multicloud environments, ensuring data integrity and minimal downtime.
Collaborate with database teams to optimize data flow and management.
Coding and Development:
Develop, test, and optimize code with a focus on enhancing algorithms and data structures for system performance.
Implement best coding practices and contribute to code reviews.
Cross-Platform Integration:
Facilitate seamless integration of services across different cloud providers to enhance interoperability.
Collaborate with development teams to ensure consistent application performance across environments.
Performance Optimization:
Monitor system performance metrics, identify bottlenecks, and implement effective solutions to optimize resource utilization.
Conduct regular performance assessments and provide recommendations for improvements.
Experience:
Minimum of 2+ years of experience in cloud computing, with a strong focus on Kubernetes management across multiple platforms.
Technical Skills:
Proficient in cloud services and infrastructure, including networking and security considerations.
Strong programming skills in languages such as Python, Go, or Java, with a solid understanding of algorithms and data structures.
Problem-Solving:
Excellent analytical and troubleshooting skills with a proactive approach to identifying and resolving issues.
Communication:
Strong verbal and written communication skills, with the ability to collaborate effectively with cross-functional teams.
Preferred Skills:
- Familiarity with CI/CD tools and practices.
- Experience with container orchestration and management tools.
- Knowledge of microservices architecture and design patterns.
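The cluster-troubleshooting responsibility above often starts with filtering pod status out of `kubectl get pods -o json` output. A minimal stdlib sketch, using an inline sample (the pod names and the `unhealthy_pods` helper are hypothetical, not part of any posting):

```python
import json

def unhealthy_pods(pod_list_json: str) -> list[str]:
    """Return names of pods whose phase is not Running or Succeeded."""
    pods = json.loads(pod_list_json)["items"]
    bad = []
    for pod in pods:
        phase = pod["status"]["phase"]
        if phase not in ("Running", "Succeeded"):
            bad.append(pod["metadata"]["name"])
    return bad

# Inline sample mimicking (trimmed) `kubectl get pods -o json` output.
sample = json.dumps({"items": [
    {"metadata": {"name": "api-7f9c"}, "status": {"phase": "Running"}},
    {"metadata": {"name": "worker-x2"}, "status": {"phase": "Failed"}},
]})
print(unhealthy_pods(sample))  # ['worker-x2']
```

In practice the same filter would be piped through `kubectl` or a Kubernetes client library; the point is the shape of the status data, not this exact helper.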

Job Summary:
We are seeking an experienced and highly motivated Senior Python Developer to join our dynamic and growing engineering team. This role is ideal for a seasoned Python expert who thrives in a fast-paced, collaborative environment and has deep experience building scalable applications, working with cloud platforms, and automating infrastructure.
Key Responsibilities:
Develop and maintain scalable backend services and APIs using Python, with a strong emphasis on clean architecture and maintainable code.
Design and implement RESTful APIs using frameworks such as Flask or FastAPI, and integrate with relational databases using ORM tools like SQLAlchemy.
Work with major cloud platforms (AWS, GCP, or Oracle Cloud Infrastructure) using Python SDKs to build and deploy cloud-native applications.
Automate system and infrastructure tasks using tools like Ansible, Chef, or other configuration management solutions.
Implement and support Infrastructure as Code (IaC) using Terraform or cloud-native templating tools to manage resources effectively.
Work across both Linux and Windows environments, ensuring compatibility and stability across platforms.
Required Qualifications:
5+ years of professional experience in Python development, with a strong portfolio of backend/API projects.
Strong expertise in Flask, SQLAlchemy, and other Python-based frameworks and libraries.
Proficient in asynchronous programming and event-driven architecture using tools such as asyncio, Celery, or similar.
Solid understanding and hands-on experience with cloud platforms – AWS, Google Cloud Platform, or Oracle Cloud Infrastructure.
Experience using Python SDKs for cloud services to automate provisioning, deployment, or data workflows.
Practical knowledge of Linux and Windows environments, including system-level scripting and debugging.
Automation experience using tools such as Ansible, Chef, or equivalent configuration management systems.
Experience implementing and maintaining CI/CD pipelines with industry-standard tools.
Familiarity with Docker and container orchestration concepts (e.g., Kubernetes is a plus).
Hands-on experience with Terraform or equivalent infrastructure-as-code tools for managing cloud environments.
Excellent problem-solving skills, attention to detail, and a proactive mindset.
Strong communication skills and the ability to collaborate with diverse technical teams.
Preferred Qualifications (Nice to Have):
Experience with other Python frameworks (FastAPI, Django)
Knowledge of container orchestration tools like Kubernetes
Familiarity with monitoring tools like Prometheus, Grafana, or Datadog
Prior experience working in an Agile/Scrum environment
Contributions to open-source projects or technical blogs
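The asynchronous-programming requirement above (asyncio, event-driven patterns) can be sketched with nothing but the standard library. The `fetch` coroutine is a hypothetical stand-in for an I/O-bound call:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Stand-in for an I/O-bound call (e.g. an HTTP request or DB query).
    await asyncio.sleep(delay)
    return f"{name}: done"

async def main() -> list[str]:
    # gather() runs all three "requests" concurrently, so the total
    # wall time is roughly that of the slowest call, not the sum.
    return await asyncio.gather(
        fetch("users", 0.02),
        fetch("orders", 0.01),
        fetch("billing", 0.03),
    )

results = asyncio.run(main())
print(results)  # ['users: done', 'orders: done', 'billing: done']
```

`asyncio.gather` preserves argument order in its result list, which is why fan-out/fan-in patterns like this stay easy to reason about.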
We are looking for an experienced Cloud & DevOps Engineer to join our growing team. The ideal candidate should have hands-on expertise in cloud platforms, automation, CI/CD, and container orchestration. You will be responsible for building scalable and secure infrastructure, optimizing deployments, and ensuring system reliability in a fast-paced environment.
Responsibilities
- Design, deploy, and manage applications on AWS / GCP.
- Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI/CD.
- Manage containerized workloads with Docker & Kubernetes.
- Implement Infrastructure as Code (IaC) using Terraform.
- Automate infrastructure and operational tasks using Python/Shell scripts.
- Set up monitoring & logging (Prometheus, Grafana, CloudWatch, ELK).
- Ensure security, scalability, and high availability of systems.
- Collaborate with development and QA teams in an Agile/DevOps environment.
Required Skills
- AWS, GCP (cloud platforms)
- Terraform (IaC)
- Docker, Kubernetes (containers & orchestration)
- Python, Bash (scripting & automation)
- CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD)
- Monitoring & Logging (Prometheus, Grafana, CloudWatch)
- Strong Linux/Unix administration
Preferred Skills (Good to Have)
- Cloud certifications (AWS, Azure, or GCP).
- Knowledge of serverless computing (AWS Lambda, Cloud Run).
- Experience with DevSecOps and cloud security practices.
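On the IaC point above: Terraform accepts a JSON variant of its configuration syntax (`*.tf.json`), so resource definitions can be generated programmatically. A minimal sketch, with a hypothetical bucket name and tag set:

```python
import json

def s3_bucket(name: str, tags: dict) -> dict:
    # Terraform's JSON syntax mirrors HCL:
    #   resource "aws_s3_bucket" "<label>" { bucket = ..., tags = ... }
    return {
        "resource": {
            "aws_s3_bucket": {
                name: {"bucket": name, "tags": tags}
            }
        }
    }

config = s3_bucket("app-logs", {"env": "staging"})
# Written out as main.tf.json, this is valid input for `terraform plan`.
print(json.dumps(config, indent=2))
```

Generating JSON like this is useful when resource lists come from another system of record; for hand-written configuration, plain HCL is the more idiomatic choice.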
Job Description
We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancer, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.
Work location: Pune/Mumbai/Bangalore
Experience: 4-7 Years
Joining: Mid of October
You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
· Architect, deploy, and maintain GCP cloud resources via Terraform or other automation tools.
· Implement Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
· Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
· Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
2. Kubernetes & Container Orchestration
· Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
· Work with Helm charts, Istio, and service meshes for microservices deployments.
· Automate scaling, rolling updates, and zero-downtime deployments.
3. Serverless & Compute Services
· Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
· Optimize containerized applications running on Cloud Run for cost efficiency and performance.
4. CI/CD & DevOps Automation
· Design, implement, and manage CI/CD pipelines using Azure DevOps.
· Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.
· Integrate security and compliance checks into the DevOps workflow (DevSecOps).
Required Skills & Qualifications:
✔ Experience: 4+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.
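The "automate scaling" responsibility above is, on GKE, usually delegated to the Horizontal Pod Autoscaler, whose documented scaling rule is desired = ceil(current × currentMetric / targetMetric). A sketch of that arithmetic (the function name is illustrative):

```python
import math

def desired_replicas(current: int, current_metric: float, target_metric: float) -> int:
    # Kubernetes HPA scaling rule:
    #   desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue)
    return math.ceil(current * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 60% target -> scale out to 6.
print(desired_replicas(4, 90, 60))  # 6
# 6 pods averaging 30% CPU against a 60% target -> scale in to 3.
print(desired_replicas(6, 30, 60))  # 3
```

Working through the formula by hand like this is a quick sanity check when an HPA scales more (or less) aggressively than expected.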
About Wissen Technology
Wissen Technology, established in 2015 and part of the Wissen Group (founded in 2000), is a specialized technology consulting company. We pride ourselves on delivering high-quality solutions for global organizations across Banking & Finance, Telecom, and Healthcare domains.
Here’s why Wissen Technology stands out:
Global Presence: Offices in US, India, UK, Australia, Mexico, and Canada.
Expert Team: Wissen Group comprises over 4000 highly skilled professionals worldwide, with Wissen Technology contributing 1400 of these experts. Our team includes graduates from prestigious institutions such as Wharton, MIT, IITs, IIMs, and NITs.
Recognitions: Great Place to Work® Certified.
Featured as a Top 20 AI/ML Vendor by CIO Insider (2020).
Impressive Growth: Achieved 400% revenue growth in 5 years without external funding.
Successful Projects: Delivered $650 million worth of projects to 20+ Fortune 500 companies.
For more details:
Website: www.wissen.com
Thought Leadership: https://www.wissen.com/articles/
LinkedIn: Wissen Technology
Company information:
At LogixHealth, we're making intelligence matter throughout healthcare. LogixHealth has over two decades of experience providing full-service coding, billing, and revenue cycle solutions for emergency departments, hospitals, and physician practices, covering millions of visits annually. LogixHealth provides ongoing coding, claims management, and the latest business intelligence analytics for clients in over 40 states.
Role overview
Knowledge and Skill Sets:
· Office 365 Administration: Expertise in managing Office 365 services, including Exchange Online, SharePoint Online, Teams, and OneDrive.
· Intune Administration: Proficiency in Microsoft Intune, including device management, policy enforcement, and application deployment.
· Experience with virtualization platforms (Citrix, Nutanix, VMware, Hyper-V, etc.).
· Exposure to IaC tools such as Terraform and Ansible.
· Data Center Operations (DCO) and NOC Support: Experience in supporting DCO and NOC operations, with the ability to troubleshoot and resolve issues in a 24/7 environment.
· PowerShell and Scripting: Ability to create and use PowerShell scripts and other tools to automate tasks within Office 365 and Intune environments.
· Automation and IaC: Knowledge of Ansible, Terraform, and DevOps frameworks.
· Security and Compliance: In-depth knowledge of security features in Office 365 and Intune, including MFA, DLP, and compliance tools.
· Backup and Recovery: Understanding of backup solutions for Office 365 data and disaster recovery planning.
· Monitoring Tools: Familiarity with monitoring tools for tracking the health and performance of Office 365 and Intune services.
· Communication: Strong communication skills for providing technical support and collaborating with IT teams.
· Documentation: Ability to create detailed documentation and training resources.
· 24/7 Availability: Commitment to providing round-the-clock support for critical Office 365 and Intune services.
What would you do here
Job Purpose:
The Collaboration tools Administrator is responsible for managing and maintaining the organization's Office 365, MS teams, Intune, SharePoint, Defender, MDM and IAM environments while supporting Data Center Operations (DCO) activities. The position requires 24/7 availability to support critical operations and respond promptly to incidents.
Role Description:
· Office 365 Administration: Manage and maintain Office 365 services, including user accounts, licenses, permissions, and configurations for Exchange Online, SharePoint Online, Teams, and OneDrive.
· Intune Administration: Oversee the administration of Microsoft Intune, including device enrollment, configuration policies, application deployment, and mobile device management (MDM) to ensure secure access to corporate resources.
· Security and Compliance: Implement and manage security measures within Office 365 and Intune, including multi-factor authentication (MFA), Data Loss Prevention (DLP), and compliance policies to protect data and meet regulatory requirements.
· Incident Response: Provide 24/7 on-call support for Office 365 and Intune-related incidents, ensuring quick resolution to minimize downtime and disruption.
· User Support: Offer technical support and troubleshooting for end-users, addressing issues related to Office 365, Intune, and other integrated services.
· Monitoring and Reporting: Continuously monitor the performance, security, and compliance of Office 365 and Intune services, generating regular reports on system health and usage.
· Automation and Scripting: Utilize PowerShell and other scripting tools to automate administrative tasks and improve operational efficiency within Office 365 and Intune environments.
· Backup and Recovery: Manage backup solutions for Office 365 data and participate in disaster recovery planning and execution to ensure business continuity.
· Documentation and Training: Develop and maintain detailed documentation of configurations, procedures, and best practices, and provide training to IT staff and end-users.
Key Deliverables:
· Reliable and secure operation of Office 365 and Intune services as part of the overall IT infrastructure.
· Effective integration of Office 365 and Intune with DCO and NOC operations.
· Timely incident response and resolution to ensure 24/7 availability of critical services.
· Efficient management of user accounts, licenses, and security settings within Office 365 and Intune.
· Regular monitoring, reporting, and auditing of Office 365 and Intune performance and security.
· Comprehensive documentation and training materials for internal use.
Position Overview:
We are seeking a highly motivated and skilled DevOps Engineer with 3-8 years of experience to join our dynamic team. The ideal candidate will have a strong foundation in Linux, infrastructure automation, containerization, orchestration tools, and cloud platforms. This role offers an opportunity to work on cutting-edge technologies and contribute to the development and maintenance of scalable, secure, and efficient CI/CD pipelines.
Key Responsibilities:
● Design, implement, and maintain scalable CI/CD pipelines to streamline software development and deployment.
● Manage, monitor, and optimize infrastructure using tools like Terraform for Infrastructure as Code (IaC).
● Deploy, configure, and manage containerized applications using Docker and orchestrate them with Kubernetes.
● Develop and maintain Helm charts for managing Kubernetes deployments.
● Automate repetitive operational tasks using scripting languages such as Python, Bash, or PowerShell.
● Collaborate with development teams to ensure seamless integration and delivery of applications.
● Monitor and troubleshoot system performance, ensuring high availability and reliability of services.
● Configure and maintain cloud infrastructure on AWS.
● Implement and maintain security best practices in cloud environments and CI/CD pipelines.
● Manage and optimize system logs and metrics using monitoring tools like Prometheus, Grafana, ELK Stack, or Cloud-native monitoring tools.
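The "automate repetitive operational tasks" bullet above is the bread and butter of this role. A minimal, stdlib-only sketch of the kind of check a cron job or monitoring agent would wrap (the threshold and function names are illustrative):

```python
import shutil

def usage_percent(path: str) -> float:
    # Percentage of the filesystem at `path` currently in use.
    usage = shutil.disk_usage(path)
    return 100 * usage.used / usage.total

def check_threshold(percent: float, limit: float = 90.0) -> str:
    # The kind of rule an alerting pipeline (Prometheus, CloudWatch)
    # would evaluate; here reduced to a pure function.
    return "ALERT" if percent > limit else "OK"

print(check_threshold(usage_percent("/")))
print(check_threshold(95.0))  # ALERT
```

Keeping the decision logic in a pure function (`check_threshold`) makes scripts like this trivially testable, which matters once they run unattended.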
Key Requirements:
● Experience: 3-8 years in a DevOps or similar role.
● Linux: Strong proficiency in Linux-based systems, including configuration, troubleshooting, and performance tuning, is a must.
● IaC Tools: Hands-on experience with Terraform for infrastructure provisioning and automation.
● Containerization: Proficient in using Docker to build, deploy, and manage containers.
● Kubernetes: Experience with Kubernetes for container orchestration, including knowledge of deployments, services, PVs/PVCs, and ingress controllers.
● Helm Charts: Familiarity with creating and managing Helm charts for Kubernetes applications.
● CI/CD Tools: Knowledge of tools like Jenkins, GitHub Actions, GitLab CI/CD, or CircleCI for continuous integration and deployment.
● Cloud Platforms: Hands-on experience with at least one major cloud provider (AWS, Azure, or GCP).
● Scripting: Proficiency in automation scripting using Python, Bash, or similar languages.
● Monitoring: Understanding of monitoring and logging tools such as Prometheus, Grafana, or ELK Stack.
● Version Control: Strong experience with version control tools like Git.
Preferred Qualifications:
● Knowledge of networking concepts (e.g., DNS, load balancing, firewalls).
● Familiarity with security practices such as role-based access control (RBAC) and secrets management.
● Exposure to Agile/Scrum methodologies and tools like Jira.
● Certification in any of the cloud platforms (AWS Certified DevOps Engineer, Azure DevOps Expert, or GCP Professional DevOps Engineer) is a plus.
Soft Skills:
● Strong problem-solving and troubleshooting skills.
● Ability to work collaboratively in a team-oriented environment.
● Excellent communication and documentation skills.
● Proactive approach to learning new tools and technologies.
Note: Linux experience is a must.
Job Summary:
We are seeking a highly skilled and proactive DevOps Engineer with 4+ years of experience to join our dynamic team. This role requires strong technical expertise across cloud infrastructure, CI/CD pipelines, container orchestration, and infrastructure as code (IaC). The ideal candidate should also have direct client-facing experience and a proactive approach to managing both internal and external stakeholders.
Key Responsibilities:
- Collaborate with cross-functional teams and external clients to understand infrastructure requirements and implement DevOps best practices.
- Design, build, and maintain scalable cloud infrastructure on AWS (EC2, S3, RDS, ECS, etc.).
- Develop and manage infrastructure using Terraform or CloudFormation.
- Manage and orchestrate containers using Docker and Kubernetes (EKS).
- Implement and maintain CI/CD pipelines using Jenkins or GitHub Actions.
- Write robust automation scripts using Python and Shell scripting.
- Monitor system performance and availability, and ensure high uptime and reliability.
- Execute and optimize SQL queries for MSSQL and PostgreSQL databases.
- Maintain clear documentation and provide technical support to stakeholders and clients.
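On the SQL responsibility above: the single most important habit when executing queries from automation code is parameterization. A sketch using SQLite as a stdlib stand-in (the placeholder style differs per driver on MSSQL/PostgreSQL, but the pattern is identical; the table and data are invented):

```python
import sqlite3

# SQLite stands in for MSSQL/PostgreSQL here so the sketch is runnable
# without a server; swap in the appropriate driver in real use.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deploys (service TEXT, status TEXT)")
conn.executemany("INSERT INTO deploys VALUES (?, ?)",
                 [("api", "ok"), ("worker", "failed"), ("api", "ok")])

# Parameterized query: the driver escapes the value, never the caller.
failed = conn.execute(
    "SELECT service, COUNT(*) FROM deploys WHERE status = ? GROUP BY service",
    ("failed",),
).fetchall()
print(failed)  # [('worker', 1)]
```

Beyond safety, parameterized statements let the server cache query plans, which is where most of the "optimize SQL queries" work starts.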
Required Skills:
- Minimum 4+ years of experience in a DevOps or related role.
- Proven experience in client-facing engagements and communication.
- Strong knowledge of AWS services – EC2, S3, RDS, ECS, etc.
- Proficiency in Infrastructure as Code using Terraform or CloudFormation.
- Hands-on experience with Docker and Kubernetes (EKS).
- Strong experience in setting up and maintaining CI/CD pipelines with Jenkins or GitHub Actions.
- Solid understanding of SQL and working experience with MSSQL and PostgreSQL.
- Proficient in Python and Shell scripting.
Preferred Qualifications:
- AWS Certifications (e.g., AWS Certified DevOps Engineer) are a plus.
- Experience working in Agile/Scrum environments.
- Strong problem-solving and analytical skills.
Job Title: AWS DevOps Engineer – Manager, Business Solutions
Location: Gurgaon, India
Experience Required: 8-12 years
Industry: IT
We are looking for a seasoned AWS DevOps Engineer with robust experience in AWS middleware services and MongoDB Cloud Infrastructure Management. The role involves designing, deploying, and maintaining secure, scalable, and high-availability infrastructure, along with developing efficient CI/CD pipelines and automating operational processes.
Key Deliverables (Essential functions & Responsibilities of the Job):
· Design, deploy, and manage AWS infrastructure, with a focus on middleware services such as API Gateway, Lambda, SQS, SNS, ECS, and EKS.
· Administer and optimize MongoDB Atlas or equivalent cloud-based MongoDB solutions for high availability, security, and performance.
· Develop, manage, and enhance CI/CD pipelines using tools like AWS CodePipeline, Jenkins, GitHub Actions, GitLab CI/CD, or Bitbucket Pipelines.
· Automate infrastructure provisioning using Terraform, AWS CloudFormation, or AWS CDK.
· Implement monitoring and logging solutions using CloudWatch, Prometheus, Grafana, or the ELK Stack.
· Enforce cloud security best practices — IAM, VPC setups, encryption, certificate management, and compliance controls.
· Work closely with development teams to improve application reliability, scalability, and performance.
· Manage containerized environments using Docker, Kubernetes (EKS), or AWS ECS.
· Perform MongoDB administration tasks such as backups, performance tuning, indexing, and sharding.
· Participate in on-call rotations to ensure 24/7 infrastructure availability and quick incident resolution.
Knowledge Skills and Abilities:
· 7+ years of hands-on AWS DevOps experience, especially with middleware services.
· Strong expertise in MongoDB Atlas or other cloud MongoDB services.
· Proficiency in Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or AWS CDK.
· Solid experience with CI/CD tools: Jenkins, CodePipeline, GitHub Actions, GitLab, Bitbucket, etc.
· Excellent scripting skills in Python, Bash, or PowerShell.
· Experience in containerization and orchestration: Docker, EKS, ECS.
· Familiarity with monitoring tools like CloudWatch, ELK, Prometheus, Grafana.
· Strong understanding of AWS networking and security: IAM, VPC, KMS, Security Groups.
· Ability to solve complex problems and thrive in a fast-paced environment.
Preferred Qualifications
· AWS Certified DevOps Engineer – Professional or AWS Solutions Architect – Associate/Professional.
· MongoDB Certified DBA or Developer.
· Experience with serverless services like AWS Lambda, Step Functions.
· Exposure to multi-cloud or hybrid cloud environments.
Mail updated resume with current salary to:
Email: jobs[at]glansolutions[dot]com
Satish; 88O 27 49 743
Google search: Glan management consultancy

The Opportunity
We’re looking for a Senior Data Engineer to join our growing Data Platform team. This role is a hybrid of data engineering and business intelligence, ideal for someone who enjoys solving complex data challenges while also building intuitive and actionable reporting solutions.
You’ll play a key role in designing and scaling the infrastructure and pipelines that power analytics, dashboards, machine learning, and decision-making across Sonatype. You’ll also be responsible for delivering clear, compelling, and insightful business intelligence through tools like Looker Studio and advanced SQL queries.
What You’ll Do
- Design, build, and maintain scalable data pipelines and ETL/ELT processes.
- Architect and optimize data models and storage solutions for analytics and operational use.
- Create and manage business intelligence reports and dashboards using tools like Looker Studio, Power BI, or similar.
- Collaborate with data scientists, analysts, and stakeholders to ensure datasets are reliable, meaningful, and actionable.
- Own and evolve parts of our data platform (e.g., Airflow, dbt, Spark, Redshift, or Snowflake).
- Write complex, high-performance SQL queries to support reporting and analytics needs.
- Implement observability, alerting, and data quality monitoring for critical pipelines.
- Drive best practices in data engineering and business intelligence, including documentation, testing, and CI/CD.
- Contribute to the evolution of our next-generation data lakehouse and BI architecture.
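The reporting work described above typically means rollup queries over a star schema: a fact table joined to dimension tables and aggregated. A minimal sketch using SQLite as a stdlib stand-in for Snowflake/Redshift (the schema and figures are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One dimension table, one fact table: the smallest star schema.
    CREATE TABLE dim_region (region_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (region_id INTEGER, amount REAL);
    INSERT INTO dim_region VALUES (1, 'EMEA'), (2, 'APAC');
    INSERT INTO fact_sales VALUES (1, 100.0), (1, 50.0), (2, 75.0);
""")

# Typical BI rollup: join the fact to its dimension, aggregate, rank.
rows = conn.execute("""
    SELECT d.name, SUM(f.amount) AS revenue
    FROM fact_sales f JOIN dim_region d USING (region_id)
    GROUP BY d.name
    ORDER BY revenue DESC
""").fetchall()
print(rows)  # [('EMEA', 150.0), ('APAC', 75.0)]
```

Dashboards in Looker Studio or Power BI are, underneath, issuing queries of exactly this shape; writing them well is what "complex, high-performance SQL" means here.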
What We’re Looking For
Minimum Qualifications
- 5+ years of experience as a Data Engineer or in a hybrid data/reporting role.
- Strong programming skills in Python, Java, or Scala.
- Proficiency with data tools such as Databricks, data modeling techniques (e.g., star schema, dimensional modeling), and data warehousing solutions like Snowflake or Redshift.
- Hands-on experience with modern data platforms and orchestration tools (e.g., Spark, Kafka, Airflow).
- Proficient in SQL with experience in writing and optimizing complex queries for BI and analytics.
- Experience with BI tools such as Looker Studio, Power BI, or Tableau.
- Experience in building and maintaining robust ETL/ELT pipelines in production.
- Understanding of data quality, observability, and governance best practices.
Bonus Points
- Experience with dbt, Terraform, or Kubernetes.
- Familiarity with real-time data processing or streaming architectures.
- Understanding of data privacy, compliance, and security best practices in analytics and reporting.
Why You’ll Love Working Here
- Data with purpose: Work on problems that directly impact how the world builds secure software.
- Full-spectrum impact: Use both engineering and analytical skills to shape product, strategy, and operations.
- Modern tooling: Leverage the best of open-source and cloud-native technologies.
- Collaborative culture: Join a passionate team that values learning, autonomy, and real-world impact.

About the Role
We’re hiring a Data Engineer to join our Data Platform team. You’ll help build and scale the systems that power analytics, reporting, and data-driven features across the company. This role works with engineers, analysts, and product teams to make sure our data is accurate, available, and usable.
What You’ll Do
- Build and maintain reliable data pipelines and ETL/ELT workflows.
- Develop and optimize data models for analytics and internal tools.
- Work with team members to deliver clean, trusted datasets.
- Support core data platform tools like Airflow, dbt, Spark, Redshift, or Snowflake.
- Monitor data pipelines for quality, performance, and reliability.
- Write clear documentation and contribute to test coverage and CI/CD processes.
- Help shape our data lakehouse architecture and platform roadmap.
What You Need
- 2–4 years of experience in data engineering or a backend data-related role.
- Strong skills in Python or another backend programming language.
- Experience working with SQL and distributed data systems (e.g., Spark, Kafka).
- Familiarity with NoSQL stores like HBase or similar.
- Comfortable writing efficient queries and building data workflows.
- Understanding of data modeling for analytics and reporting.
- Exposure to tools like Airflow or other workflow schedulers.
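The workflow-scheduler exposure mentioned above boils down to one core idea: tasks form a DAG, and the scheduler runs them in dependency order. Python's stdlib `graphlib` can sketch this without Airflow installed (the task names are invented):

```python
from graphlib import TopologicalSorter

# A tiny Airflow-style dependency graph: each task maps to the set of
# upstream tasks that must finish before it may run.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"transform"},
    "report": {"load", "quality_check"},
}

# static_order() yields one valid execution order (and raises
# CycleError if the graph is not actually acyclic).
order = list(TopologicalSorter(dag).static_order())
print(order)
```

Real schedulers add retries, backfills, and parallel execution of ready tasks on top, but the dependency-resolution core is exactly this topological sort.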
Bonus Points
- Experience with DBT, Databricks, or real-time data pipelines.
- Familiarity with cloud infrastructure tools like Terraform or Kubernetes.
- Interest in data governance, ML pipelines, or compliance standards.
Why Join Us?
- Work on data that supports meaningful software security outcomes.
- Use modern tools in a cloud-first, open-source-friendly environment.
- Join a team that values clarity, learning, and autonomy.
If you're excited about building impactful software and helping others do the same, this is an opportunity to grow as a technical leader and make a meaningful impact.

About the Role
The Engineering Manager - Platform role blends hands-on engineering with leadership and strategic influence. You will lead high-performing engineering teams to build the infrastructure, pipelines, and systems that fuel analytics, business intelligence, and machine learning across our global products. We’re looking for a leader who brings deep technical experience in modern data platforms, is fluent in programming, and understands the nuances of open-source consumption and software supply chain security. This hybrid role is based out of our Hyderabad office.
What You’ll Do
- Lead, mentor, and grow a team of engineers responsible for building scalable, secure, and maintainable data solutions.
- Write and review production code across frontend (React/TypeScript) and backend (Java/Kotlin) systems.
- Review and guide production-level code in Python, Java, or similar languages.
- Ensure strong foundations in governance, observability, and data quality.
- Collaborate with cross-functional teams including Product, Security, Engineering, and Data Science to translate business needs into data strategies and deliverables.
- Apply your knowledge of open-source component usage, dependency management, and software composition analysis to ensure our data platforms support secure development practices.
- Embed application security principles into data platform design, supporting Sonatype’s mission to secure the software supply chain.
- Foster an engineering culture that prioritizes continuous improvement, technical excellence, and team ownership.
Who You Are
- A technical leader with a strong background in data engineering, platform design, and secure software development.
- Comfortable operating across domains—data infrastructure, programming, architecture, security, and team leadership.
- Passionate about delivering high-impact results through technical contributions, mentoring, and strategic thinking.
- Familiar with modern data engineering practices, open-source ecosystems, and the challenges of managing data securely at scale.
- A collaborative communicator who thrives in hybrid and cross-functional team environments.
What You Need
- 10+ years of experience in engineering, backend systems, and infrastructure development.
- Experience in a technical leadership or engineering management role with hands-on contribution.
- Expertise in technologies: ReactJS, Document DB, API Security, Jenkins, Elasticsearch, etc.
- Strong programming skills in Python, Java, or Scala with experience building robust, production-grade systems.
- Understanding of software dependency management and open-source consumption patterns.
- Familiarity with application security principles and a strong interest in secure software supply chains.
- Experience supporting real-time data systems or streaming architectures.
- Exposure to machine learning pipelines or data productization.
- Experience with tools like Terraform, Kubernetes, and CI/CD for data engineering workflows.
- Knowledge of data governance frameworks and regulatory compliance (GDPR, SOC2, etc.).
Why Join Us?
- Help secure the software supply chain for millions of developers worldwide.
- Build meaningful software in a collaborative, fast-moving environment with strong technical peers.
- Stay hands-on while leading—technical leadership is part of the job, not separate from it.
- Join a global engineering organization with deep local roots and a strong team culture.
- Competitive salary, great benefits, and opportunities for growth and innovation.
If you're excited about building impactful software and helping others do the same, this is an opportunity to grow as a technical leader and make a meaningful impact.

About the Role
As an Engineering Manager, you will divide your time between hands-on technical work and team leadership. You’ll write and review production code, drive system design and architectural discussions, and mentor engineers through complex technical challenges. At the same time, you will guide your team’s growth, partner with cross-functional stakeholders and help shape our product direction. This role requires someone who can think strategically, execute effectively, and stay close to the code.
What You’ll Do
- Lead a high-impact engineering team building secure, performant, and user-friendly features.
- Write and review production code across frontend (React/TypeScript) and backend (Java/Kotlin) systems.
- Guide technical design and architecture for complex systems and user-facing features.
- Partner with Product Managers and Designers to define and deliver on product roadmaps.
- Help shape and uphold best practices in code quality, testing, security, and system performance.
- Mentor engineers through design discussions, code reviews, and technical guidance.
- Recruit, retain, and grow top engineering talent while fostering a culture of collaboration and ownership.
Who You Are
- A technical leader who enjoys solving hard problems and contributing directly to engineering outcomes.
- Experienced in building scalable, modern web applications using Java/Kotlin and React/TypeScript.
- Committed to mentoring engineers and helping them grow through hands-on leadership.
- A strong partner to Product and UX, capable of translating business goals into technical strategy.
- Collaborative and grounded, with a preference for in-person interaction and real-time discussion.
What You Need
- 10+ years of experience in full-stack software development, including user-facing product work.
- Experience as an Engineering Manager or Technical Lead, with continued technical contribution.
- Proficiency in Java/Kotlin and JavaScript/TypeScript, including architecture and implementation.
- Experience with API design, frontend/backend integration, and CI/CD pipelines.
- Proven ability to influence team direction and mentor others through technical excellence.

Job Type : Contract
Location : Bangalore
Experience : 5+yrs
The role focuses on cloud security engineering with a strong emphasis on GCP, while also covering AWS and Azure.
Required Skills:
- 5+ years of experience in software and/or cloud platform engineering, particularly focused on GCP environment.
- Knowledge of the Shared Responsibility Model; keen understanding of the security risks inherent in hosting cloud-based applications and data.
- Experience developing across the security assurance lifecycle (including prevent, detect, respond, and remediate controls).
- Experience in configuring public cloud native security tooling and capabilities, with a focus on Google Cloud organizational policies/constraints, VPC Service Controls, IAM policies, and GCP APIs.
- Experience with Cloud Security Posture Management (CSPM) 3rd Party tools such as Wiz, Prisma, Check Point CloudGuard, etc.
- Experience in Policy-as-code (Rego) and OPA platform.
- Experience solutioning and configuring event-driven serverless-based security controls in Azure, including but not limited to technologies such as Azure Function, Automation Runbook, AWS Lambda and Google Cloud Functions.
- Deep understanding of DevOps processes and workflows.
- Working knowledge of the Secure SDLC process
- Experience with Infrastructure as Code (IaC) tooling, preferably Terraform.
- Familiarity with Logging and data pipeline concepts and architectures in cloud.
- Strong scripting skills in PowerShell, Python, Bash, or Go.
- Knowledge of Agile best practices and methodologies
- Experience creating technical architecture documentation.
- Excellent written, verbal, and interpersonal communication skills.
- Practical experience designing and configuring CI/CD pipelines, including hands-on work with GitHub Actions and Jenkins.
- Experience in ITSM.
- Ability to articulate complex technical concepts to non-technical stakeholders.
- Experience with risk control frameworks and engagements with risk and regulatory functions
- Experience in the financial industry would be a plus.
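Policy-as-code requirements like the Rego/OPA item above come down to evaluating declarative rules against resource configurations. As a rough illustration only (the resource shape and the rule below are invented for the sketch, not a real GCP payload or actual Rego), the same idea can be expressed in plain Python:

```python
# Illustrative policy-as-code check in plain Python -- a stand-in for the kind
# of rule normally written in Rego and evaluated by OPA. The bucket structure
# and "no public access" rule are hypothetical examples.

def violates_public_access(bucket: dict) -> bool:
    """Flag a bucket whose IAM bindings include allUsers/allAuthenticatedUsers."""
    public_members = {"allUsers", "allAuthenticatedUsers"}
    for binding in bucket.get("iam_bindings", []):
        if public_members & set(binding.get("members", [])):
            return True
    return False

def evaluate(buckets: list[dict]) -> list[str]:
    """Return the names of buckets that violate the no-public-access policy."""
    return [b["name"] for b in buckets if violates_public_access(b)]

buckets = [
    {"name": "internal-logs", "iam_bindings": [{"members": ["group:ops@example.com"]}]},
    {"name": "static-site", "iam_bindings": [{"members": ["allUsers"]}]},
]
print(evaluate(buckets))  # ['static-site']
```

In a real deployment the same rule would live in a Rego module and be enforced by OPA or by GCP organization policy constraints; the Python version just shows the shape of the evaluation.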

Position: General Cloud Automation Engineer/General Cloud Engineer
Location: Balewadi High Street, Pune
Key Responsibilities:
- Strategic Automation Leadership
- Drive automation to improve deployment speed and reduce manual work.
- Promote scalable, long-term automation solutions.
- Infrastructure as Code (IaC) & Configuration Management
- Develop IaC using Terraform, CloudFormation, Ansible.
- Maintain infrastructure via Ansible, Puppet, Chef.
- Scripting in Python, Bash, PowerShell, JavaScript, GoLang.
- CI/CD & Cloud Optimization
- Enhance CI/CD using Jenkins, GitHub Actions, GitLab CI/CD.
- Automate across AWS, Azure, GCP, focusing on performance, networking, and cost-efficiency.
- Integrate monitoring tools such as Prometheus, Grafana, Datadog, ELK.
- Security Automation
- Enforce security with tools like Vault, Snyk, Prisma Cloud.
- Implement automated compliance and access controls.
- Innovation & Continuous Improvement
- Evaluate and adopt emerging automation tools.
- Foster a forward-thinking automation culture.
Required Skills & Tools:
Strong background in automation, DevOps, and cloud engineering.
Expert in:
IaC: Terraform, CloudFormation, Azure ARM, Bicep
Config Mgmt: Ansible, Puppet, Chef
Cloud Platforms: AWS, Azure, GCP
CI/CD: Jenkins, GitHub Actions, GitLab CI/CD
Scripting: Python, Bash, PowerShell, JavaScript, GoLang
Monitoring & Security: Prometheus, Grafana, ELK, Vault, Prisma Cloud
Network Automation: Private Endpoints, Transit Gateways, Firewalls, etc.
Certifications Preferred:
AWS DevOps Engineer
Terraform Associate
Red Hat Certified Engineer
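Much of the IaC automation described above involves post-processing Terraform's machine-readable output. As a minimal sketch, assuming the `resource_changes` structure that `terraform show -json` emits (the embedded plan is trimmed and hypothetical; a real plan carries many more fields):

```python
import json
from collections import Counter

# Hypothetical, trimmed stand-in for `terraform show -json tfplan` output.
SAMPLE_PLAN = json.dumps({
    "resource_changes": [
        {"address": "aws_s3_bucket.logs", "change": {"actions": ["create"]}},
        {"address": "aws_instance.web", "change": {"actions": ["delete", "create"]}},
        {"address": "aws_iam_role.ci", "change": {"actions": ["no-op"]}},
    ]
})

def summarize(plan_json: str) -> Counter:
    """Count planned resource changes by action, treating delete+create as 'replace'."""
    plan = json.loads(plan_json)
    counts = Counter()
    for rc in plan.get("resource_changes", []):
        actions = rc["change"]["actions"]
        key = "replace" if actions == ["delete", "create"] else actions[0]
        counts[key] += 1
    return counts

print(dict(summarize(SAMPLE_PLAN)))  # {'create': 1, 'replace': 1, 'no-op': 1}
```

A summary like this is a common gate in CI: fail the pipeline when a plan would replace or destroy resources unexpectedly.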
Role Purpose: Maintain and enhance the IaC-driven cloud infrastructure, pipelines, and environments after the current team exits.
Key Skills:
- Azure DevOps Services (Repos, Pipelines, Artifacts)
- Terraform (advanced) – infrastructure provisioning
- CI/CD pipeline design and automation
- ARM templates, Bicep, or equivalent (if used alongside Terraform)
- Monitoring & Logging – Azure Monitor, Log Analytics
- Security & Compliance – Azure Policies, RBAC, NSGs, etc.
- Networking Basics – VNets, subnets, peering, firewalls
Experience Level:
- 5+ years total experience in cloud infrastructure
- 3+ years hands-on in Azure and Terraform
- Should have delivered or supported production-grade, IaC-managed platforms
Job Title : Lead System Administrator / Team Leader – Server Administration (NOC)
Experience : 12 to 16 Years
Location : Bengaluru (Whitefield / Domlur) or Coimbatore
Work Mode : Initially Work From Office (5 days/week during probation), Hybrid thereafter (3 days WFO)
Salary : Up to ₹28 LPA (including 8% variable)
Notice Period : Immediate / Serving / up to 30 days
Shift Time : Flexible (11:00 AM – 8:00 PM)
Role Overview :
We are seeking an experienced Lead System Administrator / Team Leader to manage our server administration team and ensure the stability, performance, and security of our infrastructure. This is a hands-on leadership role that demands technical depth, strategic thinking, and excellent team management capabilities.
Mandatory Skills :
- Windows Server Administration
- Citrix, VMware, and Hypervisor Platforms
- 1–2 Years of Team Lead / Leadership Experience
- Scripting (PowerShell, Bash, etc.)
- Infrastructure as Code – Terraform / Ansible
- Monitoring, Backup, and Compliance Tools Exposure
- Experience in 24/7 Production Environments
- Strong Communication & Documentation Skills
Key Responsibilities :
- Lead and mentor a team of system/server administrators.
- Manage installation, configuration, and support of Windows-based physical & virtual servers.
- Ensure optimal uptime, performance, and availability of server infrastructure.
- Oversee Active Directory, DNS, DHCP, file servers, and backup systems.
- Implement disaster recovery strategies & capacity planning.
- Collaborate with security, application, and network teams.
- Create and maintain SOPs, asset inventories, and architectural documentation.
- Drive compliance with IT policies and audit standards.
- Provide on-call support and lead incident management for server-related issues.
Qualifications :
- Bachelor’s degree in Computer Science, IT, or related field.
- 10+ Years in server/system administration, including 1 to 2 years in a leadership capacity.
- Strong knowledge of Windows Server environments.
- Hands-on experience with Citrix, VMware, Nutanix, Hyper-V.
- Familiarity with Azure cloud platforms.
- Proficient in automation and scripting tools (PowerShell, Bash).
- Knowledge of Infrastructure as Code using Terraform and Ansible.
- Certifications like MCSA/MCSE, RHCE are a plus.
- Excellent communication, documentation, and team management skills.
Interview Process :
- L1 – Technical Interview (with Partner Team)
- L2 – Technical Interview (Client)
- L3 – Techno-Managerial Round
- L4 – HR Discussion
Job Title : Senior System Administrator
Experience : 7 to 12 Years
Location : Bangalore (Whitefield/Domlur) or Coimbatore
Work Mode :
- First 3 Months : Work From Office (5 Days)
- Post-Probation : Hybrid (3 Days WFO)
- Shift : Rotational (Day & Night)
- Notice Period : Immediate to 30 Days
- Salary : Up to ₹24 LPA (including 8% variable), slightly negotiable
Role Overview :
Seeking a Senior System Administrator with strong experience in server administration, virtualization, automation, and hybrid infrastructure. The role involves managing Windows environments, scripting, cloud/on-prem operations, and ensuring 24x7 system availability.
Mandatory Skills :
Windows Server, Virtualization (Citrix/VMware/Nutanix/Hyper-V), Office 365, Intune, PowerShell, Terraform/Ansible, CI/CD, Hybrid Cloud (Azure), Monitoring, Backup, NOC, DCO.
Key Responsibilities :
- Manage physical/virtual Windows servers and core services (AD, DNS, DHCP).
- Automate infrastructure using Terraform/Ansible.
- Administer Office 365, Intune, and ensure compliance.
- Support hybrid on-prem + Azure environments.
- Handle monitoring, backups, disaster recovery, and incident response.
- Collaborate on DevOps pipelines and write automation scripts (PowerShell).
Nice to Have :
MCSA/MCSE/RHCE, Azure admin experience, team leadership background
Interview Rounds :
L1 – Technical (Platform)
L2 – Technical
L3 – Techno-Managerial
L4 – HR
Job Summary:
We are seeking a highly skilled and proactive DevOps Engineer with 4+ years of experience to join our dynamic team. This role requires strong technical expertise across cloud infrastructure, CI/CD pipelines, container orchestration, and infrastructure as code (IaC). The ideal candidate should also have direct client-facing experience and a proactive approach to managing both internal and external stakeholders.
Key Responsibilities:
- Collaborate with cross-functional teams and external clients to understand infrastructure requirements and implement DevOps best practices.
- Design, build, and maintain scalable cloud infrastructure on AWS (EC2, S3, RDS, ECS, etc.).
- Develop and manage infrastructure using Terraform or CloudFormation.
- Manage and orchestrate containers using Docker and Kubernetes (EKS).
- Implement and maintain CI/CD pipelines using Jenkins or GitHub Actions.
- Write robust automation scripts using Python and Shell scripting.
- Monitor system performance and availability, and ensure high uptime and reliability.
- Execute and optimize SQL queries for MSSQL and PostgreSQL databases.
- Maintain clear documentation and provide technical support to stakeholders and clients.
Required Skills:
- Minimum 4+ years of experience in a DevOps or related role.
- Proven experience in client-facing engagements and communication.
- Strong knowledge of AWS services – EC2, S3, RDS, ECS, etc.
- Proficiency in Infrastructure as Code using Terraform or CloudFormation.
- Hands-on experience with Docker and Kubernetes (EKS).
- Strong experience in setting up and maintaining CI/CD pipelines with Jenkins or GitHub Actions.
- Solid understanding of SQL and working experience with MSSQL and PostgreSQL.
- Proficient in Python and Shell scripting.
Preferred Qualifications:
- AWS Certifications (e.g., AWS Certified DevOps Engineer) are a plus.
- Experience working in Agile/Scrum environments.
- Strong problem-solving and analytical skills.
Work Mode & Timing:
- Hybrid – Pune-based candidates preferred.
- Working hours: 12:30 PM to 9:30 PM IST to align with client time zones.
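Automation scripts of the kind this role calls for typically wrap flaky steps (deployments, remote API calls) in retry logic. A hedged sketch of such a helper, with invented names and delays:

```python
import time
from functools import wraps

def retry(times: int = 3, base_delay: float = 0.01):
    """Retry a function with exponential backoff; re-raise after the last attempt."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, times + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == times:
                        raise
                    # Back off: base_delay, 2*base_delay, 4*base_delay, ...
                    time.sleep(base_delay * 2 ** (attempt - 1))
        return wrapper
    return decorator

calls = {"n": 0}

@retry(times=3)
def flaky_deploy():
    """Hypothetical deploy step that fails twice before succeeding."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "deployed"
```

Calling `flaky_deploy()` once succeeds on its third internal attempt; in practice the same wrapper would sit around `boto3` calls or shelled-out deployment commands.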

Job Title: Senior System Administrator
Location: Bangalore
Experience Required: 7+ Years
Work Mode:
- 5 Days Working
- Rotational Shifts
- Hybrid Work after probation
Job Description:
We are seeking a Senior System Administrator with 7+ years of hands-on experience in managing Windows Server environments, virtualization technologies, automation tools, and hybrid infrastructure (on-prem & Azure). The ideal candidate should possess strong problem-solving skills, be proficient in scripting, and have experience in Office 365 and Microsoft Intune administration.
Key Responsibilities:
- Manage and maintain Windows Server environments
- Handle virtualization platforms such as Citrix, Nutanix, VMware, Hyper-V
- Implement and maintain automation using tools like Ansible, Terraform, PowerShell
- Work with Infrastructure as Code (IaC) platforms and DevOps frameworks
- Support and manage Office 365 and Microsoft Intune
- Monitor and support Data Center Operations (DCO) and NOC
- Ensure security and compliance across systems
- Provide scripting and troubleshooting support for infrastructure automation
- Collaborate with teams for CI/CD pipeline integration
- Handle monitoring, backup, and disaster recovery processes
- Work effectively in a hybrid environment (on-prem and Azure)
Skills Required:
- Office 365 Administration
- Microsoft Intune Administration
- Security & Compliance
- Automation & Infrastructure as Code (IaC)
- Tools: PowerShell, Terraform, Ansible
- CI/CD and DevOps framework exposure
- Monitoring & Backup
- Data Center Operations (DCO) & NOC Support
- Hybrid environment experience (on-prem and Azure)
- Scripting & Troubleshooting
- PowerShell scripting for automation
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation for cloud-based enterprises. With a global presence and a focus on software & platforms, life sciences and healthcare, retail, CPG, financial services, and supply chain, CLOUDSUFI meets customers wherever they are in their data monetization journey.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/
What are we looking for
We are seeking a highly skilled and experienced Senior DevOps Engineer to join our team. The ideal candidate will have extensive expertise in modern DevOps tools and practices, particularly in managing CI/CD pipelines, infrastructure as code, and cloud-native environments. This role involves designing, implementing, and maintaining robust, scalable, and efficient infrastructure and deployment pipelines to support our development and operations teams.
Required Skills and Experience:
- 7+ years of experience in DevOps, infrastructure automation, or related fields.
- Advanced expertise in Terraform for infrastructure as code.
- Solid experience with Helm for managing Kubernetes applications.
- Proficient with GitHub for version control, repository management, and workflows.
- Extensive experience with Kubernetes for container orchestration and management.
- In-depth understanding of Google Cloud Platform (GCP) services and architecture.
- Strong scripting and automation skills (e.g., Python, Bash, or equivalent).
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration abilities in agile development environments.
Preferred Qualifications:
- Experience with other CI/CD tools (e.g., Jenkins, GitLab CI/CD).
- Knowledge of additional cloud platforms (e.g., AWS, Azure).
- Certification in Kubernetes (CKA/CKAD) or Google Cloud (GCP Professional DevOps Engineer).
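Since the role leans heavily on Helm, it is worth noting how Helm resolves chart values: user-supplied overrides are deep-merged onto chart defaults, with maps merging recursively and scalars replacing. A plain-Python sketch of that merge rule (the value names are invented for the example):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Merge `override` onto `base` the way Helm merges values: nested maps
    merge recursively, everything else is replaced by the override."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

defaults = {"replicaCount": 1, "image": {"repository": "nginx", "tag": "1.25"}}
overrides = {"replicaCount": 3, "image": {"tag": "1.27"}}
print(deep_merge(defaults, overrides))
# {'replicaCount': 3, 'image': {'repository': 'nginx', 'tag': '1.27'}}
```

Note that `image.repository` survives from the defaults even though the override only sets `image.tag`; that recursive behavior is what makes layered `values.yaml` files workable.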
Behavioral Competencies
- Must have worked with US/Europe-based clients in onsite/offshore delivery models.
- Very good verbal and written communication, technical articulation, listening, and presentation skills.
- Proven analytical and problem-solving skills.
- A collaborative mindset for cross-functional teamwork.
- Passion for solving complex search problems.
- Demonstrated task prioritization, time management, and internal/external stakeholder management skills.
- A quick learner, self-starter, go-getter, and team player.
- Experience working under stringent deadlines in a matrix organization structure.
We are looking for a highly skilled DevOps/Cloud Engineer with over 6 years of experience in infrastructure automation, cloud platforms, networking, and security. If you are passionate about designing scalable systems and love solving complex cloud and DevOps challenges—this opportunity is for you.
Key Responsibilities
- Design, deploy, and manage cloud-native infrastructure using Kubernetes (K8s), Helm, Terraform, and Ansible
- Automate provisioning and orchestration workflows for cloud and hybrid environments
- Manage and optimize deployments on AWS, Azure, and GCP for high availability and cost efficiency
- Troubleshoot and implement advanced network architectures including VPNs, firewalls, load balancers, and routing protocols
- Implement and enforce security best practices: IAM, encryption, compliance, and vulnerability management
- Collaborate with development and operations teams to improve CI/CD workflows and system observability
Required Skills & Qualifications
- 6+ years of experience in DevOps, Infrastructure as Code (IaC), and cloud-native systems
- Expertise in Helm, Terraform, and Kubernetes
- Strong hands-on experience with AWS and Azure
- Solid understanding of networking, firewall configurations, and security protocols
- Experience with CI/CD tools like Jenkins, GitHub Actions, or similar
- Strong problem-solving skills and a performance-first mindset
Why Join Us?
- Work on cutting-edge cloud infrastructure across diverse industries
- Be part of a collaborative, forward-thinking team
- Flexible hybrid work model – work from anywhere while staying connected
- Opportunity to take ownership and lead critical DevOps initiatives
- 8+ years of experience in DevOps and CI/CD.
- Managing large-scale AWS deployments using Infrastructure as Code (IaC) and Kubernetes developer tools.
- Managing build/test/deployment of very large-scale systems, bridging between developers and live stacks.
- Actively troubleshooting issues that arise during development and production.
- Owning, learning, and deploying software in support of customer-facing applications.
- Helping establish DevOps best practices.
- Actively working to reduce system costs.
- Working with open-source technologies, helping to ensure their robustness and security.
- Actively working with CI/CD, Git, and other components of the build and deployment system.
- Leading skills with the AWS cloud stack.
- Proven implementation experience with Infrastructure as Code (Terraform, Terragrunt, Flux, Helm charts) at scale.
- Proven experience with Kubernetes at scale.
- Proven experience with cloud management tools beyond the AWS console (k9s, Lens).
- A strong communicator who people want to work with; must be thought of as the ultimate collaborator.
- A solid team player.
- Strong experience with Linux-based infrastructures and AWS.
- Strong experience with databases such as MySQL, Redshift, Elasticsearch, MongoDB, and others.
- Strong knowledge of JavaScript and Git.
- An Agile practitioner.
What You’ll Do:
We’re looking for a skilled DevOps Engineer to help us build and maintain reliable, secure, and scalable infrastructure. You will work closely with our development, product, and security teams to streamline deployments, improve performance, and ensure cloud infrastructure resilience.
Responsibilities:
● Deploy, manage, and monitor infrastructure on Google Cloud Platform (GCP)
● Build CI/CD pipelines using Jenkins and integrate them with Git workflows
● Design and manage Kubernetes clusters and helm-based deployments
● Manage infrastructure as code using Terraform
● Set up logging, monitoring, and alerting (Stackdriver, Prometheus, Grafana)
● Ensure security best practices across cloud resources, networks, and secrets
● Automate repetitive operations and improve system reliability
● Collaborate with developers to troubleshoot and resolve issues in staging/production environments
What We’re Looking For:
Required Skills:
● 1–3 years of hands-on experience in a DevOps or SRE role
● Strong knowledge of GCP services (IAM, GKE, Cloud Run, VPC, Cloud Build, etc.)
● Proficiency in Kubernetes (deployment, scaling, troubleshooting)
● Experience with Terraform for infrastructure provisioning
● CI/CD pipeline setup using Jenkins, GitHub Actions, or similar tools
● Understanding of DevSecOps principles and cloud security practices
● Good command over Linux, shell scripting, and basic networking concepts
Nice to have:
● Experience with Docker, Helm, ArgoCD
● Exposure to other cloud platforms (AWS, Azure)
● Familiarity with incident response and disaster recovery planning
● Knowledge of logging and monitoring tools like ELK, Prometheus, Grafana
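Alerting stacks like the Prometheus/Grafana setup mentioned above usually fire only on sustained threshold breaches, not single spikes. A toy evaluator illustrating that rule (the samples and thresholds below are made up for the sketch):

```python
def should_alert(samples: list[float], threshold: float, window: int) -> bool:
    """Fire only if the last `window` samples are all above `threshold`,
    mimicking a sustained-breach alert rule."""
    if len(samples) < window:
        return False
    return all(s > threshold for s in samples[-window:])

# Hypothetical CPU-utilisation samples (percent), newest last.
cpu = [42.0, 55.0, 91.5, 93.2, 97.8]
assert should_alert(cpu, threshold=90.0, window=3)      # last 3 samples all breach
assert not should_alert(cpu, threshold=90.0, window=5)  # early samples are low
```

Prometheus expresses the same idea declaratively with an alert rule's `for:` duration; the point of the sketch is only the "sustained, not instantaneous" semantics.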

At TechBiz Global, we provide recruitment services to top clients from our portfolio. We are currently seeking four DevOps Support Engineers to join one of our clients' teams in India, with a start date no later than 20th July. If you're looking for an exciting opportunity to grow in an innovative environment, this could be the perfect fit for you.
Job requirements
Key Responsibilities:
- Monitor and troubleshoot AWS and/or Azure environments to ensure optimal performance and availability.
- Respond promptly to incidents and alerts, investigating and resolving issues efficiently.
- Perform basic scripting and automation tasks to streamline cloud operations (e.g., Bash, Python).
- Communicate clearly and fluently in English with customers and internal teams.
- Collaborate closely with the Team Lead, following Standard Operating Procedures (SOPs) and escalation workflows.
- Work in a rotating shift schedule, including weekends and nights, ensuring continuous support coverage.
Shift Details:
- Engineers rotate through morning, evening, and night shifts, including weekends, typically working 4 to 5 shifts per week to cover 24/7 support evenly.
- Rotation ensures no single engineer is always working nights or weekends; the load is shared fairly among the team.
Qualifications:
- 2–5 years of experience in DevOps or cloud support roles.
- Strong familiarity with AWS and/or Azure cloud environments.
- Experience with CI/CD tools such as GitHub Actions or Jenkins.
- Proficiency with monitoring tools like Datadog, CloudWatch, or similar.
- Basic scripting skills in Bash, Python, or comparable languages.
- Excellent communication skills in English.
- Comfortable and willing to work in a shift-based support role, including night and weekend shifts.
- Prior experience in a shift-based support environment is preferred.
What We Offer:
- Remote work opportunity — work from anywhere in India with a stable internet connection.
- Comprehensive training program including:
- Shadowing existing processes to gain hands-on experience.
- Learning internal tools, Standard Operating Procedures (SOPs), ticketing systems, and escalation paths to ensure smooth onboarding and ongoing success.
DevOps Engineer
AiSensy
Gurugram, Haryana, India (On-site)
About AiSensy
AiSensy is a WhatsApp-based Marketing & Engagement platform helping businesses like Adani, Delhi Transport Corporation, Yakult, Godrej, Aditya Birla Hindalco, Wipro, Asian Paints, India Today Group, Skullcandy, Vivo, Physicswallah, and Cosco grow their revenues via WhatsApp.
- Enabling 100,000+ Businesses with WhatsApp Engagement & Marketing
- 400+ crore WhatsApp messages exchanged between businesses and users via AiSensy per year
- Working with top brands like Delhi Transport Corporation, Vivo, Physicswallah & more
- High Impact as Businesses drive 25-80% Revenues using AiSensy Platform
- Mission-Driven and Growth Stage Startup backed by Marsshot.vc, Bluelotus.vc & 50+ Angel Investors
Now, we’re looking for a DevOps Engineer to help scale our infrastructure and optimize performance for millions of users. 🚀
What You’ll Do (Key Responsibilities)
🔹 CI/CD & Automation:
- Implement, manage, and optimize CI/CD pipelines using AWS CodePipeline, GitHub Actions, or Jenkins.
- Automate deployment processes to improve efficiency and reduce downtime.
🔹 Infrastructure Management:
- Use Terraform, Ansible, Chef, Puppet, or Pulumi to manage infrastructure as code.
- Deploy and maintain Dockerized applications on Kubernetes clusters for scalability.
🔹 Cloud & Security:
- Work extensively with AWS (Preferred) or other cloud platforms to build and maintain cloud infrastructure.
- Optimize cloud costs and ensure security best practices are in place.
🔹 Monitoring & Troubleshooting:
- Set up and manage monitoring tools like CloudWatch, Prometheus, Datadog, New Relic, or Grafana to track system performance and uptime.
- Proactively identify and resolve infrastructure-related issues.
🔹 Scripting & Automation:
- Use Python or Bash scripting to automate repetitive DevOps tasks.
- Build internal tools for system health monitoring, logging, and debugging.
What We’re Looking For (Must-Have Skills)
✅ Version Control: Proficiency in Git (GitLab / GitHub / Bitbucket)
✅ CI/CD Tools: Hands-on experience with AWS CodePipeline, GitHub Actions, or Jenkins
✅ Infrastructure as Code: Strong knowledge of Terraform, Ansible, Chef, or Pulumi
✅ Containerization & Orchestration: Experience with Docker & Kubernetes
✅ Cloud Expertise: Hands-on experience with AWS (Preferred) or other cloud providers
✅ Monitoring & Alerting: Familiarity with CloudWatch, Prometheus, Datadog, or Grafana
✅ Scripting Knowledge: Python or Bash for automation
Bonus Skills (Good to Have, Not Mandatory)
➕ AWS Certifications: Solutions Architect, DevOps Engineer, Security, Networking
➕ Experience with Microsoft/Linux/F5 Technologies
➕ Hands-on knowledge of Database servers
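Internal tooling for "system health monitoring, logging, and debugging" often starts as simple log triage. A small sketch, assuming a hypothetical `<LEVEL> <service> <message>` log-line format (not any particular logging standard):

```python
from collections import Counter

def error_counts(lines: list[str]) -> Counter:
    """Count ERROR lines per service, given '<LEVEL> <service> <message>' lines."""
    counts = Counter()
    for line in lines:
        parts = line.split(maxsplit=2)
        if len(parts) >= 2 and parts[0] == "ERROR":
            counts[parts[1]] += 1
    return counts

# Invented sample log lines for the sketch.
logs = [
    "INFO api request served",
    "ERROR worker queue timeout",
    "ERROR api upstream 502",
    "ERROR worker queue timeout",
]
print(error_counts(logs).most_common(1))  # [('worker', 2)]
```

A script like this, fed from `kubectl logs` or CloudWatch exports, is a typical first step before graduating to a proper Datadog/Prometheus dashboard.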
Strong proficiency in at least one major cloud platform, with GCP as the primary and Azure as the secondary preference; proficiency in all three (GCP, Azure, AWS) is a significant plus.
· Design, develop, and maintain cloud-based applications and infrastructure across various cloud platforms.
· Select and configure appropriate cloud services based on specific project requirements and constraints.
· Implement infrastructure automation with tools like Terraform and Ansible.
· Write clean, efficient, and well-documented code using various programming languages. (Python (Required), Knowledge of Java, C#, JavaScript is a plus).
· Implement RESTful APIs and microservices architectures.
· Utilize DevOps practices for continuous integration and continuous delivery (CI/CD).
· Design, configure, and manage scalable and secure cloud infrastructure for MLOps.
· Monitor and optimize cloud resources for performance and cost efficiency.
· Implement security best practices throughout the development lifecycle.
· Collaborate with developers, operations, and security teams to ensure seamless integration and successful deployments.
· Stay up-to-date on the latest cloud technologies, MLOps tools, and trends.
· Strong analytical and problem-solving skills.
Looking for fresher developers.
Responsibilities:
- Implement integrations requested by customers
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
Requirements and skill:
Experience in a DevOps Engineer or similar software engineering role
Good knowledge of Terraform and Kubernetes
Working knowledge of AWS and Google Cloud
You can directly contact me on nine three one six one two zero one three two

Job Title : Senior Consultant (Java / NodeJS + Temporal)
Experience : 5 to 12 Years
Location : Bengaluru, Chennai, Hyderabad, Pune, Mumbai, Gurugram, Coimbatore
Work Mode : Remote (Must be open to travel for occasional team meetups)
Notice Period : Immediate Joiners or Serving Notice
Interview Process :
- R1 : Tech Interview (60 mins)
- R2 : Technical Interview
- R3 : (Optional) Interview with Client
Job Summary :
We are seeking a Senior Backend Consultant with strong hands-on expertise in Temporal (BPM/Workflow Engine) and either Node.js or Java.
The ideal candidate will have experience in designing and developing microservices and process-driven applications, as well as orchestrating complex workflows using Temporal.io.
You will work on high-scale systems, collaborating closely with cross-functional teams.
Mandatory Skills :
Temporal.io, Node.js (or Java), React.js, Keycloak IAM, PostgreSQL, Terraform, Kubernetes, Azure, Jest, OpenAPI
Key Responsibilities :
- Design and implement scalable backend services using Node.js or Java.
- Build and manage complex workflow orchestrations using Temporal.io.
- Integrate with IAM solutions like Keycloak for role-based access control.
- Work with React (v17+), TypeScript, and component-driven frontend design.
- Use PostgreSQL for structured data persistence and optimized queries.
- Manage infrastructure using Terraform and orchestrate via Kubernetes.
- Leverage Azure Services like Blob Storage, API Gateway, and AKS.
- Write and maintain API documentation using Swagger/Postman/Insomnia.
- Conduct unit and integration testing using Jest.
- Participate in code reviews and contribute to architectural decisions.
Must-Have Skills :
- Temporal.io – BPMN modeling, external task workers, Operate, Tasklist
- Node.js + TypeScript (preferred) or strong Java experience
- React.js (v17+) and component-driven UI development
- Keycloak IAM, PostgreSQL, and modern API design
- Infrastructure automation with Terraform, Kubernetes
- Experience using GitFlow, OpenAPI, and Jest for testing
Nice-to-Have Skills :
- Blockchain integration experience for secure KYC/identity flows
- Custom Camunda Connectors or exporter plugin development
- CI/CD experience using Azure DevOps or GitHub Actions
- Identity-based task completion authorization enforcement
Job Role : Azure DevSecOps Engineer (Security-Focused)
Experience : 12 to 18 Years
Location : Preferably Delhi NCR (Hybrid); Remote possible with 1–2 office visits per quarter (Gurgaon)
Joining Timeline : Max 45 days (Buyout option available)
Work Mode : Full-time | 5 Days Working
About the Role :
We are looking for a highly experienced Azure DevSecOps Engineer with a strong focus on cloud security practices.
This role is 60–70% security-driven, involving threat modeling, secure cloud architecture, and infrastructure security on Azure using Terraform.
Key Responsibilities :
- Architect and maintain secure, scalable Azure cloud infrastructure using Terraform.
- Implement security best practices : IAM, threat modeling, network security, data protection, and compliance (e.g., GDPR).
- Build CI/CD pipelines and automate deployments using Azure DevOps and Jenkins; monitor with Prometheus.
- Monitor, analyze, and proactively improve security posture.
- Collaborate with global teams to ensure secure design, development, and operations.
- Stay updated on cloud security trends and lead mitigation efforts.
Mandatory Skills :
Azure, Terraform, DevSecOps, Cloud Security, Threat Modelling, IAM, CI/CD (Azure DevOps), Docker, Kubernetes, Prometheus, Infrastructure as Code (IaC), Compliance Frameworks (GDPR)
Preferred Certifications :
Certified DevSecOps Professional (CDP), Microsoft Azure Certifications
We are seeking an experienced and passionate Cloud and DevOps Trainer to join our training and development team. The trainer will be responsible for delivering high-quality, hands-on training in Cloud technologies (such as AWS, Azure, or GCP) and DevOps tools and practices to students or working professionals.
About the Role:
We are looking for a skilled AWS DevOps Engineer to join our Cloud Operations team in Bangalore. This hybrid role is ideal for someone with hands-on experience in AWS and a strong background in application migration from on-premises to cloud environments. You'll play a key role in driving cloud adoption, optimizing infrastructure, and ensuring seamless cloud operations.
Key Responsibilities:
- Manage and maintain AWS cloud infrastructure and services.
- Lead and support application migration projects from on-prem to cloud.
- Automate infrastructure provisioning using Infrastructure as Code (IaC) tools.
- Monitor cloud environments and optimize cost, performance, and reliability.
- Collaborate with development, operations, and security teams to implement DevOps best practices.
- Troubleshoot and resolve infrastructure and deployment issues.
Required Skills:
- 3–5 years of experience in an AWS cloud environment.
- Proven experience with on-premises to cloud application migration.
- Strong understanding of AWS core services (EC2, VPC, S3, IAM, RDS, etc.).
- Solid scripting skills (Python, Bash, or similar).
Good to Have:
- Experience with Terraform for Infrastructure as Code.
- Familiarity with Kubernetes for container orchestration.
- Exposure to CI/CD tools like Jenkins, GitLab, or AWS CodePipeline.
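In practice, the "solid scripting skills" listed above often mean writing small operational utilities like the following minimal sketch. The function name, default path, and threshold are illustrative assumptions, not taken from any listing:

```python
import shutil

def check_disk_usage(path="/", threshold_pct=80.0):
    """Return (used_pct, alert) for a mount point.

    The 80% threshold is a placeholder; real monitoring scripts
    would read it from configuration or an alerting policy.
    """
    usage = shutil.disk_usage(path)          # total/used/free in bytes
    used_pct = usage.used / usage.total * 100
    return used_pct, used_pct > threshold_pct
```

A script like this would typically be wired into cron or a monitoring agent rather than run by hand.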
Required Skills:
- Experience in systems administration, SRE or DevOps focused role
- Experience in handling production support (on-call)
- Good understanding of the Linux operating system and networking concepts.
- Demonstrated competency with the following AWS services: ECS, EC2, EBS, EKS, S3, RDS, ELB, IAM, Lambda.
- Experience with Docker containers and containerization concepts
- Experience with managing and scaling Kubernetes clusters in a production environment
- Experience building scalable infrastructure in AWS with Terraform.
- Strong knowledge of protocols such as HTTP/HTTPS, SMTP, DNS, and LDAP
- Experience monitoring production systems
- Expertise in applying automation / DevOps principles, experience with operational tools, and the ability to apply best practices for infrastructure and software deployment (Ansible).
- HAProxy, Nginx, SSH, MySQL configuration and operation experience
- Ability to work seamlessly with software developers, QA, project managers, and business development
- Ability to produce and maintain written documentation
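As context for the Terraform requirements that recur across these listings, infrastructure-as-code work starts from declarative resource definitions. The following is a minimal sketch only; the provider region, bucket name, and tags are placeholder assumptions, not real infrastructure:

```hcl
# Minimal illustrative Terraform configuration: one tagged S3 bucket.
provider "aws" {
  region = "ap-south-1"            # placeholder region
}

resource "aws_s3_bucket" "app_logs" {
  bucket = "example-app-logs"      # placeholder name; must be globally unique
  tags = {
    Environment = "dev"
    ManagedBy   = "terraform"
  }
}
```

Running `terraform init`, `terraform plan`, and `terraform apply` against a definition like this is the day-to-day workflow behind phrases such as "building scalable infrastructure in AWS with Terraform."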