50+ Terraform Jobs in Bangalore (Bengaluru) | Terraform Job openings in Bangalore (Bengaluru)
Apply to 50+ Terraform Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Terraform Job opportunities across top companies like Google, Amazon & Adobe.
Drive the design, automation, and reliability of Albert Invent’s core platform to support scalable, high-performance AI applications.
You will partner closely with Product Engineering and SRE teams to ensure security, resiliency, and developer productivity while owning end-to-end service operability.
Key Responsibilities
- Own the design, reliability, and operability of Albert’s mission-critical platform.
- Work closely with Product Engineering and SRE to build scalable, secure, and high-performance services.
- Plan and deliver core platform capabilities that improve developer velocity, system resilience, and scalability.
- Maintain a deep understanding of microservices topology, dependencies, and behavior.
- Act as the technical authority for performance, reliability, and availability across services.
- Drive automation and orchestration across infrastructure and operations.
- Serve as the final escalation point for complex or undocumented production issues.
- Lead root-cause analysis, mitigation strategies, and long-term system improvements.
- Mentor engineers in building robust, automated, and production-grade systems.
- Champion best practices in SRE, reliability, and platform engineering.
Must-Have Requirements
- Bachelor’s degree in Computer Science, Engineering, or equivalent practical experience.
- 4+ years of strong backend coding in Python or Node.js.
- 4+ years of overall software engineering experience, including 2+ years in an SRE / automation-focused role.
- Strong hands-on experience with Infrastructure as Code (Terraform preferred).
- Deep experience with AWS cloud infrastructure and distributed systems (microservices, APIs, service-to-service communication).
- Experience with observability systems – logs, metrics, and tracing.
- Experience using CI/CD pipelines (e.g., CircleCI).
- Performance testing experience using K6 or similar tools.
- Strong focus on automation, standards, and operational excellence.
- Experience building low-latency APIs (< 200ms response time).
- Ability to work in fast-paced, high-ownership environments.
- Proven ability to lead technically, mentor engineers, and influence engineering quality.
Good-to-Have Skills
- Kubernetes and container orchestration experience.
- Observability tools such as Prometheus, Grafana, OpenTelemetry, Datadog.
- Experience building Internal Developer Platforms (IDPs) or reusable engineering frameworks.
- Exposure to ML infrastructure or data engineering pipelines.
- Experience working in compliance-driven environments (SOC2, HIPAA, etc.).
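For candidates gauging the Infrastructure as Code expectation above, a minimal Terraform sketch of the kind of AWS provisioning involved (the provider version, region, and resource names are illustrative assumptions, not part of the posting):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1" # Mumbai; hypothetical choice
}

# A tagged S3 bucket managed declaratively, so changes go through
# plan/apply and code review rather than the console.
resource "aws_s3_bucket" "artifacts" {
  bucket = "example-platform-artifacts" # hypothetical name

  tags = {
    Team        = "platform"
    ManagedBy   = "terraform"
    Environment = "staging"
  }
}
```

The point is the workflow (versioned, reviewable infrastructure changes), not the specific resource.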
JOB DETAILS:
- Job Title: Senior DevOps Engineer 2
- Industry: Ride-hailing
- Experience: 5-7 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations; Kubernetes & Container Orchestration; Monitoring, Reliability & Observability; Proficiency with Terraform, Ansible, etc.; Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidate must come from a product-based or scalable app-based startup, with experience handling large-scale production traffic.
2. Minimum 5 years of experience working as a DevOps/Infrastructure Consultant.
3. Own end-to-end infrastructure from non-prod to prod environments, including self-managed databases.
4. Candidate must have experience performing database migrations from scratch.
5. Must have a firm hold on the container orchestration tool Kubernetes.
6. Must have expertise in configuration management and IaC tools like Ansible, Terraform, and Chef/Puppet.
7. Working understanding of programming languages like Go, Python, and Java.
8. Working experience with databases like MongoDB, Redis, Cassandra, Elasticsearch, and Kafka.
9. Working experience on a cloud platform: AWS.
10. Candidate should have a minimum of 1.5 years of tenure per organization and a clear reason for relocation.
Description
Job Summary:
As a DevOps Engineer at the company, you will build and operate infrastructure at scale, design and implement tools that enable product teams to build and deploy their services independently, improve observability across the board, and design for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, this role is a strong fit.
Job Responsibilities:
● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
● Codify our infrastructure
● Do what it takes to keep the uptime above 99.99%
● Understand the bigger picture and sail through the ambiguities
● Scale technology considering cost and observability and manage end-to-end processes
● Understand DevOps philosophy and evangelize the principles across the organization
● Communicate and collaborate across teams to break down silos
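The 99.99% uptime target above implies a concrete error budget; a quick back-of-the-envelope calculation (plain arithmetic, not from the posting):

```python
# Downtime allowed by an availability target over a given window.
def allowed_downtime_minutes(availability: float, window_days: int = 365) -> float:
    """Minutes of downtime permitted while still meeting the target."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - availability)

# 99.99% ("four nines") over a year: roughly 52.6 minutes.
print(round(allowed_downtime_minutes(0.9999), 1))      # 52.6
# Over a 30-day month: roughly 4.3 minutes.
print(round(allowed_downtime_minutes(0.9999, 30), 1))  # 4.3
```

In other words, "above 99.99%" leaves under an hour of total downtime per year, which is why the posting leans so heavily on automation and observability.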
Job Requirements:
● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience
● Minimum 5 years of experience working as a DevOps/Infrastructure Consultant
● Must have a firm hold on the container orchestration tool Kubernetes
● Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet
● Strong problem-solving skills, and ability to write scripts using any scripting language
● Working understanding of programming languages like Go, Python, and Java
● Comfortable working with databases like MongoDB, Redis, Cassandra, Elasticsearch, and Kafka
What’s there for you?
The company's team handles everything in-house: infra, tooling, and a fleet of self-managed databases. Highlights include:
● 150+ microservices with an event-driven architecture across multiple tech stacks (Golang, Java, Node.js)
● More than 100,000 requests per second on our edge gateways
● ~20,000 events per second on self-managed Kafka
● Hundreds of TB of data on self-managed databases
● Hundreds of real-time continuous deployments to production
● Self-managed infra supporting 100% OSS
JOB DETAILS:
- Job Title: Lead DevOps Engineer
- Industry: Ride-hailing
- Experience: 6-9 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations; Kubernetes & Container Orchestration; Monitoring, Reliability & Observability; Proficiency with Terraform, Ansible, etc.; Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidate must come from a product-based or scalable app-based startup, with experience handling large-scale production traffic.
2. Minimum 6 years of experience working as a DevOps/Infrastructure Consultant.
3. Candidate must have 2 years of experience as a lead (managing a team of at least 3 to 4 members).
4. Own end-to-end infrastructure from non-prod to prod environments, including self-managed databases.
5. Candidate must have hands-on experience performing database migrations from scratch.
6. Must have a firm hold on the container orchestration tool Kubernetes.
7. Should have expertise in configuration management and IaC tools like Ansible, Terraform, and Chef/Puppet.
8. Working understanding of programming languages like Go, Python, and Java.
9. Working experience with databases like MongoDB, Redis, Cassandra, Elasticsearch, and Kafka.
10. Working experience on a cloud platform: AWS.
11. Candidate should have a minimum of 1.5 years of tenure per organization and a clear reason for relocation.
Description
Job Summary:
As a DevOps Engineer at the company, you will build and operate infrastructure at scale, design and implement tools that enable product teams to build and deploy their services independently, improve observability across the board, and design for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, this role is a strong fit.
Job Responsibilities:
● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
● Codify our infrastructure
● Do what it takes to keep the uptime above 99.99%
● Understand the bigger picture and sail through the ambiguities
● Scale technology considering cost and observability and manage end-to-end processes
● Understand DevOps philosophy and evangelize the principles across the organization
● Communicate and collaborate across teams to break down silos
Job Requirements:
● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience
● Minimum 6 years of experience working as a DevOps/Infrastructure Consultant
● Must have a firm hold on the container orchestration tool Kubernetes
● Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet
● Strong problem-solving skills, and ability to write scripts using any scripting language
● Working understanding of programming languages like Go, Python, and Java
● Comfortable working with databases like MongoDB, Redis, Cassandra, Elasticsearch, and Kafka
What’s there for you?
The company's team handles everything in-house: infra, tooling, and a fleet of self-managed databases. Highlights include:
● 150+ microservices with an event-driven architecture across multiple tech stacks (Golang, Java, Node.js)
● More than 100,000 requests per second on our edge gateways
● ~20,000 events per second on self-managed Kafka
● Hundreds of TB of data on self-managed databases
● Hundreds of real-time continuous deployments to production
● Self-managed infra supporting 100% OSS
JOB DETAILS:
- Job Title: Senior DevOps Engineer 1
- Industry: Ride-hailing
- Experience: 4-6 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations; Kubernetes & Container Orchestration; Monitoring, Reliability & Observability; Proficiency with Terraform, Ansible, etc.; Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidate must come from a product-based or scalable app-based startup, with experience handling large-scale production traffic.
2. Candidate must have strong Linux expertise with hands-on production troubleshooting and working knowledge of databases and middleware (Mongo, Redis, Cassandra, Elasticsearch, Kafka).
3. Candidate must have solid experience with Kubernetes.
4. Candidate should have strong knowledge of configuration management tools like Ansible, Terraform, and Chef/Puppet; familiarity with Prometheus, Grafana, etc., is a plus.
5. Candidate must be an individual contributor with strong ownership.
6. Candidate must have hands-on experience with database migrations and observability tools such as Prometheus and Grafana.
7. Candidate must have working knowledge of Go/Python and Java.
8. Candidate should have working experience on a cloud platform: AWS.
9. Candidate should have a minimum of 1.5 years of tenure per organization and a clear reason for relocation.
Description
Job Summary:
As a DevOps Engineer at the company, you will build and operate infrastructure at scale, design and implement tools that enable product teams to build and deploy their services independently, improve observability across the board, and design for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, this role is a strong fit.
Job Responsibilities:
- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs.
- Understanding the needs of stakeholders and conveying this to developers.
- Working on ways to automate and improve development and release processes.
- Identifying technical problems and developing software updates and ‘fixes’.
- Working with software developers to ensure that development follows established processes and works as intended.
- Do what it takes to keep the uptime above 99.99%.
- Understand DevOps philosophy and evangelize the principles across the organization.
- Communicate and collaborate across teams to break down silos.
Job Requirements:
- B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience.
- Minimum 4 years of experience working as a DevOps/Infrastructure Consultant.
- Strong background in operating systems like Linux.
- Understands the container orchestration tool Kubernetes.
- Proficient knowledge of configuration management tools like Ansible, Terraform, and Chef/Puppet; familiarity with Prometheus, Grafana, etc., is a plus.
- Problem-solving attitude, and ability to write scripts using any scripting language.
- Working understanding of programming languages like Go, Python, and Java.
- Basic understanding of databases and middleware like MongoDB, Redis, Cassandra, Elasticsearch, and Kafka.
- Should be able to take ownership of tasks and act responsibly.
- Good communication skills.
About the role
We are looking for an experienced AWS Cloud Engineer with strong Java and Python/Golang expertise to design, modernize, and migrate applications and infrastructure to AWS. The ideal candidate will have hands-on experience with cloud-native development, Java application modernization, and end-to-end AWS migrations, with a strong focus on scalability, security, performance, and cost optimization.
This role involves working across application migration, cloud-native development, and infrastructure automation, collaborating closely with DevOps, security, and product teams.
Key Responsibilities
- Lead and execute application and infrastructure migrations from on-premises or other cloud platforms to AWS
- Assess legacy Java-based applications and define migration strategies (rehost, re-platform, refactor)
- Design and develop cloud-native applications and services using Java, Python, or Golang
- Modify and optimize applications for AWS readiness and scalability
- Design and implement AWS-native architectures ensuring high availability, security, and cost efficiency
- Build and maintain serverless and containerized solutions on AWS
- Develop RESTful APIs and microservices for system integrations
- Implement Infrastructure as Code (IaC) using CloudFormation, Terraform, or AWS CDK
- Support and improve CI/CD pipelines for deployment and migration activities
- Plan and execute data migration, backup, and disaster recovery strategies
- Monitor, troubleshoot, and resolve production and migration-related issues with minimal downtime
- Ensure adherence to AWS security best practices, governance, and compliance standards
- Create and maintain architecture diagrams, runbooks, and migration documentation
- Perform post-migration validation, performance tuning, and optimization
Required Skills & Experience
- 5–10 years of overall IT experience with strong AWS exposure
- Hands-on experience with AWS services, including:
- EC2, Lambda, S3, RDS
- ECS / EKS, API Gateway
- VPC, Subnets, Route Tables, Security Groups
- IAM, Load Balancers (ALB/NLB), Auto Scaling
- CloudWatch, SNS, CloudTrail
- Strong development experience in Java (8+), Python, or Golang
- Experience migrating Java applications (Spring / Spring Boot preferred)
- Strong understanding of cloud-native, serverless, and microservices architectures
- Experience with CI/CD tools (Jenkins, GitHub Actions, GitLab CI, etc.)
- Hands-on experience with Linux/UNIX environments
- Proficiency with Git-based version control
- Strong troubleshooting, analytical, and problem-solving skills
Good to Have / Nice to Have
- Experience with Docker and Kubernetes (EKS)
- Knowledge of application modernization patterns
- Experience with Terraform, CloudFormation, or AWS CDK
- Database experience: MySQL, PostgreSQL, Oracle, DynamoDB
- Understanding of the AWS Well-Architected Framework
- Experience in large-scale or enterprise migration programs
- AWS Certifications (Developer Associate, Solutions Architect, or Professional)
Education
- Bachelor’s degree in Computer Science, Engineering, or a related field
We are looking for a DevOps Engineer with hands-on experience in managing production infrastructure using AWS, Kubernetes, and Terraform. The ideal candidate will have exposure to CI/CD tools and queueing systems, along with a strong ability to automate and optimize workflows.
Responsibilities:
* Manage and optimize production infrastructure on AWS, ensuring scalability and reliability.
* Deploy and orchestrate containerized applications using Kubernetes.
* Implement and maintain infrastructure as code (IaC) using Terraform.
* Set up and manage CI/CD pipelines using tools like Jenkins or Chef to streamline deployment processes.
* Troubleshoot and resolve infrastructure issues to ensure high availability and performance.
* Collaborate with cross-functional teams to define technical requirements and deliver solutions.
* Nice-to-have: Manage queueing systems like Amazon SQS, Kafka, or RabbitMQ.
Requirements:
* 2+ years of experience with AWS, including practical exposure to its services in production environments.
* Demonstrated expertise in Kubernetes for container orchestration.
* Proficiency in using Terraform for managing infrastructure as code.
* Exposure to at least one CI/CD tool, such as Jenkins or Chef.
* Nice-to-have: Experience managing queueing systems like SQS, Kafka, or RabbitMQ.
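As a reference point for the Kubernetes expectation above, a minimal Deployment manifest of the sort such roles routinely manage (the image, replica count, and resource limits are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service        # hypothetical service name
  labels:
    app: api-service
spec:
  replicas: 3              # spread across nodes for availability
  selector:
    matchLabels:
      app: api-service
  template:
    metadata:
      labels:
        app: api-service
    spec:
      containers:
        - name: api-service
          image: registry.example.com/api-service:1.0.0  # placeholder image
          ports:
            - containerPort: 8080
          resources:
            requests:       # scheduler guarantees
              cpu: "250m"
              memory: "256Mi"
            limits:         # hard caps to protect neighbors
              cpu: "500m"
              memory: "512Mi"
```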
Who We Are
We're a DevOps and Automation company based in Bengaluru, India. We have delivered over 170 automation projects for 65+ global clients, including Fortune 500 enterprises that trust us with mission-critical infrastructure and operations. We're bootstrapped, profitable, and scaling quickly by consistently solving complex engineering challenges with precision and reliability.
What We Value
- Ownership: As part of our team, you're responsible for strategy and outcomes, not just completing assigned tasks.
- High Velocity: We move fast, iterate faster, and amplify our impact, always prioritizing quality over speed.
Who We Seek
We are hiring DevOps Engineers (6 months - 1 year experience) to join our DevOps team. You will work on infrastructure automation, CI/CD pipelines, cloud deployments, container orchestration, and system reliability.
This role is ideal for someone who wants to work with modern DevOps tooling and contribute to high-impact engineering decisions.
🌏 Job Location: Bengaluru (Work From Office)
What You Will Be Doing
CI/CD Pipeline Management
- Design, implement, and maintain efficient CI/CD pipelines using Jenkins, GitLab CI, Azure DevOps, or similar tools.
- Automate build, test, and deployment processes to increase delivery speed and reliability.
Infrastructure as Code (IaC)
- Provision and manage infrastructure on AWS, Azure, or GCP using Terraform, CloudFormation, or Ansible.
- Maintain scalable, secure, and cost-optimized environments.
Containerization & Orchestration
- Build and manage Docker-based environments.
- Deploy and scale workloads using Kubernetes.
Monitoring & Alerting
- Implement monitoring, logging, and alerting systems using Prometheus, Grafana, ELK Stack, Datadog, or similar.
- Develop dashboards and alerts to detect issues proactively.
System Reliability & Performance
- Implement systems for high availability, disaster recovery, and fault tolerance.
- Troubleshoot and optimize infrastructure performance.
Scripting & Automation
- Write automation scripts in Python, Bash, or Shell to streamline operations.
- Automate repetitive workflows to reduce manual intervention.
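To make the "automate repetitive workflows" bullet concrete, a small self-contained Python sketch of the kind of log-triage script such roles produce (the log line format is an assumption for illustration):

```python
import re
from collections import Counter

# Count log lines by severity so recurring errors surface quickly.
# Assumes lines like "2024-05-01 12:00:00 ERROR upstream timeout".
LEVEL_RE = re.compile(r"\b(DEBUG|INFO|WARN|ERROR|FATAL)\b")

def summarize_levels(lines):
    """Return a Counter of severity levels found in the given log lines."""
    counts = Counter()
    for line in lines:
        match = LEVEL_RE.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

sample = [
    "2024-05-01 12:00:00 INFO service started",
    "2024-05-01 12:00:05 ERROR upstream timeout",
    "2024-05-01 12:00:06 ERROR upstream timeout",
    "2024-05-01 12:00:09 WARN retry scheduled",
]
print(summarize_levels(sample))  # Counter({'ERROR': 2, 'INFO': 1, 'WARN': 1})
```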
Collaboration & Best Practices
- Work closely with Development, QA, and Security teams to embed DevOps best practices into the SDLC.
- Follow security standards for deployments and infrastructure.
- Work efficiently with Unix/Linux systems and understand core networking concepts (DNS, DHCP, NAT, VPN, TCP/IP).
What We’re Looking For
- Strong understanding of Linux distributions (Ubuntu, CentOS, RHEL) and Windows environments.
- Proficiency with Git and experience using GitHub, GitLab, or Bitbucket.
- Ability to write automation scripts using Bash/Shell or Python.
- Basic knowledge of relational databases like MySQL or PostgreSQL.
- Familiarity with web servers such as NGINX or Apache2.
- Experience working with AWS, Azure, GCP, or DigitalOcean.
- Foundational understanding of Ansible for configuration management.
- Basic knowledge of Terraform or CloudFormation for IaC.
- Hands-on experience with Jenkins or GitLab CI/CD pipelines.
- Strong knowledge of Docker for containerization.
- Basic exposure to Kubernetes for orchestration.
- Familiarity with at least one programming language (Java, Node.js, or Python).
Benefits
🤝 Work directly with founders and engineering leaders.
💪 Drive projects that create real business impact, not busywork.
💡 Gain practical, industry-relevant skills you won’t learn in college.
🚀 Accelerate your growth by working on meaningful engineering challenges.
📈 Learn continuously with mentorship and structured development opportunities.
🤗 Be part of a collaborative, high-energy workplace that values innovation.
Who We Are
We're a DevOps and Automation company based in Bengaluru, India. We have successfully delivered over 170 automation projects for 65+ global businesses, including Fortune 500 companies that trust us with their mission-critical infrastructure and operations. We're bootstrapped, profitable, and scaling quickly by consistently solving high-impact engineering problems.
What We Value
Ownership: You take accountability for outcomes, not just tasks.
High Velocity: We iterate fast, learn constantly, and deliver with precision.
Who We Seek
We are looking for a DevOps Intern to join our DevOps team and gain hands-on experience working with real infrastructure, automation pipelines, and deployment environments. You will support CI/CD processes, cloud environments, monitoring, and system reliability while learning industry-standard tools and practices.
We’re seeking someone who is technically curious, eager to learn, and driven to build reliable systems in a fast-paced engineering environment.
🌏 Job Location: Bengaluru (Work From Office)
What You Will Be Doing
- Assist in deploying product updates, monitoring system performance, and identifying production issues.
- Contribute to building and improving CI/CD pipelines for automated deployments.
- Support the provisioning, configuration, and maintenance of cloud infrastructure.
- Work with tools like Docker, Jenkins, Git, and monitoring systems to streamline workflows.
- Help automate recurring operational processes using scripting and DevOps tools.
- Participate in backend integrations aligned with product or customer requirements.
- Collaborate with developers, QA, and operations to improve reliability and scalability.
- Gain exposure to containerization, infrastructure-as-code, and cloud platforms.
- Document processes, configurations, and system behaviours to support team efficiency.
- Learn and apply DevOps best practices in real-world environments.
What We’re Looking For
- Hands-on experience or coursework with Docker, Linux, or cloud fundamentals.
- Familiarity with Jenkins, Git, or basic CI/CD concepts.
- Basic understanding of AWS, Azure, or Google Cloud environments.
- Exposure to configuration management tools like Ansible, Puppet, or similar.
- Interest in Kubernetes, Terraform, or infrastructure-as-code practices.
- Ability to write or modify simple shell or Python scripts.
- Strong analytical and troubleshooting mindset.
- Good communication skills with the ability to articulate technical concepts clearly.
- Eagerness to learn, take initiative, and adapt in a fast-moving engineering environment.
- Attention to detail and a commitment to accuracy and reliability.
Benefits
🤝 Work directly with founders and senior engineers.
💪 Contribute to live projects that impact real customers and systems.
💡 Learn tools and practices that engineering programs rarely teach.
🚀 Accelerate your growth through real-world problem solving.
📈 Build a strong DevOps foundation with continuous learning opportunities.
🤗 Thrive in a collaborative environment that encourages experimentation and growth.
SENIOR INFORMATION SECURITY ENGINEER (DEVSECOPS)
Key Skills: Software Development Life Cycle (SDLC), CI/CD
About Company: Consumer Internet / E-Commerce
Company Size: Mid-Sized
Experience Required: 6 - 10 years
Working Days: 5 days/week
Office Location: Bengaluru [Karnataka]
Review Criteria:
Mandatory:
- Strong DevSecOps profile
- Must have 5+ years of hands-on experience in Information Security, with a primary focus on cloud security across AWS, Azure, and GCP environments.
- Must have strong practical experience working with Cloud Security Posture Management (CSPM) tools such as Prisma Cloud, Wiz, or Orca along with SIEM / IDS / IPS platforms
- Must have proven experience in securing Kubernetes and containerized environments, including image security, runtime protection, RBAC, and network policies.
- Must have hands-on experience integrating security within CI/CD pipelines using tools such as Snyk, GitHub Advanced Security, or equivalent security scanning solutions.
- Must have a solid understanding of core security domains, including network security, encryption, identity and access management, key management, and security governance, along with cloud-native security services like GuardDuty, Azure Security Center, etc.
- Must have practical experience with Application Security Testing tools including SAST, DAST, and SCA in real production environments
- Must have hands-on experience with security monitoring, incident response, alert investigation, root-cause analysis (RCA), and managing VAPT / penetration testing activities
- Must have experience securing infrastructure-as-code and cloud deployments using Terraform, CloudFormation, ARM, Docker, and Kubernetes
- Must have experience at B2B SaaS product companies.
- Must have working knowledge of globally recognized security frameworks and standards such as ISO 27001, NIST, and CIS with exposure to SOC2, GDPR, or HIPAA compliance environments
Preferred:
- Experience with DevSecOps automation, security-as-code, and policy-as-code implementations
- Exposure to threat intelligence platforms, cloud security monitoring, and proactive threat detection methodologies, including EDR / DLP or vulnerability management tools
- Must demonstrate strong ownership mindset, proactive security-first thinking, and ability to communicate risks in clear business language
Roles & Responsibilities:
We are looking for a Senior Information Security Engineer who can help protect our cloud infrastructure, applications, and data while enabling teams to move fast and build securely.
This role sits deep within our engineering ecosystem. You’ll embed security into how we design, build, deploy, and operate systems—working closely with Cloud, Platform, and Application Engineering teams. You’ll balance proactive security design with hands-on incident response, and help shape a strong, security-first culture across the organization.
If you enjoy solving real-world security problems, working close to systems and code, and influencing how teams build securely at scale, this role is for you.
What You’ll Do-
Cloud & Infrastructure Security:
- Design, implement, and operate cloud-native security controls across AWS, Azure, GCP, and Oracle.
- Strengthen IAM, network security, and cloud posture using services like GuardDuty, Azure Security Center and others.
- Partner with platform teams to secure VPCs, security groups, and cloud access patterns.
Application & DevSecOps Security:
- Embed security into the SDLC through threat modeling, secure code reviews, and security-by-design practices.
- Integrate SAST, DAST, and SCA tools into CI/CD pipelines.
- Secure infrastructure-as-code and containerized workloads using Terraform, CloudFormation, ARM, Docker, and Kubernetes.
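Securing containerized workloads as described above often starts with default-deny NetworkPolicy manifests; a sketch of the pattern (the namespace and labels are hypothetical):

```yaml
# Deny all ingress to every pod in the namespace by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments      # hypothetical namespace
spec:
  podSelector: {}          # empty selector = every pod in the namespace
  policyTypes:
    - Ingress
---
# Then explicitly allow only the gateway to reach the API pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-gateway
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments-api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api-gateway
      ports:
        - protocol: TCP
          port: 8080
```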
Security Monitoring & Incident Response:
- Monitor security alerts and investigate potential threats across cloud and application layers.
- Lead or support incident response efforts, root-cause analysis, and corrective actions.
- Plan and execute VAPT and penetration testing engagements (internal and external), track remediation, and validate fixes.
- Conduct red teaming activities and tabletop exercises to test detection, response readiness, and cross-team coordination.
- Continuously improve detection, response, and testing maturity.
Security Tools & Platforms:
- Manage and optimize security tooling including firewalls, SIEM, EDR, DLP, IDS/IPS, CSPM, and vulnerability management platforms.
- Ensure tools are well-integrated, actionable, and aligned with operational needs.
Compliance, Governance & Awareness:
- Support compliance with industry standards and frameworks such as SOC2, HIPAA, ISO 27001, NIST, CIS, and GDPR.
- Promote secure engineering practices through training, documentation, and ongoing awareness programs.
- Act as a trusted security advisor to engineering and product teams.
Continuous Improvement:
- Stay ahead of emerging threats, cloud vulnerabilities, and evolving security best practices.
- Continuously raise the bar on the company's security posture through automation and process improvement.
Endpoint Security (Secondary Scope):
- Provide guidance on endpoint security tooling such as SentinelOne and Microsoft Defender when required.
Ideal Candidate:
- Strong hands-on experience in cloud security across AWS and Azure.
- Practical exposure to CSPM tools (e.g., Prisma Cloud, Wiz, Orca) and SIEM / IDS / IPS platforms.
- Experience securing containerized and Kubernetes-based environments.
- Familiarity with CI/CD security integrations (e.g., Snyk, GitHub Advanced Security, or similar).
- Solid understanding of network security, encryption, identity, and access management.
- Experience with application security testing tools (SAST, DAST, SCA).
- Working knowledge of security frameworks and standards such as ISO 27001, NIST, and CIS.
- Strong analytical, troubleshooting, and problem-solving skills.
Nice to Have:
- Experience with DevSecOps automation and security-as-code practices.
- Exposure to threat intelligence and cloud security monitoring solutions.
- Familiarity with incident response frameworks and forensic analysis.
- Security certifications such as CISSP, CISM, CCSP, or CompTIA Security+.
Perks, Benefits and Work Culture:
A wholesome opportunity in a fast-paced environment where you can juggle multiple concepts while maintaining quality, interact and share your ideas, and learn continuously on the job. Work with a team of highly talented young professionals and enjoy the comprehensive benefits the company offers.
About Tarento:
Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions.
We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you’ll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.
Scope of Work:
- Support the migration of applications from Windows Server 2008 to Windows Server 2019 or 2022 in an IaaS environment.
- Migrate IIS websites, Windows Services, and related application components.
- Assist with migration considerations for SQL Server connections, instances, and basic data-related dependencies.
- Evaluate and migrate message queues (MSMQ or equivalent technologies).
- Document the existing environment, migration steps, and post-migration state.
- Work closely with DevOps, development, and infrastructure teams throughout the project.
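Documenting the existing environment before a migration like this is usually scripted. A hedged sketch, assuming site inventory is taken from an IIS `applicationHost.config`-style XML file (the sample XML below is illustrative, not from any real server):

```python
import xml.etree.ElementTree as ET

def list_site_bindings(config_xml: str) -> dict[str, list[str]]:
    """Map each IIS site name to its binding-information strings."""
    root = ET.fromstring(config_xml)
    result = {}
    for site in root.iter("site"):
        name = site.get("name", "<unnamed>")
        result[name] = [b.get("bindingInformation", "")
                        for b in site.iter("binding")]
    return result

# Illustrative applicationHost.config fragment
sample = """
<configuration>
  <system.applicationHost>
    <sites>
      <site name="LegacyApp">
        <bindings>
          <binding protocol="http" bindingInformation="*:80:legacy.example.local"/>
        </bindings>
      </site>
    </sites>
  </system.applicationHost>
</configuration>
"""
print(list_site_bindings(sample))  # -> {'LegacyApp': ['*:80:legacy.example.local']}
```

An inventory like this, captured before and after migration, gives a concrete basis for the "document the existing environment and post-migration state" deliverable.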
Required Skills & Experience:
- Strong hands-on experience with IIS administration, configuration, and application migration.
- Proven experience migrating workloads between Windows Server versions, ideally legacy to modern.
- Knowledge of Windows Services setup, configuration, and troubleshooting.
- Practical understanding of SQL Server (connection strings, service accounts, permissions).
- Experience with message queues (IBM MQ, MSMQ, or similar) and their migration considerations.
- Ability to identify migration risks, compatibility constraints, and remediation options.
- Strong troubleshooting and analytical skills.
- Familiarity with Microsoft technologies (.NET, etc.)
- Networking and Active Directory knowledge
Desirable / Nice-to-Have
- Exposure to CI/CD tools, especially TeamCity and Octopus Deploy.
- Familiarity with Azure services and related tools (Terraform, etc)
- PowerShell scripting for automation or configuration tasks.
- Understanding enterprise change management and documentation practices.
- Awareness of security best practices
Soft Skills
- Clear written and verbal communication.
- Ability to work independently while collaborating with cross-functional teams.
- Strong attention to detail and a structured approach to execution.
- A structured approach to troubleshooting.
- Willingness to learn.
Location & Engagement Details
We are looking for a Senior DevOps Consultant for an onsite role in Stockholm (Sundbyberg office). This opportunity is open to candidates currently based in Bengaluru who are willing to relocate to Sweden for the assignment.
The role will start with an initial 6-month onsite engagement, with the possibility of extension based on project requirements and performance.
ROLES AND RESPONSIBILITIES:
You will be responsible for architecting, implementing, and optimizing Dremio-based data Lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
IDEAL CANDIDATE:
- Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
PREFERRED:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
- Exposure to Snowflake, Databricks, or BigQuery environments.
- Experience in high-tech, manufacturing, or enterprise data modernization programs.
Exp: 7- 10 Years
CTC: up to 35 LPA
Skills:
- 6–10 years DevOps / SRE / Cloud Infrastructure experience
- Expert-level Kubernetes (networking, security, scaling, controllers)
- Terraform Infrastructure-as-Code mastery
- Hands-on Kafka production experience
- AWS cloud architecture and networking expertise
- Strong scripting in Python, Go, or Bash
- GitOps and CI/CD tooling experience
Key Responsibilities:
- Design highly available, secure cloud infrastructure supporting distributed microservices at scale
- Lead multi-cluster Kubernetes strategy optimized for GPU and multi-tenant workloads
- Implement Infrastructure-as-Code using Terraform across full infrastructure lifecycle
- Optimize Kafka-based data pipelines for throughput, fault tolerance, and low latency
- Deliver zero-downtime CI/CD pipelines using GitOps-driven deployment models
- Establish SRE practices with SLOs, p95 and p99 monitoring, and FinOps discipline
- Ensure production-ready disaster recovery and business continuity testing
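SLO checks on p95/p99 latency reduce to percentile math over request samples. A minimal sketch (the nearest-rank percentile method and the 300 ms target are assumptions for illustration; production systems would compute this from monitoring-system histograms rather than raw lists):

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: smallest value with at least p% of samples at or below it."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]

def slo_ok(latencies_ms: list[float], p: float, target_ms: float) -> bool:
    """True when the p-th percentile latency is within the SLO target."""
    return percentile(latencies_ms, p) <= target_ms

latencies = [120, 95, 310, 150, 88, 240, 400, 132, 175, 99]
print(percentile(latencies, 95))   # -> 400
print(slo_ok(latencies, 95, 300))  # -> False: p95 exceeds the 300 ms target
```

The same check, evaluated continuously against an error budget, is the usual starting point for the SRE practices the role describes.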
If interested, kindly share your updated resume at 82008 31681.
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry.
Key Responsibilities
CI/CD and Infrastructure Automation
- Design, implement, and maintain CI/CD pipelines to support fast and reliable releases
- Automate deployments using tools such as Terraform, Helm, and Kubernetes
- Improve build and release processes to support high-performance and low-latency trading applications
- Work efficiently with Linux/Unix environments
Cloud and On-Prem Infrastructure Management
- Deploy, manage, and optimize infrastructure on AWS, GCP, and on-premises environments
- Ensure system reliability, scalability, and high availability
- Implement Infrastructure as Code (IaC) to standardize and streamline deployments
Performance Monitoring and Optimization
- Monitor system performance and latency using Prometheus, Grafana, and ELK stack
- Implement proactive alerting and fault detection to ensure system stability
- Troubleshoot and optimize system components for maximum efficiency
Security and Compliance
- Apply DevSecOps principles to ensure secure deployment and access management
- Maintain compliance with financial industry regulations such as SEBI
- Conduct vulnerability assessments and maintain logging and audit controls
Required Skills and Qualifications
- 2+ years of experience as a DevOps Engineer in a software or trading environment
- Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD)
- Proficiency in cloud platforms such as AWS and GCP
- Hands-on experience with Docker and Kubernetes
- Experience with Terraform or CloudFormation for IaC
- Strong Linux administration and networking fundamentals (TCP/IP, DNS, firewalls)
- Familiarity with Prometheus, Grafana, and ELK stack
- Proficiency in scripting using Python, Bash, or Go
- Solid understanding of security best practices including IAM, encryption, and network policies
Good to Have (Optional)
- Experience with low-latency trading infrastructure or real-time market data systems
- Knowledge of high-frequency trading environments
- Exposure to FIX protocol, FPGA, or network optimization techniques
- Familiarity with Redis or Nginx for real-time data handling
Why Join Us?
- Work with a team that expects and delivers excellence.
- A culture where risk-taking is rewarded, and complacency is not.
- Limitless opportunities for growth—if you can handle the pace.
- A place where learning is currency, and outperformance is the only metric that matters.
- The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.
This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.
Role Summary
Our CloudOps/DevOps teams are distributed across India, Canada, and Israel.
As a Manager, you will lead teams of Engineers and champion configuration management, cloud technologies, and continuous improvement. The role involves close collaboration with global leaders to ensure our applications, infrastructure, and processes remain scalable, secure, and supportable. You will work closely with Engineers across Dev, DevOps, and DBOps to design and implement solutions that improve customer value, reduce costs, and eliminate toil.
Key Responsibilities
- Guide the professional development of Engineers and support teams in meeting business objectives
- Collaborate with leaders in Israel on priorities, architecture, delivery, and product management
- Build secure, scalable, and self-healing systems
- Manage and optimize deployment pipelines
- Triage and remediate production issues
- Participate in on-call escalations
Key Qualifications
- Bachelor’s in CS or equivalent experience
- 3+ years managing Engineering teams
- 8+ years as a Site Reliability or Platform Engineer
- 5+ years administering Linux and Windows environments
- 3+ years programming/scripting (Python, JavaScript, PowerShell)
- Strong experience with OS internals, virtualization, storage, networking, and firewalls
- Experience maintaining On-Prem (90%) and Cloud (10%) environments (AWS, GCP, Azure)
JD for Cloud engineer
Job Summary:
We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancing, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.
You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
- Architect, deploy, and maintain GCP cloud resources via Terraform or other automation.
- Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
- Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
- Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
2. Kubernetes & Container Orchestration
- Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
- Work with Helm charts for microservices deployments.
- Automate scaling, rolling updates, and zero-downtime deployments.
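Rolling updates on GKE are commonly driven by patching the Deployment's pod template, and with the official Kubernetes Python client the patch body is just a nested dict. A sketch of building such a patch (the container name, image, and annotation key are illustrative; applying it would go through `AppsV1Api.patch_namespaced_deployment`, omitted here so the sketch stays self-contained):

```python
from datetime import datetime, timezone

def rolling_update_patch(container: str, image: str) -> dict:
    """Strategic-merge patch that swaps a container image and stamps an annotation.

    Any change to the pod template triggers a rolling update under the
    Deployment's RollingUpdate strategy, giving zero-downtime deploys when
    readiness probes and maxUnavailable are configured sensibly.
    """
    return {
        "spec": {
            "template": {
                "metadata": {"annotations": {
                    # hypothetical annotation key, used only to force a rollout
                    "example.dev/restartedAt": datetime.now(timezone.utc).isoformat(),
                }},
                "spec": {"containers": [{"name": container, "image": image}]},
            }
        }
    }

patch = rolling_update_patch("web", "gcr.io/my-project/web:1.4.2")
print(patch["spec"]["template"]["spec"]["containers"][0]["image"])
```

The same dict shape works whether the patch is applied from a script, a CI job, or `kubectl patch deployment web --patch-file ...`.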
3. Serverless & Compute Services
- Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
- Optimize containerized applications running on Cloud Run for cost efficiency and performance.
4. CI/CD & DevOps Automation
- Design, implement, and manage CI/CD pipelines using Azure DevOps.
- Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting
- Integrate security and compliance checks into the DevOps workflow (DevSecOps).
Required Skills & Qualifications:
✔ Experience: 8+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.
Job Summary :
We are looking for a proactive and skilled Senior DevOps Engineer to join our team and play a key role in building, managing, and scaling infrastructure for high-performance systems. The ideal candidate will have hands-on experience with Kubernetes, Docker, Python scripting, cloud platforms, and DevOps practices around CI/CD, monitoring, and incident response.
Key Responsibilities :
- Design, build, and maintain scalable, reliable, and secure infrastructure on cloud platforms (AWS, GCP, or Azure).
- Implement Infrastructure as Code (IaC) using tools like Terraform, CloudFormation, or similar.
- Manage Kubernetes clusters, configuring namespaces, services, deployments, and autoscaling.
CI/CD & Release Management :
- Build and optimize CI/CD pipelines for automated testing, building, and deployment of services.
- Collaborate with developers to ensure smooth and frequent deployments to production.
- Manage versioning and rollback strategies for critical deployments.
Containerization & Orchestration using Kubernetes :
- Containerize applications using Docker, and manage them using Kubernetes.
- Write automation scripts using Python or Shell for infrastructure tasks, monitoring, and deployment flows.
- Develop utilities and tools to enhance operational efficiency and reliability.
Monitoring & Incident Management :
- Analyze system performance and implement infrastructure scaling strategies based on load and usage trends.
- Optimize application and system performance through proactive monitoring and configuration tuning.
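Load-based scaling decisions typically follow the same proportional rule the Kubernetes Horizontal Pod Autoscaler documents: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A small sketch of that formula with clamping to min/max bounds (the bounds are illustrative choices, not values from this posting):

```python
import math

def desired_replicas(current: int, metric: float, target: float,
                     min_r: int = 1, max_r: int = 20) -> int:
    """HPA-style proportional scaling decision, clamped to [min_r, max_r]."""
    raw = math.ceil(current * metric / target)
    return max(min_r, min(max_r, raw))

# 4 replicas at 90% CPU against a 60% target -> scale out to 6
print(desired_replicas(4, metric=90, target=60))  # -> 6
# 4 replicas at 30% CPU against a 60% target -> scale in to 2
print(desired_replicas(4, metric=30, target=60))  # -> 2
```

Running this rule against observed load trends (rather than instantaneous spikes) is what turns raw monitoring data into the scaling strategies the responsibilities above describe.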
Desired Skills and Experience :
- Experience Required - 8+ yrs.
- Hands-on experience with cloud services such as AWS (EKS, etc.).
- Ability to design sound cloud solutions.
- Strong Linux troubleshooting, shell scripting, Kubernetes, Docker, Ansible, and Jenkins skills.
- Ability to design and implement CI/CD pipelines following industry best practices using open-source tools.
- Uses knowledge and research to continuously modernize applications and infrastructure stacks.
- A team player and strong problem-solver who works well with a diverse team.
- Good communication skills.
Job Position: Lead II - Software Engineering
Domain: Information technology (IT)
Location: India - Thiruvananthapuram
Salary: Best in Industry
Job Positions: 1
Experience: 8 - 12 Years
Skills: .Net, Sql Azure, Rest Api, Vue.Js
Notice Period: Immediate – 30 Days
Job Summary:
We are looking for a highly skilled Senior .NET Developer with a minimum of 7 years of experience across the full software development lifecycle, including post-live support. The ideal candidate will have a strong background in .NET backend API development, Agile methodologies, and Cloud infrastructure (preferably Azure). You will play a key role in solution design, development, DevOps pipeline enhancement, and mentoring junior engineers.
Key Responsibilities:
- Design, develop, and maintain scalable and secure .NET backend APIs.
- Collaborate with product owners and stakeholders to understand requirements and translate them into technical solutions.
- Lead and contribute to Agile software delivery processes (Scrum, Kanban).
- Develop and improve CI/CD pipelines and support release cadence targets, using Infrastructure as Code tools (e.g., Terraform).
- Provide post-live support, troubleshooting, and issue resolution as part of full lifecycle responsibilities.
- Implement unit and integration testing to ensure code quality and system stability.
- Work closely with DevOps and cloud engineering teams to manage deployments on Azure (Web Apps, Container Apps, Functions, SQL).
- Contribute to front-end components when necessary, leveraging HTML, CSS, and JavaScript UI frameworks.
- Mentor and coach engineers within a co-located or distributed team environment.
- Maintain best practices in code versioning, testing, and documentation.
Mandatory Skills:
- 7+ years of .NET development experience, including API design and development
- Strong experience with Azure Cloud services, including:
- Web/Container Apps
- Azure Functions
- Azure SQL Server
- Solid understanding of Agile development methodologies (Scrum/Kanban)
- Experience in CI/CD pipeline design and implementation
- Proficient in Infrastructure as Code (IaC) – preferably Terraform
- Strong knowledge of RESTful services and JSON-based APIs
- Experience with unit and integration testing techniques
- Source control using Git
- Strong understanding of HTML, CSS, and cross-browser compatibility
Good-to-Have Skills:
- Experience with Kubernetes and Docker
- Knowledge of JavaScript UI frameworks, ideally Vue.js
- Familiarity with JIRA and Agile project tracking tools
- Exposure to Database as a Service (DBaaS) and Platform as a Service (PaaS) concepts
- Experience mentoring or coaching junior developers
- Strong problem-solving and communication skills
Job Title : Senior QA Automation Architect (Cloud & Kubernetes)
Experience : 6+ Years
Location : India (Multiple Offices)
Shift Timings : 12 PM to 9 PM (Noon Shift)
Working Days : 5 Days WFO (NO Hybrid)
About the Role :
We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.
You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.
Key Responsibilities :
- Architect and maintain test automation frameworks for microservices.
- Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
- Ensure reliability, scalability, and observability of test systems.
- Work closely with DevOps and Cloud teams to streamline automation infrastructure.
Mandatory Skills :
- Kubernetes, Helm, Docker, Linux
- Cloud Platforms : AWS / Azure / GCP
- CI/CD Tools : Jenkins, GitHub Actions
- Scripting : Python, Pytest, Bash
- Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
- IaC Practices : Terraform / Ansible
Good to Have :
- Experience with Service Mesh (Istio/Linkerd).
- Container Security or DevSecOps exposure.
Job Title: AWS DevOps Engineer
Experience Level: 5+ Years
Location: Bangalore, Pune, Hyderabad, Chennai and Gurgaon
Summary:
We are looking for a hands-on Platform Engineer with strong execution skills to provision and manage cloud infrastructure. The ideal candidate will have experience with Linux, AWS services, Kubernetes, and Terraform, and should be capable of troubleshooting complex issues in cloud and container environments.
Key Responsibilities:
- Provision AWS infrastructure using Terraform (IaC).
- Manage and troubleshoot Kubernetes clusters (EKS/ECS).
- Work with core AWS services: VPC, EC2, S3, RDS, Lambda, ALB, WAF, and CloudFront.
- Support CI/CD pipelines using Jenkins and GitHub.
- Collaborate with teams to resolve infrastructure and deployment issues.
- Maintain documentation of infrastructure and operational procedures.
Required Skills:
- 3+ years of hands-on experience in AWS infrastructure provisioning using Terraform.
- Strong Linux administration and troubleshooting skills.
- Experience managing Kubernetes clusters.
- Basic experience with CI/CD tools like Jenkins and GitHub.
- Good communication skills and a positive, team-oriented attitude.
Preferred:
- AWS Certification (e.g., Solutions Architect, DevOps Engineer).
- Exposure to Agile and DevOps practices.
- Experience with monitoring and logging tools.

Quantalent AI is hiring for a fast-growing fintech firm
Job Title: DevOps - 3
Roles and Responsibilities:
- Develop deep understanding of the end-to-end configurations, dependencies, customer requirements, and overall characteristics of the production services as the accountable owner for overall service operations
- Implement best practices, challenge the status quo, and keep tabs on industry and technical trends, changes, and developments to ensure the team is always striving for best-in-class work
- Lead incident response efforts, working closely with cross-functional teams to resolve issues quickly and minimize downtime. Implement effective incident management processes and post-incident reviews
- Participate in on-call rotation responsibilities, ensuring timely identification and resolution of infrastructure issues
- Design and implement capacity plans, accurately estimating costs and effort for infrastructure needs.
- Maintain and own systems and infrastructure for production environments, with a continued focus on improving efficiency, availability, and supportability through automation and well-defined runbooks
- Provide mentorship and guidance to a team of DevOps engineers, fostering a collaborative and high-performing work environment. Mentor team members in best practices, technologies, and methodologies.
- Design for reliability: architect and implement solutions that keep infrastructure running with always-on availability and ensure a high uptime SLA for the infrastructure
- Manage individual project priorities, deadlines, and deliverables related to your technical expertise and assigned domains
- Collaborate with Product & Information Security teams to ensure the integrity and security of Infrastructure and applications. Implement security best practices and compliance standards.
Must Haves
- 5-8 years of experience as a DevOps / SRE / Platform Engineer.
- Strong expertise in automating infrastructure provisioning and configuration using tools like Ansible, Packer, Terraform, Docker, Helm charts, etc.
- Strong skills in network services such as DNS, TLS/SSL, HTTP, etc.
- Expertise in managing large-scale cloud infrastructure (preferably AWS and Oracle)
- Expertise in managing production grade Kubernetes clusters
- Experience in scripting using programming languages like Bash, Python, etc.
- Expertise in centralized logging, metrics, and tooling frameworks such as ELK, Prometheus/VictoriaMetrics, and Grafana
- Experience managing and building high-scale API gateways, service meshes, etc.
- Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive
- Have a working knowledge of a backend programming language
- Deep knowledge of and experience with Unix/Linux operating system internals (e.g., filesystems, user management, etc.)
- A working knowledge and deep understanding of cloud security concepts
- Proven track record of driving results and delivering high-quality solutions in a fast-paced environment
- Demonstrated ability to communicate clearly with both technical and non-technical project stakeholders, with the ability to work effectively in a cross-functional team environment.

Key Responsibilities:
Kubernetes Management:
Deploy, configure, and maintain Kubernetes clusters on AKS, EKS, GKE, and OKE.
Troubleshoot and resolve issues related to cluster performance and availability.
Database Migration:
Plan and execute database migration strategies across multicloud environments, ensuring data integrity and minimal downtime.
Collaborate with database teams to optimize data flow and management.
Coding and Development:
Develop, test, and optimize code with a focus on enhancing algorithms and data structures for system performance.
Implement best coding practices and contribute to code reviews.
Cross-Platform Integration:
Facilitate seamless integration of services across different cloud providers to enhance interoperability.
Collaborate with development teams to ensure consistent application performance across environments.
Performance Optimization:
Monitor system performance metrics, identify bottlenecks, and implement effective solutions to optimize resource utilization.
Conduct regular performance assessments and provide recommendations for improvements.
Experience:
Minimum of 2 years of experience in cloud computing, with a strong focus on Kubernetes management across multiple platforms.
Technical Skills:
Proficient in cloud services and infrastructure, including networking and security considerations.
Strong programming skills in languages such as Python, Go, or Java, with a solid understanding of algorithms and data structures.
Problem-Solving:
Excellent analytical and troubleshooting skills with a proactive approach to identifying and resolving issues.
Communication:
Strong verbal and written communication skills, with the ability to collaborate effectively with cross-functional teams.
Preferred Skills:
- Familiarity with CI/CD tools and practices.
- Experience with container orchestration and management tools.
- Knowledge of microservices architecture and design patterns.
Job Summary:
We are seeking an experienced and highly motivated Senior Python Developer to join our dynamic and growing engineering team. This role is ideal for a seasoned Python expert who thrives in a fast-paced, collaborative environment and has deep experience building scalable applications, working with cloud platforms, and automating infrastructure.
Key Responsibilities:
Develop and maintain scalable backend services and APIs using Python, with a strong emphasis on clean architecture and maintainable code.
Design and implement RESTful APIs using frameworks such as Flask or FastAPI, and integrate with relational databases using ORM tools like SQLAlchemy.
Work with major cloud platforms (AWS, GCP, or Oracle Cloud Infrastructure) using Python SDKs to build and deploy cloud-native applications.
Automate system and infrastructure tasks using tools like Ansible, Chef, or other configuration management solutions.
Implement and support Infrastructure as Code (IaC) using Terraform or cloud-native templating tools to manage resources effectively.
Work across both Linux and Windows environments, ensuring compatibility and stability across platforms.
Required Qualifications:
5+ years of professional experience in Python development, with a strong portfolio of backend/API projects.
Strong expertise in Flask, SQLAlchemy, and other Python-based frameworks and libraries.
Proficient in asynchronous programming and event-driven architecture using tools such as asyncio, Celery, or similar.
Solid understanding and hands-on experience with cloud platforms – AWS, Google Cloud Platform, or Oracle Cloud Infrastructure.
Experience using Python SDKs for cloud services to automate provisioning, deployment, or data workflows.
Practical knowledge of Linux and Windows environments, including system-level scripting and debugging.
Automation experience using tools such as Ansible, Chef, or equivalent configuration management systems.
Experience implementing and maintaining CI/CD pipelines with industry-standard tools.
Familiarity with Docker and container orchestration concepts (e.g., Kubernetes is a plus).
Hands-on experience with Terraform or equivalent infrastructure-as-code tools for managing cloud environments.
Excellent problem-solving skills, attention to detail, and a proactive mindset.
Strong communication skills and the ability to collaborate with diverse technical teams.
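The asynchronous, event-driven patterns listed above can be exercised with nothing beyond the standard library's `asyncio`. A minimal sketch of fanning out concurrent tasks and gathering their results in order (the simulated delay stands in for a real network or database call):

```python
import asyncio

async def fetch(item: int) -> int:
    # stand-in for a real async I/O call (HTTP request, DB query, ...)
    await asyncio.sleep(0.01)
    return item * item

async def main() -> list[int]:
    # fan out all tasks concurrently; gather preserves input order
    return await asyncio.gather(*(fetch(i) for i in range(5)))

results = asyncio.run(main())
print(results)  # -> [0, 1, 4, 9, 16]
```

Because all five coroutines await concurrently, total wall time stays near a single call's latency rather than five times it, which is the core payoff of the event-driven style the role asks for.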
Preferred Qualifications (Nice to Have):
Experience with other Python frameworks (FastAPI, Django)
Knowledge of container orchestration tools like Kubernetes
Familiarity with monitoring tools like Prometheus, Grafana, or Datadog
Prior experience working in an Agile/Scrum environment
Contributions to open-source projects or technical blogs
Job Description
We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancing, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.
Work location: Pune/Mumbai/Bangalore
Experience: 4-7 Years
Joining: Mid of October
You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
· Architect, deploy, and maintain GCP cloud resources via Terraform or other automation.
· Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
· Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
· Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
2. Kubernetes & Container Orchestration
· Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
· Work with Helm charts, Istio, and service meshes for microservices deployments.
· Automate scaling, rolling updates, and zero-downtime deployments.
3. Serverless & Compute Services
· Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
· Optimize containerized applications running on Cloud Run for cost efficiency and performance.
4. CI/CD & DevOps Automation
· Design, implement, and manage CI/CD pipelines using Azure DevOps.
· Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.
· Integrate security and compliance checks into the DevOps workflow (DevSecOps).
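Since the role centres on Terraform-managed GCP resources, one small, runnable illustration: Terraform accepts JSON-syntax files (`*.tf.json`) as equivalent to HCL, so provisioning can be templated from Python. This is a sketch under that assumption; the bucket name is hypothetical, and the attributes follow the google provider's `google_storage_bucket` resource.

```python
import json

def gcs_bucket_tf(name, location="ASIA-SOUTH1"):
    """Render a Terraform JSON-syntax (*.tf.json) definition for a GCS bucket.

    Terraform treats *.tf.json files as equivalent to HCL; the resource type
    and attribute names here mirror google_storage_bucket.
    """
    return {
        "resource": {
            "google_storage_bucket": {
                name: {
                    "name": name,
                    "location": location,
                    "uniform_bucket_level_access": True,
                }
            }
        }
    }

# Writing this out as e.g. bucket.tf.json makes it consumable by `terraform plan`.
print(json.dumps(gcs_bucket_tf("example-data-bucket"), indent=2))
```

In practice most teams write HCL directly; generating JSON is useful mainly when resource definitions are data-driven.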
Required Skills & Qualifications:
✔ Experience: 4+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.
About Wissen Technology
Wissen Technology, established in 2015 and part of the Wissen Group (founded in 2000), is a specialized technology consulting company. We pride ourselves on delivering high-quality solutions for global organizations across Banking & Finance, Telecom, and Healthcare domains.
Here’s why Wissen Technology stands out:
Global Presence: Offices in US, India, UK, Australia, Mexico, and Canada.
Expert Team: Wissen Group comprises over 4000 highly skilled professionals worldwide, with Wissen Technology contributing 1400 of these experts. Our team includes graduates from prestigious institutions such as Wharton, MIT, IITs, IIMs, and NITs.
Recognitions: Great Place to Work® Certified.
Featured as a Top 20 AI/ML Vendor by CIO Insider (2020).
Impressive Growth: Achieved 400% revenue growth in 5 years without external funding.
Successful Projects: Delivered $650 million worth of projects to 20+ Fortune 500 companies.
For more details:
Website: www.wissen.com
Wissen Thought leadership : https://www.wissen.com/articles/
LinkedIn: Wissen Technology
Company information:
At LogixHealth, we're making intelligence matter throughout healthcare. LogixHealth has over two decades of experience providing full-service coding, billing, and revenue cycle solutions for emergency departments, hospitals, and physician practices for millions of visits annually. LogixHealth provides ongoing coding, claims management, and the latest business intelligence analytics for clients in over 40 states.
Role overview
Knowledge and Skill Sets:
· Office 365 Administration: Expertise in managing Office 365 services, including Exchange Online, SharePoint Online, Teams, and OneDrive.
· Intune Administration: Proficiency in Microsoft Intune, including device management, policy enforcement, and application deployment.
· Experience with virtualization platforms (Citrix, Nutanix, VMware, Hyper-V, etc.).
· Exposure to Infrastructure as Code (IaC) using Terraform or Ansible.
· Data Center Operations (DCO) and NOC Support: Experience in supporting DCO and NOC operations, with the ability to troubleshoot and resolve issues in a 24/7 environment.
· PowerShell and Scripting: Ability to create and use PowerShell scripts and other tools to automate tasks within Office 365 and Intune environments.
· Automation and IaC: Knowledge of Ansible, Terraform, and DevOps frameworks.
· Security and Compliance: In-depth knowledge of security features in Office 365 and Intune, including MFA, DLP, and compliance tools.
· Backup and Recovery: Understanding of backup solutions for Office 365 data and disaster recovery planning.
· Monitoring Tools: Familiarity with monitoring tools for tracking the health and performance of Office 365 and Intune services.
· Communication: Strong communication skills for providing technical support and collaborating with IT teams.
· Documentation: Ability to create detailed documentation and training resources.
· 24/7 Availability: Commitment to providing round-the-clock support for critical Office 365 and Intune services.
What would you do here
Job Purpose:
The Collaboration tools Administrator is responsible for managing and maintaining the organization's Office 365, MS teams, Intune, SharePoint, Defender, MDM and IAM environments while supporting Data Center Operations (DCO) activities. The position requires 24/7 availability to support critical operations and respond promptly to incidents.
Role Description:
· Office 365 Administration: Manage and maintain Office 365 services, including user accounts, licenses, permissions, and configurations for Exchange Online, SharePoint Online, Teams, and OneDrive.
· Intune Administration: Oversee the administration of Microsoft Intune, including device enrollment, configuration policies, application deployment, and mobile device management (MDM) to ensure secure access to corporate resources.
· Security and Compliance: Implement and manage security measures within Office 365 and Intune, including multi-factor authentication (MFA), Data Loss Prevention (DLP), and compliance policies to protect data and meet regulatory requirements.
· Incident Response: Provide 24/7 on-call support for Office 365 and Intune-related incidents, ensuring quick resolution to minimize downtime and disruption.
· User Support: Offer technical support and troubleshooting for end-users, addressing issues related to Office 365, Intune, and other integrated services.
· Monitoring and Reporting: Continuously monitor the performance, security, and compliance of Office 365 and Intune services, generating regular reports on system health and usage.
· Automation and Scripting: Utilize PowerShell and other scripting tools to automate administrative tasks and improve operational efficiency within Office 365 and Intune environments.
· Backup and Recovery: Manage backup solutions for Office 365 data and participate in disaster recovery planning and execution to ensure business continuity.
· Documentation and Training: Develop and maintain detailed documentation of configurations, procedures, and best practices, and provide training to IT staff and end-users.
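The monitoring-and-reporting duty above often reduces to summarising exported audit data. A minimal sketch, assuming a simplified record layout (the `UserId`/`Operation` fields loosely mirror Microsoft 365 unified audit log exports, but the sample data is invented):

```python
import json
from collections import Counter

def summarize_signins(audit_json):
    """Count sign-in events per user from an exported audit-log JSON array.

    The record layout is a simplification for illustration; real exports
    carry many more fields.
    """
    records = json.loads(audit_json)
    return Counter(r["UserId"] for r in records if r.get("Operation") == "UserLoggedIn")

sample = json.dumps([
    {"UserId": "alice@example.com", "Operation": "UserLoggedIn"},
    {"UserId": "bob@example.com", "Operation": "FileAccessed"},
    {"UserId": "alice@example.com", "Operation": "UserLoggedIn"},
])
print(summarize_signins(sample))  # Counter({'alice@example.com': 2})
```

In an O365 environment the same aggregation would typically be done in PowerShell against `Search-UnifiedAuditLog` output; the pattern is identical.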
Key Deliverables:
· Reliable and secure operation of Office 365 and Intune services as part of the overall IT infrastructure.
· Effective integration of Office 365 and Intune with DCO and NOC operations.
· Timely incident response and resolution to ensure 24/7 availability of critical services.
· Efficient management of user accounts, licenses, and security settings within Office 365 and Intune.
· Regular monitoring, reporting, and auditing of Office 365 and Intune performance and security.
· Comprehensive documentation and training materials for internal use.
Thanks!
Job Type : Contract
Location : Bangalore
Experience : 5+yrs
The role focuses on cloud security engineering with a strong emphasis on GCP, while also covering AWS and Azure.
Required Skills:
- 5+ years of experience in software and/or cloud platform engineering, particularly focused on GCP environment.
- Knowledge of the Shared Responsibility Model; keen understanding of the security risks inherent in hosting cloud-based applications and data.
- Experience developing across the security assurance lifecycle (including prevent, detect, respond, and remediate controls).
- Experience in configuring Public Cloud native security tooling and capabilities, with a focus on Google Cloud Organizational policies/constraints, VPC Service Controls, IAM policies, and GCP APIs.
- Experience with Cloud Security Posture Management (CSPM) 3rd Party tools such as Wiz, Prisma, Check Point CloudGuard, etc.
- Experience in Policy-as-code (Rego) and OPA platform.
- Experience solutioning and configuring event-driven serverless-based security controls in Azure, including but not limited to technologies such as Azure Function, Automation Runbook, AWS Lambda and Google Cloud Functions.
- Deep understanding of DevOps processes and workflows.
- Working knowledge of the Secure SDLC process
- Experience with Infrastructure as Code (IaC) tooling, preferably Terraform.
- Familiarity with Logging and data pipeline concepts and architectures in cloud.
- Strong in scripting languages such as PowerShell or Python or Bash or Go.
- Knowledge of Agile best practices and methodologies
- Experience creating technical architecture documentation.
- Excellent communication, written, and interpersonal skills.
- Practical experience in designing and configuring CICD pipelines. Practical experience in GitHub Actions and Jenkins.
- Experience in ITSM.
- Ability to articulate complex technical concepts to non-technical stakeholders.
- Experience with risk control frameworks and engagements with risk and regulatory functions
- Experience in the financial industry would be a plus.
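To ground the CSPM and policy-as-code items above, here is a tiny sketch of the kind of check such tooling performs, written imperatively in Python (an OPA/Rego policy would express the same rule declaratively). The binding layout mirrors a GCP IAM policy; the sample policy is invented.

```python
def find_public_bindings(policy):
    """Flag IAM bindings that grant access to public principals.

    `allUsers` and `allAuthenticatedUsers` are GCP's public member types;
    exposing privileged roles to them is a classic misconfiguration that
    CSPM tools detect.
    """
    public = {"allUsers", "allAuthenticatedUsers"}
    findings = []
    for binding in policy.get("bindings", []):
        exposed = public.intersection(binding.get("members", []))
        if exposed:
            findings.append((binding["role"], sorted(exposed)))
    return findings

policy = {
    "bindings": [
        {"role": "roles/storage.objectViewer", "members": ["allUsers"]},
        {"role": "roles/owner", "members": ["user:admin@example.com"]},
    ]
}
print(find_public_bindings(policy))  # [('roles/storage.objectViewer', ['allUsers'])]
```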
Job Title : Lead System Administrator / Team Leader – Server Administration (NOC)
Experience : 12 to 16 Years
Location : Bengaluru (Whitefield / Domlur) or Coimbatore
Work Mode : Initially Work From Office (5 days/week during probation), Hybrid thereafter (3 days WFO)
Salary : Up to ₹28 LPA (including 8% variable)
Notice Period : Immediate / Serving / up to 30 days
Shift Time : Flexible (11:00 AM – 8:00 PM)
Role Overview :
We are seeking an experienced Lead System Administrator / Team Leader to manage our server administration team and ensure the stability, performance, and security of our infrastructure. This is a hands-on leadership role that demands technical depth, strategic thinking, and excellent team management capabilities.
Mandatory Skills :
- Windows Server Administration
- Citrix, VMware, and Hypervisor Platforms
- 1–2 Years of Team Lead / Leadership Experience
- Scripting (PowerShell, Bash, etc.)
- Infrastructure as Code – Terraform / Ansible
- Monitoring, Backup, and Compliance Tools Exposure
- Experience in 24/7 Production Environments
- Strong Communication & Documentation Skills
Key Responsibilities :
- Lead and mentor a team of system/server administrators.
- Manage installation, configuration, and support of Windows-based physical & virtual servers.
- Ensure optimal uptime, performance, and availability of server infrastructure.
- Oversee Active Directory, DNS, DHCP, file servers, and backup systems.
- Implement disaster recovery strategies & capacity planning.
- Collaborate with security, application, and network teams.
- Create and maintain SOPs, asset inventories, and architectural documentation.
- Drive compliance with IT policies and audit standards.
- Provide on-call support and lead incident management for server-related issues.
Qualifications :
- Bachelor’s degree in Computer Science, IT, or related field.
- 10+ Years in server/system administration, including 1 to 2 years in a leadership capacity.
- Strong knowledge of Windows Server environments.
- Hands-on experience with Citrix, VMware, Nutanix, Hyper-V.
- Familiarity with Azure cloud platforms.
- Proficient in automation and scripting tools (PowerShell, Bash).
- Knowledge of Infrastructure as Code using Terraform and Ansible.
- Certifications like MCSA/MCSE, RHCE are a plus.
- Excellent communication, documentation, and team management skills.
Interview Process :
- L1 – Technical Interview (with Partner Team)
- L2 – Technical Interview (Client)
- L3 – Techno-Managerial Round
- L4 – HR Discussion
Job Title : Senior System Administrator
Experience : 7 to 12 Years
Location : Bangalore (Whitefield/Domlur) or Coimbatore
Work Mode :
- First 3 Months : Work From Office (5 Days)
- Post-Probation : Hybrid (3 Days WFO)
- Shift : Rotational (Day & Night)
- Notice Period : Immediate to 30 Days
- Salary : Up to ₹24 LPA (including 8% variable), slightly negotiable
Role Overview :
Seeking a Senior System Administrator with strong experience in server administration, virtualization, automation, and hybrid infrastructure. The role involves managing Windows environments, scripting, cloud/on-prem operations, and ensuring 24x7 system availability.
Mandatory Skills :
Windows Server, Virtualization (Citrix/VMware/Nutanix/Hyper-V), Office 365, Intune, PowerShell, Terraform/Ansible, CI/CD, Hybrid Cloud (Azure), Monitoring, Backup, NOC, DCO.
Key Responsibilities :
- Manage physical/virtual Windows servers and core services (AD, DNS, DHCP).
- Automate infrastructure using Terraform/Ansible.
- Administer Office 365, Intune, and ensure compliance.
- Support hybrid on-prem + Azure environments.
- Handle monitoring, backups, disaster recovery, and incident response.
- Collaborate on DevOps pipelines and write automation scripts (PowerShell).
Nice to Have :
MCSA/MCSE/RHCE, Azure admin experience, team leadership background
Interview Rounds :
L1 – Technical (Platform)
L2 – Technical
L3 – Techno-Managerial
L4 – HR

This is a U.S.-based healthcare technology and revenue cycle solutions company.
Job Title: Senior System Administrator
Location: Bangalore
Experience Required: 7+ Years
Work Mode:
- 5 Days Working
- Rotational Shifts
- Hybrid Work after probation
Job Description:
We are seeking a Senior System Administrator with 7+ years of hands-on experience in managing Windows Server environments, virtualization technologies, automation tools, and hybrid infrastructure (on-prem & Azure). The ideal candidate should possess strong problem-solving skills, be proficient in scripting, and have experience in Office 365 and Microsoft Intune administration.
Key Responsibilities:
- Manage and maintain Windows Server environments
- Handle virtualization platforms such as Citrix, Nutanix, VMware, Hyper-V
- Implement and maintain automation using tools like Ansible, Terraform, PowerShell
- Work with Infrastructure as Code (IaC) platforms and DevOps frameworks
- Support and manage Office 365 and Microsoft Intune
- Monitor and support Data Center Operations (DCO) and NOC
- Ensure security and compliance across systems
- Provide scripting and troubleshooting support for infrastructure automation
- Collaborate with teams for CI/CD pipeline integration
- Handle monitoring, backup, and disaster recovery processes
- Work effectively in a hybrid environment (on-prem and Azure)
Skills Required:
- Office 365 Administration
- Microsoft Intune Administration
- Security & Compliance
- Automation & Infrastructure as Code (IaC)
- Tools: PowerShell, Terraform, Ansible
- CI/CD and DevOps framework exposure
- Monitoring & Backup
- Data Center Operations (DCO) & NOC Support
- Hybrid environment experience (on-prem and Azure)
- Scripting & Troubleshooting
- PowerShell scripting for automation
- 8+ years of experience in DevOps CI/CD
- Managing large-scale AWS deployments using Infrastructure as Code (IaC) and Kubernetes developer tools
- Managing build/test/deployment of very large-scale systems, bridging between developers and live stacks
- Actively troubleshoot issues that arise during development and production
- Owning, learning, and deploying software in support of customer-facing applications
- Help establish DevOps best practices
- Actively work to reduce system costs
- Work with open-source technologies, helping to ensure their robustness and security
- Actively work with CI/CD, Git, and other components of the build and deployment system
- Leading skills with the AWS cloud stack
- Proven implementation experience with Infrastructure as Code (Terraform, Terragrunt, Flux, Helm charts) at scale
- Proven experience with Kubernetes at scale
- Proven experience with cloud management tools beyond the AWS console (k9s, Lens)
- Strong communicator who people want to work with – must be thought of as the ultimate collaborator
- Solid team player
- Strong experience with Linux-based infrastructures and AWS
- Strong experience with databases such as MySQL, Redshift, Elasticsearch, MongoDB, and others
- Strong knowledge of JavaScript and Git
- Agile practitioner
Looking for Fresher developers
Responsibilities:
- Implement integrations requested by customers
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
Requirements and skill:
Experience in a DevOps Engineer or similar software engineering role
Good knowledge of Terraform and Kubernetes
Working knowledge of AWS and Google Cloud
You can directly contact me on nine three one six one two zero one three two
Job Title : Senior Consultant (Java / NodeJS + Temporal)
Experience : 5 to 12 Years
Location : Bengaluru, Chennai, Hyderabad, Pune, Mumbai, Gurugram, Coimbatore
Work Mode : Remote (Must be open to travel for occasional team meetups)
Notice Period : Immediate Joiners or Serving Notice
Interview Process :
- R1 : Tech Interview (60 mins)
- R2 : Technical Interview
- R3 : (Optional) Interview with Client
Job Summary :
We are seeking a Senior Backend Consultant with strong hands-on expertise in Temporal (BPM/Workflow Engine) and either Node.js or Java.
The ideal candidate will have experience in designing and developing microservices and process-driven applications, as well as orchestrating complex workflows using Temporal.io.
You will work on high-scale systems, collaborating closely with cross-functional teams.
Mandatory Skills :
Temporal.io, Node.js (or Java), React.js, Keycloak IAM, PostgreSQL, Terraform, Kubernetes, Azure, Jest, OpenAPI
Key Responsibilities :
- Design and implement scalable backend services using Node.js or Java.
- Build and manage complex workflow orchestrations using Temporal.io.
- Integrate with IAM solutions like Keycloak for role-based access control.
- Work with React (v17+), TypeScript, and component-driven frontend design.
- Use PostgreSQL for structured data persistence and optimized queries.
- Manage infrastructure using Terraform and orchestrate via Kubernetes.
- Leverage Azure Services like Blob Storage, API Gateway, and AKS.
- Write and maintain API documentation using Swagger/Postman/Insomnia.
- Conduct unit and integration testing using Jest.
- Participate in code reviews and contribute to architectural decisions.
Must-Have Skills :
- Temporal.io – BPMN modeling, external task workers, Operate, Tasklist
- Node.js + TypeScript (preferred) or strong Java experience
- React.js (v17+) and component-driven UI development
- Keycloak IAM, PostgreSQL, and modern API design
- Infrastructure automation with Terraform, Kubernetes
- Experience in using GitFlow, OpenAPI, Jest for testing
Nice-to-Have Skills :
- Blockchain integration experience for secure KYC/identity flows
- Custom Camunda Connectors or exporter plugin development
- CI/CD experience using Azure DevOps or GitHub Actions
- Identity-based task completion authorization enforcement
About the Role:
We are looking for a skilled AWS DevOps Engineer to join our Cloud Operations team in Bangalore. This hybrid role is ideal for someone with hands-on experience in AWS and a strong background in application migration from on-premises to cloud environments. You'll play a key role in driving cloud adoption, optimizing infrastructure, and ensuring seamless cloud operations.
Key Responsibilities:
- Manage and maintain AWS cloud infrastructure and services.
- Lead and support application migration projects from on-prem to cloud.
- Automate infrastructure provisioning using Infrastructure as Code (IaC) tools.
- Monitor cloud environments and optimize cost, performance, and reliability.
- Collaborate with development, operations, and security teams to implement DevOps best practices.
- Troubleshoot and resolve infrastructure and deployment issues.
Required Skills:
- 3–5 years of experience in AWS cloud environment.
- Proven experience with on-premises to cloud application migration.
- Strong understanding of AWS core services (EC2, VPC, S3, IAM, RDS, etc.).
- Solid scripting skills (Python, Bash, or similar).
Good to Have:
- Experience with Terraform for Infrastructure as Code.
- Familiarity with Kubernetes for container orchestration.
- Exposure to CI/CD tools like Jenkins, GitLab, or AWS CodePipeline.
Required Skills:
- Experience in systems administration, SRE or DevOps focused role
- Experience in handling production support (on-call)
- Good understanding of the Linux operating system and networking concepts.
- Demonstrated competency with the following AWS services: ECS, EC2, EBS, EKS, S3, RDS, ELB, IAM, Lambda.
- Experience with Docker containers and containerization concepts
- Experience with managing and scaling Kubernetes clusters in a production environment
- Experience building scalable infrastructure in AWS with Terraform.
- Strong knowledge of Protocol-level such as HTTP/HTTPS, SMTP, DNS, and LDAP
- Experience monitoring production systems
- Expertise in leveraging Automation / DevOps principles, experience with operational tools, and able to apply best practices for infrastructure and software deployment (Ansible).
- HAProxy, Nginx, SSH, MySQL configuration and operation experience
- Ability to work seamlessly with software developers, QA, project managers, and business development
- Ability to produce and maintain written documentation
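The monitoring and HAProxy/Nginx items above often come together in practice as log-based error-rate checks. A minimal sketch, assuming a common/combined access-log format (`"METHOD path proto" status`); real HAProxy/Nginx log formats vary, so the regex and sample lines are illustrative only:

```python
import re
from collections import Counter

LOG_RE = re.compile(r'"\w+ [^"]+" (\d{3}) ')

def error_rate(lines):
    """Compute the fraction of 5xx responses across access-log lines."""
    statuses = Counter()
    for line in lines:
        m = LOG_RE.search(line)
        if m:
            statuses[m.group(1)[0]] += 1  # bucket by first digit: 2xx, 4xx, 5xx...
    total = sum(statuses.values())
    return statuses["5"] / total if total else 0.0

logs = [
    '10.0.0.1 - - [01/Jan/2024] "GET /health HTTP/1.1" 200 12',
    '10.0.0.2 - - [01/Jan/2024] "GET /api HTTP/1.1" 502 0',
    '10.0.0.3 - - [01/Jan/2024] "POST /api HTTP/1.1" 200 45',
    '10.0.0.4 - - [01/Jan/2024] "GET /api HTTP/1.1" 500 0',
]
print(error_rate(logs))  # 0.5
```

A production setup would feed the same signal through a metrics pipeline and alert on a threshold rather than parsing logs ad hoc.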
General Description: Owns all technical aspects of software development for assigned applications
Participates in the design and development of systems & application programs
Functions as Senior member of an agile team and helps drive consistent development practices – tools, common components, and documentation
Required Skills:
In depth experience configuring and administering EKS clusters in AWS.
In depth experience in configuring **Splunk SaaS** in AWS environments especially in **EKS**
In depth understanding of OpenTelemetry and configuration of **OpenTelemetry Collectors**
In depth knowledge of observability concepts and strong troubleshooting experience.
Experience in implementing comprehensive monitoring and logging solutions in AWS using **CloudWatch**.
Experience in **Terraform** and Infrastructure as code.
Experience in **Helm**
Strong scripting skills in Shell and/or Python.
Experience with large-scale distributed systems and architecture knowledge (Linux/UNIX and Windows operating systems, networking, storage) in a cloud computing or traditional IT infrastructure environment.
Must have a good understanding of cloud concepts (Storage /compute/network).
Experience collaborating with several cross-functional teams to architect observability pipelines for various GCP services such as GKE, Cloud Run, and BigQuery.
Experience with Git and GitHub.
Experience with code build and deployment using GitHub Actions and Artifact Registry.
Proficient in developing and maintaining technical documentation, ADRs, and runbooks.
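A concrete taste of the observability concepts listed above: correlating logs with traces. Real OpenTelemetry propagates W3C trace context automatically; this sketch only shows the correlation idea with the standard `logging` module, and all names are illustrative.

```python
import logging
import uuid

class TraceAdapter(logging.LoggerAdapter):
    """Prefix every log record with a trace_id, OpenTelemetry-style.

    An OTel logging integration would inject the active span's context
    instead of a manually generated id.
    """
    def process(self, msg, kwargs):
        return f"[trace_id={self.extra['trace_id']}] {msg}", kwargs

def handle_request(logger):
    # A real tracer would extract/propagate this from incoming headers.
    trace_id = uuid.uuid4().hex
    log = TraceAdapter(logger, {"trace_id": trace_id})
    log.info("request received")
    log.info("request completed")
    return trace_id

logging.basicConfig(level=logging.INFO, format="%(message)s")
handle_request(logging.getLogger("svc"))
```

With every record carrying the same id, a log backend (Splunk, Datadog, CloudWatch) can stitch a request's lines back together.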
General Description:
Owns all technical aspects of software development for assigned applications.
Participates in the design and development of systems & application programs.
Functions as Senior member of an agile team and helps drive consistent development practices – tools, common components, and documentation.
Required Skills:
In depth experience configuring and administering EKS clusters in AWS.
In depth experience in configuring **DataDog** in AWS environments especially in **EKS**
In depth understanding of OpenTelemetry and configuration of **OpenTelemetry Collectors**
In depth knowledge of observability concepts and strong troubleshooting experience.
Experience in implementing comprehensive monitoring and logging solutions in AWS using **CloudWatch**.
Experience in **Terraform** and Infrastructure as code.
Experience in **Helm**
Strong scripting skills in Shell and/or python.
Experience with large-scale distributed systems and architecture knowledge (Linux/UNIX and Windows operating systems, networking, storage) in a cloud computing or traditional IT infrastructure environment.
Must have a good understanding of cloud concepts (Storage /compute/network).
Experience collaborating with several cross-functional teams to architect observability pipelines for various GCP services such as GKE, Cloud Run, and BigQuery.
Experience with Git and GitHub.
Proficient in developing and maintaining technical documentation, ADRs, and runbooks.
Level of skills and experience:
5 years of hands-on experience using Python, Spark, and SQL.
Experienced in AWS Cloud usage and management.
Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
Experience using various ML models and frameworks such as XGBoost, LightGBM, and PyTorch.
Experience with orchestrators such as Airflow and Kubeflow.
Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
Fundamental understanding of Parquet, Delta Lake and other data file formats.
Proficiency on an IaC tool such as Terraform, CDK or CloudFormation.
Strong written and verbal English communication skills; proficient in communicating with non-technical stakeholders.
Mandatory Skills:
- AZ-104 (Azure Administrator) experience
- CI/CD migration expertise
- Proficiency in Windows deployment and support
- Infrastructure as Code (IaC) in Terraform
- Automation using PowerShell
- Understanding of SDLC for C# applications (build/ship/run strategy)
- Apache Kafka experience
- Azure web app
Good to Have Skills:
- AZ-400 (Azure DevOps Engineer Expert)
- AZ-700 Designing and Implementing Microsoft Azure Networking Solutions
- Apache Pulsar
- Windows containers
- Active Directory and DNS
- SAST and DAST tool understanding
- MSSQL database
- Postgres database
- Azure security
Position Name : Product Engineer (Backend Heavy)
Experience : 3 to 5 Years
Location : Bengaluru (Work From Office, 5 Days a Week)
Positions : 2
Notice Period : Immediate joiners or candidates serving notice (within 30 days)
Role Overview :
We’re looking for Product Engineers who are passionate about building scalable backend systems in the FinTech & payments domain. If you enjoy working on complex challenges, contributing to open-source projects, and driving impactful innovations, this role is for you!
What You’ll Do :
- Develop scalable APIs and backend services.
- Design and implement core payment systems.
- Take end-to-end ownership of product development from zero to one.
- Work on database design, query optimization, and system performance.
- Experiment with new technologies and automation tools.
- Collaborate with product managers, designers, and engineers to drive innovation.
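Designing "core payment systems" almost always starts with idempotency, since clients retry on network failures. A minimal sketch of the pattern (a production system would persist keys in a database with a unique constraint; the dict and all names here are illustrative):

```python
class PaymentService:
    """Idempotent payment creation: the same key never charges twice."""

    def __init__(self):
        self._by_key = {}   # idempotency_key -> stored payment result
        self._next_id = 1

    def create_payment(self, idempotency_key, amount_paise):
        # Replay the stored result instead of creating a duplicate charge.
        if idempotency_key in self._by_key:
            return self._by_key[idempotency_key]
        payment = {"id": self._next_id, "amount_paise": amount_paise, "status": "created"}
        self._next_id += 1
        self._by_key[idempotency_key] = payment
        return payment

svc = PaymentService()
first = svc.create_payment("order-42", 49900)
retry = svc.create_payment("order-42", 49900)  # network retry, same key
print(first["id"] == retry["id"])  # True
```

Most public payment APIs expose this as an `Idempotency-Key` request header backed by exactly this kind of lookup.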
What We’re Looking For :
- 3+ Years of professional backend development experience.
- Proficiency in any backend programming language (Ruby on Rails experience is a plus but not mandatory).
- Experience in building APIs and working with relational databases.
- Strong communication skills and ability to work in a team.
- Open-source contributions (minimum 50 stars on GitHub preferred).
- Experience in building and delivering 0 to 1 products.
- Passion for FinTech & payment systems.
- Familiarity with CI/CD, DevOps practices, and infrastructure management.
- Knowledge of payment protocols and financial regulations (preferred but not mandatory)
Main Technical Skills :
- Backend : Ruby on Rails, PostgreSQL
- Infrastructure : GCP, AWS, Terraform (fully automated infrastructure)
- Security : Zero-trust security protocol managed via Teleport
Company Overview
Adia Health revolutionizes clinical decision support by enhancing diagnostic accuracy and personalizing care. It modernizes the diagnostic process by automating optimal lab test selection and interpretation, utilizing a combination of expert medical insights, real-world data, and artificial intelligence. This approach not only streamlines the diagnostic journey but also ensures precise, individualized patient care by integrating comprehensive medical histories and collective platform knowledge.
Position Overview
We are seeking a talented and experienced Site Reliability Engineer/DevOps Engineer to join our dynamic team. The ideal candidate will be responsible for ensuring the reliability, scalability, and performance of our infrastructure and applications. You will collaborate closely with development, operations, and product teams to automate processes, implement best practices, and improve system reliability.
Key Responsibilities
- Design, implement, and maintain highly available and scalable infrastructure solutions using modern DevOps practices.
- Automate deployment, monitoring, and maintenance processes to streamline operations and increase efficiency.
- Monitor system performance and troubleshoot issues, ensuring timely resolution to minimize downtime and impact on users.
- Implement and manage CI/CD pipelines to automate software delivery and ensure code quality.
- Manage and configure cloud-based infrastructure services to optimize performance and cost.
- Collaborate with development teams to design and implement scalable, reliable, and secure applications.
- Implement and maintain monitoring, logging, and alerting solutions to proactively identify and address potential issues.
- Conduct periodic security assessments and implement appropriate measures to ensure the integrity and security of systems and data.
- Continuously evaluate and implement new tools and technologies to improve efficiency, reliability, and scalability.
- Participate in on-call rotation and respond to incidents promptly to ensure system uptime and availability.
Qualifications
- Bachelor's degree in Computer Science, Engineering, or related field
- Proven experience (5+ years) as a Site Reliability Engineer, DevOps Engineer, or similar role
- Strong understanding of cloud computing principles and experience with AWS
- Experience building and supporting complex CI/CD pipelines using GitHub
- Experience building and supporting infrastructure as code using Terraform
- Proficiency in scripting and automation tools
- Solid understanding of networking concepts and protocols
- Understanding of security best practices and experience implementing security controls in cloud environments
- Knowledge of modern security and compliance requirements such as SOC 2, HIPAA, and HITRUST is a solid advantage.
Dear Candidate,
We are urgently Hiring AWS Cloud Engineer for Bangalore Location.
Position: AWS Cloud Engineer
Location: Bangalore
Experience: 8-11 yrs
Skills: Aws Cloud
Salary: Best in Industry (20-25% Hike on the current ctc)
Note:
Only immediate joiners (up to 15 days) will be preferred.
Only candidates from Tier 1 companies will be shortlisted.
Candidates with a notice period of more than 30 days will be rejected during screening.
Offer shoppers will be rejected.
Job description:
Description:
Title: AWS Cloud Engineer
Prefer BLR / HYD – else any location is fine
Work Mode: Hybrid – based on HR rule (currently 1 day per month)
Shift Timings 24 x 7 (Work in shifts on rotational basis)
Total Experience: 8+ years, with at least 5 years of relevant experience required.
Must have- AWS platform, Terraform, Redshift / Snowflake, Python / Shell Scripting
Experience and Skills Requirements:
Experience:
8 years of experience in a technical role working with AWS
Mandatory
Technical troubleshooting and problem solving
AWS management of large-scale IaaS/PaaS solutions
Cloud networking and security fundamentals
Experience using containerization in AWS
Working Data warehouse knowledge Redshift and Snowflake preferred
Working with IaC – Terraform and CloudFormation
Working understanding of scripting languages including Python and Shell
Collaboration and communication skills
Highly adaptable to changes in a technical environment
Optional
Experience using monitoring and observability toolsets, including Splunk and Datadog
Experience using GitHub Actions
Experience using AWS RDS/SQL-based solutions
Experience working with streaming technologies, including Kafka and Apache Flink
Experience working with ETL environments
Experience working with the Confluent Cloud platform
Certifications:
Minimum
AWS Certified SysOps Administrator – Associate
AWS Certified DevOps Engineer - Professional
Preferred
AWS Certified Solutions Architect – Associate
Responsibilities:
Responsible for the technical delivery of managed services across the NTT Data customer account base, working as part of a team providing a Shared Managed Service.
The following is a list of expected responsibilities:
To manage and support a customer’s AWS platform
To be technical hands on
Provide Incident and Problem management on the AWS IaaS and PaaS Platform
Involvement in the resolution of high-priority incidents and problems in an efficient and timely manner
Actively monitor the AWS platform for technical issues
To be involved in the resolution of technical incident tickets
Assist in the root cause analysis of incidents
Assist with improving efficiency and processes within the team
Examining traces and logs
Working with third party suppliers and AWS to jointly resolve incidents
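"Examining traces and logs" during incident triage frequently starts with a quick script that surfaces which service is erroring most. A minimal sketch, assuming a hypothetical single-line log format (timestamp, level, service, message) rather than any particular AWS log schema:

```python
import re
from collections import Counter

# Hypothetical format: "2024-05-01T10:00:00Z LEVEL service message..."
LOG_LINE = re.compile(r"^\S+\s+(?P<level>[A-Z]+)\s+(?P<service>\S+)")

def error_counts(lines):
    """Count ERROR entries per service across an iterable of log lines."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("service")] += 1
    return counts
```

In practice the same tallying would be done in CloudWatch Logs Insights, Splunk, or Datadog queries; the point is the habit of quantifying an incident before digging into individual stack traces.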
Good to have:
Confluent Cloud
Snowflake
Best Regards,
Minakshi Soni
Executive - Talent Acquisition (L2)
Rigel Networks
Worldwide Locations: USA | HK | IN
🚀 We're Hiring: Python AWS Fullstack Developer at InfoGrowth! 🚀
Join InfoGrowth as a Python AWS Fullstack Developer and be a part of our dynamic team driving innovative cloud-based solutions!
Job Role: Python AWS Fullstack Developer
Location: Bangalore & Pune
Mandatory Skills:
- Proficiency in Python programming.
- Hands-on experience with AWS services and migration.
- Experience in developing cloud-based applications and pipelines.
- Familiarity with DynamoDB, OpenSearch, and Terraform (preferred).
- Solid understanding of front-end technologies: ReactJS, JavaScript, TypeScript, HTML, and CSS.
- Experience with Agile methodologies, Git, CI/CD, and Docker.
- Knowledge of Linux (preferred).
Preferred Skills:
- Understanding of ADAS (Advanced Driver Assistance Systems) and automotive technologies.
- AWS Certification is a plus.
Why Join InfoGrowth?
- Work on cutting-edge technology in a fast-paced environment.
- Collaborate with talented professionals passionate about driving change in the automotive and tech industries.
- Opportunities for professional growth and development through exciting projects.
🔗 Apply Now to elevate your career with InfoGrowth and make a difference in the automotive sector!
Scoutflo is a platform that automates complex infrastructure requirements for Kubernetes.
Job Description:
- In-depth knowledge of full-stack development principles and best practices.
- Expertise in building web applications with strong proficiency in Node.js, React, and Go.
- Experience developing and consuming RESTful & gRPC API Protocols.
- Familiarity with CI/CD workflows and DevOps processes.
- Solid understanding of cloud platforms and container orchestration technologies.
- Experience with Kubernetes pipelines and workflows using tools like Argo CD.
- Experience with designing and building user-friendly interfaces.
- Excellent understanding of distributed systems, databases, and APIs.
- A passion for writing clean, maintainable, and well-documented code.
- Strong problem-solving skills and the ability to work independently as well as collaboratively.
- Excellent communication and interpersonal skills.
- Experience with building self-serve platforms or user onboarding experiences.
- Familiarity with Infrastructure as Code (IaC) tools like Terraform.
- A strong understanding of security best practices for Kubernetes deployments.
- A good grasp of setting up network architecture for distributed systems.
Must have:
1) Experience with managing Infrastructure on AWS/GCP or Azure
2) Managed Infrastructure on Kubernetes
As a Kafka Administrator at Cargill you will work across the full set of data platform technologies, spanning on-prem and SaaS solutions, empowering highly performant, modern data-centric solutions. Your work will play a critical role in enabling analytical insights and process efficiencies for Cargill’s diverse and complex business environments. You will work in a small team that shares your passion for building, configuring, and supporting platforms while sharing, learning, and growing together.
- Develop and recommend improvements to standard and moderately complex application support processes and procedures.
- Review, analyze and prioritize incoming incident tickets and user requests.
- Perform programming, configuration, testing and deployment of fixes or updates for application version releases.
- Implement security processes to protect data integrity and ensure regulatory compliance.
- Keep an open channel of communication with users and respond to standard and moderately complex application support requests and needs.
MINIMUM QUALIFICATIONS
- Minimum of 2-4 years of experience
- Knowledge of Kafka cluster management, alerting/monitoring, and performance tuning
- Full-ecosystem Kafka administration (Kafka, ZooKeeper, kafka-rest, Connect)
- Experience implementing Kerberos security
- Preferred:
- Experience in Linux system administration
- Authentication plugin experience such as basic, SSL, and Kerberos
- Production incident support including root cause analysis
- AWS EC2
- Terraform
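The "alerting/monitoring" side of Kafka cluster management listed above usually centers on consumer lag: the gap between each partition's log-end offset and the consumer group's committed offset. A minimal sketch of the computation (the offset dictionaries stand in for values you would fetch from the cluster, e.g. via an admin client):

```python
def consumer_lag(end_offsets, committed_offsets):
    """Per-partition consumer lag.

    end_offsets:       {(topic, partition): log-end offset}
    committed_offsets: {(topic, partition): committed offset}
    A partition with no committed offset is treated as fully lagging.
    """
    lag = {}
    for tp, end in end_offsets.items():
        committed = committed_offsets.get(tp, 0)
        lag[tp] = max(end - committed, 0)
    return lag
```

Alerting on sustained growth of these per-partition values (rather than on a single snapshot) is the usual way to catch a stuck or under-provisioned consumer group before it becomes an incident.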
The Cloud Engineer will design and develop the capabilities of the company cloud platform and the automation of application deployment pipelines to the cloud platform. In this role, you will be an essential partner and technical specialist for cloud platform development and Operations.
Key Accountabilities
Participate in a dynamic development environment to engineer evolving customer solutions on Azure. Support SAP Application teams for their requirements related to Application and release management. Develop automation capabilities in the cloud platform to enable provisioning and upgrades of cloud services.
Design continuous integration delivery pipelines with infrastructure as code, automation, and testing capabilities to facilitate automated deployment of applications.
Develop testable code to automate Cloud platform capabilities and Cloud platform observability tools. Engineering support to implementation/ POC of new tools and techniques.
Independently handle support of critical SAP Applications Infrastructure deployed on Azure. Other duties as assigned.
Qualifications
Minimum Qualifications
Bachelor’s degree in a related field or equivalent experience
Minimum of 5 years of related work experience
PREFERRED QUALIFICATIONS
Supporting complex application development activities in a DevOps environment.
Building and supporting fully automated cloud platform solutions as Infrastructure as Code. Working with cloud services platforms, primarily Azure, and automating the cloud infrastructure life cycle with tools such as Terraform and GitHub Actions.
Experience with scripting and programming languages such as Python, Go, and PowerShell.
Good knowledge of applying the Azure Cloud Adoption Framework and implementing the Microsoft Well-Architected Framework.
Experience with Observability Tools, Cloud Infrastructure security services on Azure, Azure networking topologies and Azure Virtual WAN.
Experience automating Windows and Linux operating system deployments and management in automatically scaling deployments.
Good to have:
1) Managing cloud infrastructure using IaC methods: Terraform and Go
2) Knowledge of complex enterprise networking: LAN, WAN, VNet, VLAN
3) Good understanding of application architecture: databases, tiered architecture
DevOps Engineer (Permanent)
Experience: 8 to 12 yrs
Location: Remote for 2-3 months (Any Mastek Location- Chennai/Mumbai/Pune/Noida/Gurgaon/Ahmedabad/Bangalore)
Max Salary = 28 LPA (including 10% variable)
Notice Period: Immediate / max 10 days
Mandatory Skills: Either Splunk/Datadog, GitLab, Retail Domain
· Bachelor’s degree in Computer Science/Information Technology, or in a related technical field or equivalent technology experience.
· 10+ years’ experience in software development
· 8+ years of experience in DevOps
· Mandatory Skills: Either Splunk/Datadog, GitLab, EKS, Retail domain experience
· Experience with the following Cloud Native tools: Git, Jenkins, Grafana, Prometheus, Ansible, Artifactory, Vault, Splunk, Consul, Terraform, Kubernetes
· Working knowledge of containers, i.e., Docker and Kubernetes, ideally with experience transitioning an organization through their adoption
· Demonstrable experience with configuration, orchestration, and automation tools such as Jenkins, Puppet, Ansible, Maven, and Ant to provide full stack integration
· Strong working knowledge of enterprise platforms, tools and principles including Web Services, Load Balancers, Shell Scripting, Authentication, IT Security, and Performance Tuning
· Demonstrated understanding of system resiliency, redundancy, failovers, and disaster recovery
· Experience working with a variety of vendor APIs including cloud, physical and logical infrastructure devices
· Strong working knowledge of Cloud offerings & Cloud DevOps Services (EC2, ECS, IAM, Lambda, Cloud services, AWS CodeBuild, CodeDeploy, Code Pipeline etc or Azure DevOps, API management, PaaS)
· Experience managing and deploying Infrastructure as Code, using tools like Terraform, Helm charts, etc.
· Manage and maintain standards for DevOps tools used by the team
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level.
As a Software Engineer III at JPMorgan Chase within the Asset & Wealth Management, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.
Job responsibilities
- Executes software solutions, design, development, and technical troubleshooting with ability to think beyond routine or conventional approaches to build solutions or break down technical problems
- Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
- Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
- Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
- Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
- Contributes to software engineering communities of practice and events that explore new and emerging technologies
- Adds to team culture of diversity, equity, inclusion, and respect
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 3+ years applied experience
- Expert-level programming in Python. Experience designing and building APIs using popular frameworks such as Flask and FastAPI
- Familiar with site reliability concepts, principles, and practices
- Experience maintaining cloud-based infrastructure
- Familiar with observability practices such as white-box and black-box monitoring, service level objective alerting, and telemetry collection using tools such as Grafana, Dynatrace, Prometheus, Datadog, Splunk, and others
- Emerging knowledge of software, applications and technical processes within a given technical discipline (e.g., Cloud, artificial intelligence, Android, etc.)
- Emerging knowledge of continuous integration and continuous delivery tools (e.g., Jenkins, Jules, Spinnaker, BitBucket, GitLab, Terraform, etc.)
- Emerging knowledge of common networking technologies
Preferred qualifications, capabilities, and skills
- General knowledge of financial services industry
- Experience working on public cloud environment using wrappers and practices that are in use at JPMC
- Knowledge on Terraform, containers and container orchestration, especially Kubernetes preferred
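The "service level objective alerting" referenced in the qualifications above is typically grounded in error-budget arithmetic: an availability SLO of, say, 99.9% tolerates 0.1% of requests failing, and alerts fire as that budget is consumed. A minimal sketch of the calculation (function name and parameters are illustrative):

```python
def error_budget_remaining(slo, total_requests, failed_requests):
    """Fraction of the error budget left in a window.

    slo: availability target as a fraction, e.g. 0.999 for 99.9%.
    Returns 1.0 when no budget has been spent, 0.0 when exhausted.
    """
    budget = (1 - slo) * total_requests  # failures the SLO allows
    if budget == 0:
        return 0.0 if failed_requests else 1.0
    return max(1 - failed_requests / budget, 0.0)
```

Burn-rate alerting builds directly on this: paging when the budget is being consumed much faster than the window allows, rather than on any single failed request.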
Position: Senior Backend Developer (NodeJS)
Experience: 5+ Years
Location: Bengaluru
CodeCraft Technologies is a multi-award-winning creative engineering company offering design and technology solutions on mobile, web, and cloud platforms.
We are looking for an enthusiastic and self-driven Backend Engineer to join our team.
Roles and Responsibilities:
● Develop high-quality software design and architecture
● Identify, prioritize and execute tasks in the software development life cycle.
● Develop tools and applications by producing clean, efficient code
● Automate tasks through appropriate tools and scripting
● Review and debug code
● Perform validation and verification testing
● Collaborate with cross-functional teams to fix and improve products
● Document development phases and monitor systems
● Ensure software is up-to-date with the latest technologies
Desired Profile:
● NodeJS [Typescript]
● MongoDB [NoSQL DB]
● MySQL, PostgreSQL
● AWS - S3, Lambda, API Gateway, CloudWatch, ECR, ECS, Fargate, SQS/SNS
● Terraform, Kubernetes, Docker
● Good Understanding of Serverless Architecture
● Proven experience as a Senior Software Engineer
● Extensive experience in software development, scripting and project management
● Experience using system monitoring tools (e.g. New Relic) and automated testing frameworks
● Familiarity with various operating systems (Linux, Mac OS, Windows)
● Analytical mind with problem-solving aptitude
● Ability to work independently
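The serverless architecture understanding listed above usually shows up concretely as Lambda handlers fronted by API Gateway. A minimal, hedged sketch of the handler shape (the event fields and response format here follow the common proxy-integration convention; field names in the body are purely illustrative):

```python
import json

def handler(event, context):
    """Minimal AWS Lambda-style handler returning a proxy-style response.

    event:   dict of request data (here just an optional "name" key).
    context: runtime metadata; unused in this sketch.
    """
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Keeping the handler a plain function over plain dicts like this is what makes serverless code easy to unit test locally, without deploying.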
Good to Have:
● Actively contribute to relevant open-source projects, demonstrating a commitment to community collaboration and continuous learning.
● Share knowledge and insights gained from open-source contributions with the development team
● AWS Solutions Architect Professional Certification
● AWS DevOps Professional Certification
● Multi-Cloud/ hybrid cloud experience
● Experience in building CI/CD pipelines using AWS services
Now, more than ever, the Toast team is committed to our customers. We’re taking steps to help restaurants navigate these unprecedented times with technology, resources, and community. Our focus is on building a restaurant platform that helps restaurants adapt, take control, and get back to what they do best: building the businesses they love. And because our technology is purpose-built for restaurants by restaurant people, restaurants can trust that we’ll deliver on their needs for today while investing in experiences that will power their restaurant of the future.
At Toast, our Site Reliability Engineers (SREs) are responsible for keeping all customer-facing services and other Toast production systems running smoothly. SREs are a blend of pragmatic operators and software craftspeople who apply sound software engineering principles, operational discipline, and mature automation to our environments and our codebase. Our decisions are based on instrumentation and continuous observability, as well as predictions and capacity planning.
About this roll* (Responsibilities)
- Gather and analyze metrics from both operating systems and applications to assist in performance tuning and fault finding
- Partner with development teams to improve services through rigorous testing and release procedures
- Participate in system design consulting, platform management, and capacity planning
- Create sustainable systems and services through automation and uplift
- Balance feature development speed and reliability with well-defined service level objectives
Troubleshooting and Supporting Escalations:
- Diagnose performance bottlenecks and implement optimizations across infrastructure, databases, web, and mobile applications
- Implement strategies to increase system reliability and performance through on-call rotation and process optimization
- Run blameless RCAs on incidents and outages, aggressively looking for answers that will prevent the incident from ever happening again
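The "well-defined service level objectives" mentioned in the responsibilities above translate directly into downtime allowances: each extra "nine" of availability cuts the permissible outage time by a factor of ten. A minimal sketch of the arithmetic (window length is an assumption; 30 days is a common SLO window):

```python
def allowed_downtime_minutes(availability, window_days=30):
    """Minutes of downtime an availability target permits over a window.

    availability: target as a fraction, e.g. 0.999 for "three nines".
    """
    total_minutes = window_days * 24 * 60
    return (1 - availability) * total_minutes
```

For example, 99.9% over a 30-day window allows about 43 minutes of downtime, while 99.99% allows only about 4.3, which is why higher targets demand the automation and on-call discipline this role describes.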
Do you have the right ingredients? (Requirements)
- Extensive industry experience with at least 7+ years in SRE and/or DevOps roles
- Polyglot technologist/generalist with a thirst for learning
- Deep understanding of cloud and microservice architecture and the JVM
- Experience with tools such as APM, Terraform, Ansible, GitHub, Jenkins, and Docker
- Experience developing software or software projects in at least four languages, ideally including two of Go, Python, and Java
- Experience with cloud computing technologies (AWS preferred)
Bread puns are encouraged but not required