
A DevSecOps Staff Engineer integrates security into DevOps practice: designing secure CI/CD
pipelines, building and automating secure cloud infrastructure, and ensuring compliance across
development, operations, and security teams.
Responsibilities
• Design, build, and maintain secure CI/CD pipelines, applying DevSecOps principles and
practices to increase automation and reduce manual intervention in the process
• Integrate SAST, DAST, SCA, and similar tools into pipelines to automate application
building, testing, securing, and deployment.
• Implement security controls for cloud platforms (AWS, GCP), including IAM, container
security (EKS/ECS), and data encryption for services like S3 or BigQuery, etc.
• Automate vulnerability scanning, monitoring, and compliance processes by collaborating
with DevOps and Development teams to minimize risks in deployment pipelines.
• Recommend architecture and process improvements.
• Review cloud deployment architectures and implement required security controls.
• Mentor other engineers on security practices and processes.
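To make the pipeline-gating idea in the bullets above concrete, here is a minimal sketch of a CI step that fails the build when a scanner's JSON report contains high-severity findings. The report format, field names, and `gate` helper are illustrative assumptions, not tools or schemas named in this posting:

```python
import json
import sys

SEVERITY_RANK = {"LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}

def gate(findings, threshold="HIGH"):
    """Return only the findings at or above `threshold` severity."""
    floor = SEVERITY_RANK[threshold]
    return [
        f for f in findings
        if SEVERITY_RANK.get(f.get("severity", "LOW"), 1) >= floor
    ]

if __name__ == "__main__":
    # Hypothetical report path produced by an earlier SAST/SCA scan step.
    with open(sys.argv[1]) as fh:
        findings = json.load(fh)["findings"]
    blocking = gate(findings)
    for f in blocking:
        print(f"{f['severity']}: {f.get('rule', '?')} in {f.get('file', '?')}")
    # A non-zero exit code fails the pipeline stage, blocking deployment.
    sys.exit(1 if blocking else 0)
```

In a real pipeline this script would run as a step after the scanner, with the severity threshold tuned per environment (e.g. stricter for production deploys).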
Requirements
• Bachelor's degree, preferably in CS or a related field, or equivalent experience
• 10+ years of overall industry experience, with the AWS Certified Security – Specialty
certification
• Must have implementation experience with security tools and processes related to SAST,
DAST, and penetration testing
• AWS-specific: 5+ years' experience using a broad range of AWS technologies (e.g.
EC2, RDS, ELB, S3, VPC, CloudWatch) to develop and maintain an AWS-based
cloud solution, with an emphasis on cloud-security best practices.
• Experienced with CI/CD tool chain (GitHub Actions, Packages, Jenkins, etc.)
• Passionate about solving security challenges and staying informed about emerging
security threats and security technologies.
• Must be familiar with the OWASP Top 10 Security Risks and Controls
• Good skills in at least one scripting language: Python or Bash
• Good knowledge of Kubernetes, Docker Swarm, or other cluster-management software.
• Willing to work in shifts as required
Good to Have
• AWS Certified DevOps Engineer
• Observability: Experience with system monitoring tools (e.g. CloudWatch, New Relic,
etc.).
• Experience with Terraform/Ansible/Chef/Puppet
• Operating Systems: Windows and Linux system administration.
Perks:
● Day off on the 3rd Friday of every month (one long weekend each month)
● Monthly Wellness Reimbursement Program to promote health and well-being
● Monthly Office Commutation Reimbursement Program
● Paid paternity and maternity leaves

About Forbes Advisor
Forbes Advisor is a global consumer-finance platform under the Forbes brand, dedicated to simplifying decisions around money, insurance, loans, banking, real estate and more. Its mission is “Smart Financial Decisions Made Simple.”
The site publishes deep research-backed reviews, product comparisons, advice and guides — all of which help people make confident financial choices.
Headquartered in Jersey City, New Jersey (US), at 499 Washington Blvd.
Similar jobs
Job Description: DevOps Engineer
Location: Bangalore / Hybrid / Remote
Company: LodgIQ
Industry: Hospitality / SaaS / Machine Learning
About LodgIQ
Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the
travel industry. By leveraging machine learning and artificial intelligence, we enable precise
forecasting and optimized pricing for hotel revenue management. Backed by Highgate
Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a
global presence.
Role Summary:
We are seeking a Senior DevOps Engineer with 5+ years of strong hands-on experience in
AWS, Kubernetes, CI/CD, infrastructure as code, and cloud-native technologies. This
role involves designing and implementing scalable infrastructure, improving system
reliability, and driving automation across our cloud ecosystem.
Key Responsibilities:
• Architect, implement, and manage scalable, secure, and resilient cloud
infrastructure on AWS
• Lead DevOps initiatives including CI/CD pipelines, infrastructure automation,
and monitoring
• Deploy and manage Kubernetes clusters and containerized microservices
• Define and implement infrastructure as code using
Terraform/CloudFormation
• Monitor production and staging environments using tools like CloudWatch,
Prometheus, and Grafana
• Support MongoDB and MySQL database administration and optimization
• Ensure high availability, performance tuning, and cost optimization
• Guide and mentor junior engineers, and enforce DevOps best practices
• Drive system security, compliance, and audit readiness in cloud environments
• Collaborate with engineering, product, and QA teams to streamline release
processes
Required Qualifications:
• 5+ years of DevOps/Infrastructure experience in production-grade environments
• Strong expertise in AWS services: EC2, EKS, IAM, S3, RDS, Lambda, VPC, etc.
• Proven experience with Kubernetes and Docker in production
• Proficient with Terraform, CloudFormation, or similar IaC tools
• Hands-on experience with CI/CD pipelines using Jenkins, GitHub Actions, or
similar
• Advanced scripting in Python, Bash, or Go
• Solid understanding of networking, firewalls, DNS, and security protocols
• Exposure to monitoring and logging stacks (e.g., ELK, Prometheus, Grafana)
• Experience with MongoDB and MySQL in cloud environments
Preferred Qualifications:
• AWS Certified DevOps Engineer or Solutions Architect
• Experience with service mesh (Istio, Linkerd), Helm, or ArgoCD
• Familiarity with Zero Downtime Deployments, Canary Releases, and Blue/Green
Deployments
• Background in high-availability systems and incident response
• Prior experience in a SaaS, ML, or hospitality-tech environment
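As a rough illustration of the canary-release idea mentioned in the preferred qualifications, traffic is typically shifted to a new version in increasing increments while the stable version absorbs the remainder. The linear ramp and `canary_weights` helper below are hypothetical, not any specific tool's API:

```python
def canary_weights(steps: int = 5):
    """Yield (canary %, stable %) traffic splits for a linear ramp.

    Each step moves an equal share of traffic to the canary until it
    receives 100%; rollout tools apply these weights between steps,
    pausing to check error rates before advancing.
    """
    for i in range(1, steps + 1):
        canary = round(100 * i / steps)
        yield canary, 100 - canary

# A 4-step ramp: 25% -> 50% -> 75% -> 100% canary traffic.
splits = list(canary_weights(4))
```

In practice, tools such as ArgoCD Rollouts or a service mesh apply weights like these declaratively, with automated rollback if health checks fail at any step.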
Tools and Technologies You’ll Use:
• Cloud: AWS
• Containers: Docker, Kubernetes, Helm
• CI/CD: Jenkins, GitHub Actions
• IaC: Terraform, CloudFormation
• Monitoring: Prometheus, Grafana, CloudWatch
• Databases: MongoDB, MySQL
• Scripting: Bash, Python
• Collaboration: Git, Jira, Confluence, Slack
Why Join Us?
• Competitive salary and performance bonuses.
• Remote-friendly work culture.
• Opportunity to work on cutting-edge tech in AI and ML.
• Collaborative, high-growth startup environment.
• For more information, visit http://www.lodgiq.com
Job Title: Senior DevOps Engineer
Location: Sector 39, Gurgaon (Onsite)
Employment Type: Full-Time
Working Days: 6 Days (Alternate Saturdays Working)
Experience Required: 5+ Years
Team Role: Lead & Mentor a team of 3–4 engineers
About the Role
We are seeking a highly skilled Senior DevOps Engineer to lead our infrastructure and automation initiatives while mentoring a small team. This role involves setting up and managing physical and cloud-based servers, configuring storage systems, and implementing automation to ensure high system availability and reliability. The ideal candidate will have strong Linux administration skills, hands-on experience with DevOps tools, and the leadership capabilities to guide and grow the team.
Key Responsibilities
Infrastructure & Server Management (60%)
- Set up, configure, and manage bare-metal (physical) servers as well as cloud-based environments.
- Configure network bonding, firewalls, and system security for optimal performance and reliability.
- Implement and maintain high-availability solutions for mission-critical systems.
Queue Systems (Kafka / RabbitMQ) (15%)
- Deploy and manage message queue systems to support high-throughput, real-time data exchange.
- Ensure reliable event-driven communication between distributed services.
Storage Systems (SAN/NAS) (15%)
- Configure and manage Storage Area Networks (SAN) and Network Attached Storage (NAS).
- Optimize storage performance, redundancy, and availability.
Database Administration (5%)
- Administer and optimize MariaDB, MySQL, MongoDB, Redis, and Elasticsearch.
- Handle backup, recovery, replication, and performance tuning.
General DevOps & Automation
- Deploy product updates, patches, and fixes while ensuring minimal downtime.
- Design and manage CI/CD pipelines using Jenkins or similar tools.
- Administer and automate workflows with Docker, Kubernetes, Ansible, AWS, and Git.
- Manage web and application servers (Apache httpd, Tomcat).
- Implement monitoring, logging, and alerting systems (Nagios, HAProxy, Keepalived).
- Conduct root cause analysis and implement automation to reduce manual interventions.
- Mentor a team of 3–4 engineers, fostering best practices and continuous improvement.
Required Skills & Qualifications
✅ 5+ years of proven DevOps engineering experience
✅ Strong expertise in Linux administration & shell scripting
✅ Hands-on experience with bare-metal server management & storage systems
✅ Proficiency in Docker, Kubernetes, AWS, Jenkins, Git, and Ansible
✅ Experience with Kafka or RabbitMQ in production environments
✅ Knowledge of CI/CD, automation, monitoring, and high-availability tools (Nagios, HAProxy, Keepalived)
✅ Excellent problem-solving, troubleshooting, and leadership abilities
✅ Strong communication skills with the ability to mentor and lead teams
Good to Have
- Experience in Telecom projects involving SMS, voice, or real-time data handling.
We are seeking an experienced Lead DevOps Engineer with deep expertise in Kubernetes infrastructure design and implementation. This role requires someone who can architect, build, and manage enterprise-grade Kubernetes clusters from the ground up. You’ll lead modernization initiatives, shape infrastructure strategy, and work with cutting-edge cloud-native technologies.
🚀 Key Responsibilities
Infrastructure Design & Implementation
- Architect and design enterprise-grade Kubernetes clusters across AWS, Azure, and GCP.
- Build production-ready Kubernetes infrastructure with HA, scalability, and security best practices.
- Implement Infrastructure as Code with Terraform, Helm, and GitOps workflows.
- Set up monitoring, logging, and observability for Kubernetes workloads.
- Design and execute backup and disaster recovery strategies for containerized applications.
Leadership & Team Management
- Lead a team of 3–4 DevOps engineers, providing technical mentorship.
- Drive best practices in containerization, orchestration, and cloud-native development.
- Collaborate with development teams to optimize deployment strategies.
- Conduct code reviews and maintain infrastructure quality standards.
- Build knowledge-sharing culture with documentation and training.
Operational Excellence
- Manage and scale CI/CD pipelines integrated with Kubernetes.
- Implement security policies (RBAC, network policies, container scanning).
- Optimize cluster performance and cost-efficiency.
- Automate operations to minimize manual interventions.
- Ensure 99.9% uptime for production workloads.
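The 99.9% uptime target above implies a concrete monthly error budget; a quick sketch of the arithmetic (assuming a 30-day month):

```python
def downtime_budget(sla: float, days: int = 30) -> float:
    """Minutes of allowed downtime over `days` for a given SLA fraction."""
    return (1.0 - sla) * days * 24 * 60

# "Three nines" allows roughly 43 minutes of downtime per month;
# "four nines" shrinks that to about 4 minutes.
three_nines = downtime_budget(0.999)   # ~43.2 minutes/month
four_nines = downtime_budget(0.9999)   # ~4.32 minutes/month
```

This error budget is what makes the uptime target actionable: planned maintenance, failed deploys, and incidents all draw from the same ~43-minute monthly allowance.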
Strategic Planning
- Define the infrastructure roadmap aligned with business needs.
- Evaluate and adopt new cloud-native technologies.
- Perform capacity planning and cloud cost optimization.
- Drive risk assessment and mitigation strategies.
🛠 Must-Have Technical Skills
Kubernetes Expertise
- 6+ years of hands-on Kubernetes experience in production.
- Deep knowledge of Kubernetes architecture (etcd, API server, scheduler, kubelet).
- Advanced Kubernetes networking (CNI, Ingress, Service mesh).
- Strong grasp of Kubernetes storage (CSI, PVs, StorageClasses).
- Experience with Operators and Custom Resource Definitions (CRDs).
Infrastructure as Code
- Terraform (advanced proficiency).
- Helm (developing and managing complex charts).
- Config management tools (Ansible, Chef, Puppet).
- GitOps workflows (ArgoCD, Flux).
Cloud Platforms
- Hands-on experience with at least 2 of the following:
- AWS: EKS, EC2, VPC, IAM, CloudFormation
- Azure: AKS, VNets, ARM templates
- GCP: GKE, Compute Engine, Deployment Manager
CI/CD & DevOps Tools
- Jenkins, GitLab CI, GitHub Actions, Azure DevOps
- Docker (advanced optimization and security practices)
- Container registries (ECR, ACR, GCR, Docker Hub)
- Strong Git workflows and branching strategies
Monitoring & Observability
- Prometheus & Grafana (metrics and dashboards)
- ELK/EFK stack (centralized logging)
- Jaeger/Zipkin (tracing)
- AlertManager (intelligent alerting)
💡 Good-to-Have Skills
- Service Mesh (Istio, Linkerd, Consul)
- Serverless (Knative, OpenFaaS, AWS Lambda)
- Running databases in Kubernetes (Postgres, MongoDB operators)
- ML pipelines (Kubeflow, MLflow)
- Security tools (Aqua, Twistlock, Falco, OPA)
- Compliance (SOC2, PCI-DSS, GDPR)
- Python/Go for automation
- Advanced Shell scripting (Bash/PowerShell)
🎓 Qualifications
- Bachelor’s in Computer Science, Engineering, or related field.
- Certifications (preferred):
- Certified Kubernetes Administrator (CKA)
- Certified Kubernetes Application Developer (CKAD)
- Cloud provider certifications (AWS/Azure/GCP).
Experience
- 6–7 years of DevOps/Infrastructure engineering.
- 4+ years of Kubernetes in production.
- 2+ years in a lead role managing teams.
- Experience with large-scale distributed systems and microservices.
Experience - 2+ Years
Requirements:
● Should have at least 2+ years of DevOps experience
● Should have experience with Kubernetes
● Should have experience with Terraform/Helm
● Should have experience in building scalable server-side systems
● Should have experience in cloud infrastructure and designing databases
● Having experience with NodeJS/TypeScript/AWS is a bonus
● Having experience with WebRTC is a bonus

We are now seeking a talented and motivated individual to contribute to our product in the cloud
data-protection space. The role requires the ability to clearly understand customer needs in a
cloud environment, excellent troubleshooting skills, and the focus to see problems through to resolution.
Responsibilities Include:
Review proposed feature requirements
Create test plan and test cases
Analyze performance; diagnose and troubleshoot issues
Enter and track defects
Interact with customers, partners, and development teams
Research customer issues and product initiatives
Provide input for service documentation
Required Skills:
Bachelor's degree in Computer Science, Information Systems or related discipline
3+ years' experience, inclusive of Software-as-a-Service and/or DevOps engineering
Experience with AWS services like VPC, EC2, RDS, SES, ECS, Lambda, S3, ELB
Experience with technologies such as REST, Angular, Messaging, Databases, etc.
Strong troubleshooting skills and issue isolation skills
Possess excellent communication skills (written and verbal English)
Must be able to work as an individual contributor within a team
Ability to think outside the box
Experience in configuring infrastructure
Knowledge of CI / CD
Desirable skills:
Programming skills in scripting languages (e.g., python, bash)
Knowledge of Linux administration
Knowledge of testing tools/frameworks: TestNG, Selenium, etc
Knowledge of Identity and Security
Candidates must have a minimum of 8 years of IT experience, be available to work in the IST
time zone, and have hands-on experience with DevOps tooling
(GitLab, Artifactory, SonarQube, AquaSec, Terraform, and Docker/Kubernetes).
Why LiftOff?
We at LiftOff specialize in product creation; our main forte lies in helping entrepreneurs realize their dreams. We have helped businesses and entrepreneurs launch more than 70 products.
Many on the team are serial entrepreneurs with a history of successful exits.
As a DevOps Engineer, you will work directly with our founders and alongside our engineers on a variety of software projects spanning various languages, frameworks, and application architectures.
Must Have
*At least 2 years of hands-on work experience with Kubernetes, preferably on Azure Cloud.
*Well-versed with kubectl
*Experience in using Azure Monitor, setting up analytics and reports for Azure containers and services.
*Monitoring and observability
*Setting Alerts and auto-scaling
Nice to have
*Scripting and automation
*Experience with Jenkins or any sort of CI/CD pipelines
*Past experience in setting up cloud infrastructure, configurations and database backups
*Experience with Azure App Service
*Experience setting up WebSocket-based applications.
*Working knowledge of Azure APIM
We are a group of passionate people driven by core values. We strive to make every process transparent and have flexible work timings along with excellent startup culture and vibe.
- Public clouds, such as AWS, Azure, or Google Cloud Platform
- Automation technologies, such as Kubernetes or Jenkins
- Configuration management tools, such as Puppet or Chef
- Scripting languages, such as Python or Ruby
Role – DevOps
Experience 3 – 6 Years
Roles & Responsibilities –
- 3-6 years of experience in deploying and managing highly scalable fault resilient systems
- Strong experience in container orchestration and server automation tools such as Kubernetes, Google Kubernetes Engine, Docker Swarm, Ansible, and Terraform
- Strong experience with Linux-based infrastructures, Linux/Unix administration, AWS, Google Cloud, Azure
- Strong experience with databases such as MySQL, Hadoop, Elasticsearch, Redis, Cassandra, and MongoDB.
- Knowledge of scripting and programming languages such as Python, Bash, Groovy, PHP, JavaScript, and Java.
- Experience in configuring CI/CD pipelines using Jenkins, GitLab CI, Travis.
- Proficient in technologies such as Docker, Kafka, Raft and Vagrant
- Experience in implementing queueing services such as RabbitMQ, Beanstalkd, and Amazon SQS; knowledge of the Elastic Stack is a plus.

Skills required:
Strong knowledge and experience of cloud infrastructure (AWS, Azure or GCP), systems, network design, and cloud migration projects.
Strong knowledge and understanding of CI/CD processes tools (Jenkins/Azure DevOps) is a must.
Strong knowledge and understanding of Docker & Kubernetes is a must.
Strong knowledge of Python, along with one more language (Shell, Groovy, or Java).
Strong prior experience using automation tools like Ansible, Terraform.
Architect systems, infrastructure & platforms using Cloud Services.
Strong communication skills. Should have demonstrated the ability to collaborate across teams and organizations.
Benefits of working with OpsTree Solutions:
Opportunity to work on the latest cutting edge tools/technologies in DevOps
Knowledge focused work culture
Collaboration with very enthusiastic DevOps experts
High growth trajectory
Opportunity to work with big shots in the IT industry
