
50+ CI/CD Jobs in India

Apply to 50+ CI/CD Jobs on CutShort.io. Find your next job, effortlessly. Browse CI/CD Jobs and apply today!

Poshmark

Posted by Reshika Mendiratta
Chennai
7yrs+
Upto ₹35L / yr (Varies)
Test Automation (QA)
Software Testing (QA)
Python
Java
Appium
+5 more

About Poshmark

Poshmark is a leading fashion resale marketplace powered by a vibrant, highly engaged community of buyers and sellers and real-time social experiences. Designed to make online selling fun, more social and easier than ever, Poshmark empowers its sellers to turn their closet into a thriving business and share their style with the world. Since its founding in 2011, Poshmark has grown its community to over 130 million users and generated over $10 billion in GMV, helping sellers realize billions in earnings, delighting buyers with deals and one-of-a-kind items, and building a more sustainable future for fashion. For more information, please visit www.poshmark.com, and for company news, visit newsroom.poshmark.com.


About the role

We are looking for a Lead Software Development Engineer In Test (Lead SDET) who will define, design, and drive the automation and quality engineering strategy across Poshmark. You will take a hands-on leadership role in building scalable test automation frameworks and infrastructure while partnering closely with Engineering, Product, and QA Engineering teams.

You will have a significant impact on the quality of Poshmark’s growing products and services by creating the next generation of tools and frameworks that enable faster development, better testability, and higher-confidence releases. You will influence software design and promote strong engineering practices.


Responsibilities

  • Test Harnesses and Infrastructure: Design, implement, and maintain scalable test harnesses and testing infrastructure for web, mobile (iOS & Android), and APIs. Own the long-term architecture and evolution of automation frameworks. Leverage AI-assisted tools to improve test design, increase automation coverage, and reduce manual effort across web, mobile, and API testing.
  • Automation Framework Leadership: Lead the design, enhancement, and optimization of automation frameworks using tools such as Selenium, Appium, Postman, and AI-driven solutions. Ensure frameworks are reliable, maintainable, and scalable, while establishing and enforcing strong coding and testing standards through regular code reviews.
  • Product Quality and Engineering Partnership: Actively monitor product development and usage to identify quality gaps and risks. Partner with developers to improve testability, prevent defects, and integrate testing early in the development lifecycle. Embed automation into CI/CD pipelines to enable continuous testing and rapid feedback.
  • Metrics, Reporting, and Continuous Improvement: Define and track quality metrics such as automation coverage, defect trends, and execution stability. Create automated reporting solutions to support data-driven quality decisions. Continuously improve testing processes, tools, and workflows across teams.
  • Leadership and Mentorship: Mentor and guide the team on automation best practices and framework design, while translating complex initiatives into clear, actionable goals that enable effective execution.
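
The framework-design work above can be sketched with a minimal page-object pattern. This is illustrative only: `FakeDriver`, `LoginPage`, and the selector are hypothetical stand-ins for a real Selenium or Appium WebDriver and a real page.

```python
# Minimal page-object sketch. FakeDriver is a hypothetical stand-in for a
# Selenium/Appium WebDriver; a real harness would inject the real driver.

class FakeDriver:
    """Pretends to be a WebDriver: records navigation, returns canned text."""
    def __init__(self):
        self.url = None

    def get(self, url):
        self.url = url

    def find_element_text(self, selector):
        return "Welcome back" if selector == "#banner" else ""

class LoginPage:
    """Page object: encapsulates URL and selectors so tests never touch raw locators."""
    URL = "https://example.test/login"

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def banner_text(self):
        return self.driver.find_element_text("#banner")

def test_login_banner(driver=None):
    page = LoginPage(driver or FakeDriver()).open()
    assert page.banner_text() == "Welcome back"
    return page.driver.url

print(test_login_banner())
```

In a real harness the same `LoginPage` would receive an actual WebDriver instance, so tests stay unchanged when locators or drivers change.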


6-Month Accomplishments

  • Stabilize and enhance existing automation frameworks to improve reliability and execution consistency
  • Use AI-driven insights to continuously optimize test coverage, execution efficiency, and defect detection effectiveness
  • Collaborate with development teams to add new capabilities to the automation framework
  • Ensure regular, stable execution of regression tests across multiple platforms
  • Identify and resolve high-impact quality issues early in the development cycle


12+ Month Accomplishments

  • Establish a scalable, future-ready automation architecture deeply integrated with CI/CD pipelines
  • Guide the team in adopting AI-assisted testing practices that complement existing automation frameworks and quality processes.
  • Lead efforts to optimize test execution time through parallelization and smarter automation strategies
  • Mentor and grow a high-performing automation team with clear technical standards
  • Drive measurable improvements in product quality and reduction of production defects


Qualifications

  • 7+ years of experience in software testing with a strong focus on test automation
  • 4+ years of hands-on programming experience using languages such as Python or Java
  • Proven experience designing and scaling automation systems for web, mobile, and APIs
  • Strong expertise with frameworks and tools such as Appium, Selenium, WebDriver, and CI/CD systems
  • Experience across all phases of automation, including GUI and integration testing
  • Hands-on experience with Jira, Confluence, GitHub, Unix commands, and Jenkins
  • Excellent communication, problem-solving, and technical leadership skills
Celcom Solutions Global

Posted by Bisman Gill
Bengaluru (Bangalore)
8yrs+
Upto ₹18L / yr (Varies)
Java
Spring Boot
Amazon Web Services (AWS)
CI/CD
Docker

Job Title: Technical Lead (Java/Spring Boot/Cloud)

Location: Bangalore

Experience: 8 to 12 Years


Overview

We are seeking a highly accomplished and charismatic Technical Lead to drive the design, development, and delivery of high-volume, scalable, and secure enterprise applications. The ideal candidate will possess deep expertise in the Java ecosystem, particularly with Spring Boot and Microservices Architecture, coupled with significant experience in Cloud Solutions (AWS/Azure) and DevOps practices. This role requires a proven leader capable of setting "big picture" strategy while mentoring a high-performing team.

Key Responsibilities

Architecture Design

  • Lead the architecture and design of complex, scalable, and secure cloud-native applications using Java/J2EE and the Spring Boot Framework.
  • Design and implement Microservices Architecture and RESTful/SOAP APIs.
  • Spearhead Cloud Solution Architecture, including the design and optimization of cloud-based infrastructure deployment with auto-scaling, fault-tolerance, and reliability capabilities (AWS/Azure).
  • Guide teams on applying Architecture Concepts, Architectural Styles, and Design Patterns (e.g., UML, Object-Oriented Analysis and Design).
  • Architect complex migrations of enterprise applications to the cloud.
  • Conduct proofs of concept (PoCs) for new technologies such as Blockchain (Hyperledger) for solutions like Identity Management.

Technical Leadership & Development

  • Lead the entire software development process from conception to completion within an Agile/Waterfall and Cleanroom Engineering environment.
  • Define and enforce best practices and coding standards for Java development, ensuring code quality, security, and performance optimization.
  • Implement and manage CI/CD Pipelines & DevOps Practices to automate software delivery.
  • Oversee cloud migration and transformation programs for enterprise applications, focusing on reducing infrastructure costs and improving scalability.
  • Troubleshoot and resolve complex technical issues related to the Java/Spring Boot stack, databases (SQL Server, Oracle, MySQL, PostgreSQL, Elasticsearch, Redis), and cloud components.
  • Ensure the adoption of Test Driven Development (TDD), Unit Testing, and Mock Test-Driven Development practices.
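
The TDD-and-mocking practice above can be sketched as follows. The listing is Java-centric, so in practice this would use JUnit and Mockito; the shape is the same in this Python-stdlib version, where `OrderService` and the payment gateway are hypothetical names used purely for illustration.

```python
# TDD-with-mocks sketch (Python stdlib; the same pattern maps to
# JUnit/Mockito in a Java/Spring stack). OrderService is hypothetical.
import unittest
from unittest.mock import Mock

class OrderService:
    """Hypothetical service under test: delegates charging to a gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.charge(amount)

class OrderServiceTest(unittest.TestCase):
    def test_charges_gateway_once(self):
        gateway = Mock()                      # mock replaces the real gateway
        gateway.charge.return_value = "receipt-1"
        self.assertEqual(OrderService(gateway).place_order(100), "receipt-1")
        gateway.charge.assert_called_once_with(100)

    def test_rejects_non_positive_amounts(self):
        with self.assertRaises(ValueError):
            OrderService(Mock()).place_order(0)

suite = unittest.TestLoader().loadTestsFromTestCase(OrderServiceTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("tests passed:", result.wasSuccessful())
```

The mock lets the behavior (one charge per order, invalid amounts rejected) be specified before any real gateway exists, which is the point of test-driven development.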

People & Delivery Management

  • Act as a charismatic people leader and transformative force, building and mentoring high-performing teams from the ground up.
  • Drive Delivery Management, collaborating with stakeholders to align technical solutions with business objectives and managing large-scale programs from initiation to delivery.
  • Utilize excellent communication and presentation skills to articulate technical strategies to both technical and non-technical stakeholders.
  • Champion organizational change, driving adoption of new processes, ways of working, and technology platforms.


Required Technical Skills

  • Languages & Frameworks: Java (JDK 1.5+), Spring Core Framework, Spring Batch, Java Server Pages (JSP), Servlets, Apache Struts, JSON, Hibernate.
  • Cloud: Extensive experience with Amazon Web Services (AWS) (Solution Architect certification preferred) and familiarity with Azure.
  • DevOps/Containerization: CI/CD Pipelines, Docker.
  • Databases: Strong proficiency in MS SQL Server, Oracle, MySQL, PostgreSQL, and NoSQL/caching (Elasticsearch, Redis).


Education and Certifications

  • Master's or Bachelor's degree in a relevant field.
  • Certified Amazon Web Services Solution Architect (or equivalent).
  • Experience or certification in leadership is a plus.
Wohlig Transformations Pvt Ltd
Posted by Apoorva Lakshkar
Mumbai
5 - 8 yrs
₹10L - ₹15L / yr
NodeJS (Node.js)
Python
RabbitMQ
PostgreSQL
BigQuery
+5 more


About Allvest :


- AI-driven financial planning and portfolio management platform

- Secure, data-backed portfolio oversight aligned with regulatory standards

- Building cutting-edge fintech solutions for intelligent investment decisions


Role Overview :


- Architect and build scalable, high-performance backend systems

- Work on mission-critical systems handling real-time market data and portfolio analytics

- Ensure regulatory compliance and secure financial transactions


Key Responsibilities :


- Design, develop, and maintain robust backend services and APIs using NodeJS and Python

- Build event-driven architectures using RabbitMQ and Kafka for real-time data processing

- Develop data pipelines integrating PostgreSQL and BigQuery for analytics and warehousing

- Ensure system reliability, performance, and security with focus on low-latency operations

- Lead technical design discussions, code reviews, and mentor junior developers

- Optimize database queries, implement caching strategies, and enhance system performance

- Collaborate with cross-functional teams to deliver end-to-end features

- Implement monitoring, logging, and observability solutions
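
The event-driven responsibilities above follow a producer/consumer shape. A broker-free sketch, with `queue.Queue` standing in for a RabbitMQ queue or Kafka topic (the symbols and prices are invented):

```python
# Event-driven sketch: queue.Queue stands in for a RabbitMQ/Kafka topic so
# the producer/consumer shape is runnable without a broker.
import queue
import threading

events = queue.Queue()   # stand-in for a broker topic/queue
prices = {}              # consumer-side state: latest price per symbol

def producer():
    for tick in [("AAPL", 191.2), ("TSLA", 244.9), ("AAPL", 191.5)]:
        events.put(tick)  # in production: publish to the broker instead
    events.put(None)      # sentinel: no more events

def consumer():
    while True:
        tick = events.get()
        if tick is None:
            break
        symbol, price = tick      # in production: deserialize + ack
        prices[symbol] = price    # latest tick wins per symbol

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()
print(prices)
```

With a real broker the queue becomes durable and the producer and consumer become separate services, but the decoupling shown here is the core of the architecture.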


Required Skills & Experience :


- 5+ years of professional backend development experience

- Strong expertise in NodeJS and Python for production-grade applications

- Proven experience designing RESTful APIs and microservices architectures

- Strong proficiency in PostgreSQL including query optimization and database design

- Hands-on experience with RabbitMQ and Kafka for event-driven systems

- Experience with BigQuery or similar data warehousing solutions

- Solid understanding of distributed systems, scalability patterns, and high-traffic applications

- Strong knowledge of authentication, authorization, and security best practices in financial applications

- Experience with Git, CI/CD pipelines, and modern development workflows

- Excellent problem-solving and debugging skills across distributed systems


Preferred Qualifications :


- Prior experience in fintech, banking, or financial services

- Familiarity with cloud platforms (GCP/AWS/Azure) and containerization (Docker, Kubernetes)

- Knowledge of frontend technologies for full-stack collaboration

- Experience with Redis or Memcached

- Understanding of regulatory requirements (KYC, compliance, data privacy)

- Open-source contributions or tech community participation


What We Offer :


- Opportunity to work on cutting-edge fintech platform with modern technology stack

- Collaborative environment with experienced team from leading financial institutions

- Competitive compensation with equity participation

- Challenging problems at the intersection of finance, AI, and technology

- Career growth in fast-growing startup environment


Location: Mumbai (Phoenix Market City, Kurla West)


Also Apply at https://wohlig.keka.com/careers/jobdetails/122768



Inflectionio

Posted by Renu Philip
Bengaluru (Bangalore)
3 - 5 yrs
₹20L - ₹30L / yr
Amazon Web Services (AWS)
Kubernetes
Jenkins
Chef
CI/CD
+6 more

We are looking for a DevOps Engineer with hands-on experience in managing production infrastructure using AWS, Kubernetes, and Terraform. The ideal candidate will have exposure to CI/CD tools and queueing systems, along with a strong ability to automate and optimize workflows.


Responsibilities: 

* Manage and optimize production infrastructure on AWS, ensuring scalability and reliability.

* Deploy and orchestrate containerized applications using Kubernetes.

* Implement and maintain infrastructure as code (IaC) using Terraform.

* Set up and manage CI/CD pipelines using tools like Jenkins or Chef to streamline deployment processes.

* Troubleshoot and resolve infrastructure issues to ensure high availability and performance.

* Collaborate with cross-functional teams to define technical requirements and deliver solutions.

* Nice-to-have: Manage queueing systems like Amazon SQS, Kafka, or RabbitMQ.
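
The IaC responsibility above boils down to reconciling desired state with actual state. A toy sketch of the diffing idea behind `terraform plan` (resource names invented; real Terraform also computes in-place updates, not just creates and deletes):

```python
# Toy "declarative IaC" sketch: compute a plan (create/delete/keep) by
# diffing desired vs. actual state -- the core idea behind `terraform plan`.
desired = {"web-1": {"size": "t3.small"}, "web-2": {"size": "t3.small"}}
actual  = {"web-1": {"size": "t3.small"}, "db-1": {"size": "t3.large"}}

def plan(desired, actual):
    to_create = sorted(set(desired) - set(actual))   # declared but absent
    to_delete = sorted(set(actual) - set(desired))   # present but undeclared
    unchanged = sorted(set(desired) & set(actual))   # in both states
    return {"create": to_create, "delete": to_delete, "keep": unchanged}

print(plan(desired, actual))
```

The value of the declarative approach is exactly this: engineers edit the `desired` description, and the tool derives the ordered set of changes needed to converge the real infrastructure.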



Requirements: 

* 2+ years of experience with AWS, including practical exposure to its services in production environments.

* Demonstrated expertise in Kubernetes for container orchestration.

* Proficiency in using Terraform for managing infrastructure as code.

* Exposure to at least one CI/CD tool, such as Jenkins or Chef.

* Nice-to-have: Experience managing queueing systems like SQS, Kafka, or RabbitMQ.

Heaven Designs

Posted by Reshika Mendiratta
Remote only
2yrs+
Upto ₹12L / yr (Varies)
Python
Django
RESTful APIs
DevOps
CI/CD
+8 more

Backend Engineer (Python / Django + DevOps)


Company: SurgePV (A product by Heaven Designs Pvt. Ltd.)


About SurgePV

SurgePV is an AI-first solar design software built from more than a decade of hands-on experience designing and engineering thousands of solar installations at Heaven Designs. After working with nearly every solar design tool in the market, we identified major gaps in speed, usability, and intelligence—particularly for rooftop solar EPCs.

Our vision is to build the most powerful and intuitive solar design platform for rooftop installers, covering fast PV layouts, code-compliant engineering, pricing, proposals, and financing in a single workflow. SurgePV enables small and mid-sized solar EPCs to design more systems, close more deals, and accelerate the clean energy transition globally.

As SurgePV scales, we are building a robust backend platform to support complex geometry, pricing logic, compliance rules, and workflow automation at scale.


Role Overview

We are seeking a Backend Engineer (Python / Django + DevOps) to own and scale SurgePV’s core backend systems. You will be responsible for designing, building, and maintaining reliable, secure, and high-performance services that power our solar design platform.

This role requires strong ownership—you will work closely with the founders, frontend engineers, and product team to make architectural decisions and ensure the platform remains fast, observable, and scalable as global usage grows.


Key Responsibilities

  • Design, develop, and maintain backend services and REST APIs that power PV design, pricing, and core product workflows.
  • Collaborate with the founding team on system architecture, including authentication, authorization, billing, permissions, integrations, and multi-tenant design.
  • Build secure, scalable, and observable systems with structured logging, metrics, alerts, and rate limiting.
  • Own DevOps responsibilities for backend services, including Docker-based containerization, CI/CD pipelines, and production deployments.
  • Optimize PostgreSQL schemas, migrations, indexes, and queries for computation-heavy and geospatial workloads.
  • Implement caching strategies and performance optimizations where required.
  • Integrate with third-party APIs such as CRMs, financing providers, mapping platforms, and satellite or irradiance data services.
  • Write clean, maintainable, well-tested code and actively participate in code reviews to uphold engineering quality.
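
The caching bullet above might look like the following in its simplest form: a TTL-cache decorator. `irradiance_lookup` is a hypothetical expensive call; in production this role would more likely be backed by Redis or Django's cache framework.

```python
# Minimal TTL-cache sketch for the "caching strategies" bullet; Redis or
# Django's cache framework would back this in production.
import time
from functools import wraps

def ttl_cache(seconds):
    def decorator(fn):
        store = {}  # key -> (expiry_timestamp, value)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]            # fresh cache entry: skip the call
            value = fn(*args)
            store[args] = (now + seconds, value)
            return value
        return wrapper
    return decorator

calls = {"n": 0}

@ttl_cache(seconds=60)
def irradiance_lookup(lat, lon):
    """Hypothetical expensive external API call."""
    calls["n"] += 1
    return {"lat": lat, "lon": lon, "kwh_m2": 5.4}

irradiance_lookup(12.97, 77.59)
irradiance_lookup(12.97, 77.59)   # second call served from cache
print(calls["n"])
```

The TTL keeps repeated geometry/irradiance lookups cheap while bounding how stale a cached answer can be.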

Required Skills & Qualifications (Must-Have)

  • 2–5 years of experience as a Backend Engineer.
  • Strong proficiency in Python and Django / Django REST Framework.
  • Solid computer science fundamentals, including data structures, algorithms, and basic distributed systems concepts.
  • Proven experience designing and maintaining REST APIs in production environments.
  • Hands-on DevOps experience, including:
      • Docker and containerized services
      • CI/CD pipelines (GitHub Actions, GitLab CI, or similar)
      • Deployments on cloud platforms such as AWS, GCP, Azure, or DigitalOcean
  • Strong working knowledge of PostgreSQL, including schema design, migrations, indexing, and query optimization.
  • Strong debugging skills and a habit of instrumenting systems using logs, metrics, and alerts.
  • Ownership mindset with the ability to take systems from spec → implementation → production → iteration.

Good-to-Have Skills

  • Experience working in early-stage startups or building 0→1 products.
  • Familiarity with Kubernetes or other container orchestration tools.
  • Experience with Infrastructure as Code (Terraform, Pulumi).
  • Exposure to monitoring and observability stacks such as Prometheus, Grafana, ELK, or similar tools.
  • Prior exposure to solar, CAD/geometry, geospatial data, or financial/pricing workflows.

What We Offer

  • Real-world impact: every feature you ship helps accelerate solar adoption on real rooftops.
  • Opportunity to work across backend engineering, DevOps, integrations, and performance optimization.
  • A mission-driven, fast-growing product focused on sustainability and clean energy.
MyOperator - VoiceTree Technologies

Posted by Vijay Muthu
Remote only
3 - 5 yrs
₹8L - ₹12L / yr
Kubernetes
Amazon Web Services (AWS)
Amazon EC2
AWS RDS
AWS OpenSearch
+22 more

About MyOperator

MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.


Job Summary

We are looking for a skilled and motivated DevOps Engineer with 3+ years of hands-on experience in AWS cloud infrastructure, CI/CD automation, and Kubernetes-based deployments. The ideal candidate will have strong expertise in Infrastructure as Code, containerization, monitoring, and automation, and will play a key role in ensuring high availability, scalability, and security of production systems.


Key Responsibilities

  • Design, deploy, manage, and maintain AWS cloud infrastructure, including EC2, RDS, OpenSearch, VPC, S3, ALB, API Gateway, Lambda, SNS, and SQS.
  • Build, manage, and operate Kubernetes (EKS) clusters and containerized workloads.
  • Containerize applications using Docker and manage deployments with Helm charts
  • Develop and maintain CI/CD pipelines using Jenkins for automated build and deployment processes
  • Provision and manage infrastructure using Terraform (Infrastructure as Code)
  • Implement and manage monitoring, logging, and alerting solutions using Prometheus and Grafana
  • Write and maintain Python scripts for automation, monitoring, and operational tasks
  • Ensure high availability, scalability, performance, and cost optimization of cloud resources
  • Implement and follow security best practices across AWS and Kubernetes environments
  • Troubleshoot production issues, perform root cause analysis, and support incident resolution
  • Collaborate closely with development and QA teams to streamline deployment and release processes
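
As a small example of the Python automation scripts mentioned above, here is a log-scanning sketch that counts ERROR lines per service and flags services over a threshold (the log format and service names are invented):

```python
# Log-monitoring sketch for the "Python scripts for automation, monitoring"
# bullet: count ERROR lines per service, flag anything over a threshold.
from collections import Counter

LOG = """\
2024-05-01T10:00:01 api ERROR timeout talking to RDS
2024-05-01T10:00:02 api INFO request served
2024-05-01T10:00:03 worker ERROR SQS receive failed
2024-05-01T10:00:04 api ERROR timeout talking to RDS
"""

def error_counts(log_text):
    counts = Counter()
    for line in log_text.splitlines():
        parts = line.split()
        # assumed line shape: <timestamp> <service> <level> <message...>
        if len(parts) >= 3 and parts[2] == "ERROR":
            counts[parts[1]] += 1
    return counts

def services_over(counts, threshold):
    return sorted(svc for svc, n in counts.items() if n >= threshold)

counts = error_counts(LOG)
print(dict(counts), services_over(counts, 2))
```

A production version of this kind of script would read from CloudWatch or OpenSearch and push an alert (e.g. to SNS) instead of printing, but the parse-aggregate-threshold loop is the same.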

Required Skills & Qualifications

  • 3+ years of hands-on experience as a DevOps Engineer or Cloud Engineer.
  • Strong experience with AWS services, including:
      • EC2, RDS, OpenSearch, VPC, S3
      • Application Load Balancer (ALB), API Gateway, Lambda
      • SNS and SQS
  • Hands-on experience with AWS EKS (Kubernetes)
  • Strong knowledge of Docker and Helm charts
  • Experience with Terraform for infrastructure provisioning and management
  • Solid experience building and managing CI/CD pipelines using Jenkins
  • Practical experience with Prometheus and Grafana for monitoring and alerting
  • Proficiency in Python scripting for automation and operational tasks
  • Good understanding of Linux systems, networking concepts, and cloud security
  • Strong problem-solving and troubleshooting skills

Good to Have (Preferred Skills)

  • Exposure to GitOps practices
  • Experience managing multi-environment setups (Dev, QA, UAT, Production)
  • Knowledge of cloud cost optimization techniques
  • Understanding of Kubernetes security best practices
  • Experience with log aggregation tools (e.g., ELK/OpenSearch stack)

Language Preference

  • Fluency in English is mandatory.
  • Fluency in Hindi is preferred.
icloudems.com
Posted by AMISHA SRIVASTAVA
Remote only
3 - 6 yrs
₹4L - ₹10L / yr
PHP
SQL
NodeJS (Node.js)
MongoDB
PostgreSQL
+6 more


We are seeking a highly skilled software developer with proven experience in developing and scaling education ERP solutions. The ideal candidate should have strong expertise in Node.js or PHP (Laravel), MySQL, and MongoDB, along with hands-on experience in implementing ERP modules such as HR, Exams, Inventory, Learning Management System (LMS), Admissions, Fee Management, and Finance.


Key Responsibilities

Design, develop, and maintain scalable Education ERP modules.

Work on end-to-end ERP features, including HR, exams, inventory, LMS, admissions, fees, and finance.

Build and optimize REST APIs/GraphQL services and ensure seamless integrations.

Optimize system performance, scalability, and security for high-volume ERP usage.

Conduct code reviews, enforce coding standards, and mentor junior developers.

Stay updated with emerging technologies and recommend improvements for ERP solutions.


Required Skills & Qualifications

Strong expertise in Node.js and PHP (Laravel, Core PHP).

Proficiency with MySQL, MongoDB, and PostgreSQL (database design & optimization).

Frontend knowledge: JavaScript, jQuery, HTML, CSS (React/Vue preferred).

Experience with REST APIs, GraphQL, and third-party integrations (payment gateways, SMS, and email).

Hands-on with Git/GitHub, Docker, and CI/CD pipelines.


Familiarity with cloud platforms (AWS, Azure, GCP) is a plus.

4+ years of professional development experience, with a minimum of 2 years in ERP systems.

Preferred Experience


Prior work in the education ERP domain.

Deep knowledge of HR, Exam, Inventory, LMS, Admissions, Fees & Finance modules.

Exposure to high-traffic enterprise applications.

Strong leadership, mentoring, and problem-solving abilities


Benefit:

Permanent Work From Home

OpsTree Solutions

Posted by Nikita Sinha
Mumbai
3 - 4 yrs
Upto ₹13L / yr (Varies)
Python
Google Cloud Platform (GCP)
Kubernetes
CI/CD

Key Responsibilities

  • Automation & Reliability: Automate infrastructure and operational processes to ensure high reliability, scalability, and security.
  • Cloud Infrastructure Design: Gather GCP infrastructure requirements, evaluate solution options, and implement best-fit cloud architectures.
  • Infrastructure as Code (IaC): Design, develop, and maintain infrastructure using Terraform and Ansible.
  • CI/CD Ownership: Build, manage, and maintain robust CI/CD pipelines using Jenkins, ensuring system reliability and performance.
  • Container Orchestration: Manage Docker containers and self-managed Kubernetes clusters across multiple cloud environments.
  • Monitoring & Observability: Implement and manage cloud-native monitoring solutions using Prometheus, Grafana, and the ELK stack.
  • Proactive Issue Resolution: Troubleshoot and resolve infrastructure and application issues across development, testing, and production environments.
  • Scripting & Automation: Develop efficient automation scripts using Python and one or more of Node.js, Go, or Shell scripting.
  • Security Best Practices: Maintain and enhance the security of cloud services, Kubernetes clusters, and deployment pipelines.
  • Cross-functional Collaboration: Work closely with engineering, product, and security teams to design and deploy secure, scalable infrastructure.


Hiver

Posted by Bisman Gill
HSR Layout, BLR
2 - 4 yrs
Upto ₹14L / yr (Varies)
Selenium
Manual testing
Automation Testing
Software Testing (QA)
CI/CD
+1 more

About us:

Hiver offers teams the simplest way to provide outstanding and personalized customer service. As a customer service solution built on Gmail, Hiver is intuitive, super easy to learn, and delightful to use. Hiver is used by thousands of teams at some of the best-known companies in the world to provide attentive, empathetic, and human service to their customers at scale. We’re a top-rated product on G2 and rank very highly on customer satisfaction. 


At Hiver, we obsess about being world-class at everything we do. Our product is loved by our customers, our content engages a very wide audience, our customer service is one of the highest rated in the industry, and our sales team is as driven about doing right by our customers as they are by hitting their numbers. We’re profitably run and are backed by notable investors. K1 Capital led our most recent round of $27 million. Before that, we raised from Kalaari Capital, Kae Capital, and Citrix Startup Accelerator. 


Opportunity:

We are looking for a QA Engineer whose key goals would be to drive software quality and reduce risk in our releases. This involves both functional testing and building automated tests for our CI systems. Expect lots of challenges and high levels of ownership and autonomy. You will:

  • Come up with testing procedures to validate functional, system, and performance requirements for new features
  • Ensure the quality of releases by running test cases and reporting the results
  • Write and maintain automated test suites for functional and performance testing
  • Keep the manual test cases updated
  • Participate in product feature design and specification with Product Managers, UX engineers, and developers


What We are looking for?


  • 2+ years of total experience in a QA role.
  • Should be able to write quality test cases on the problem statement.
  • 1+ years of experience in QA automation with tools such as Selenium, Playwright, or WebdriverIO.
  • Knowledge of at least one high-level programming language such as Python, JavaScript, or Java.
  • Hands-on experience with Code version control systems like Git.
  • Work with the Customer Support team to reproduce customer problems and provide solutions to customers.


Good to have skills?


  • Experience with RESTful API testing tools like Postman and performance testing tools like JMeter.
  • Hands-on experience with Build and Continuous Integration (CI) systems like Jenkins.
  • Experience working with Linux/Unix platforms and security aspects of testing is a plus.



Appiness Interactive Pvt. Ltd.
Bengaluru (Bangalore)
4 - 10 yrs
₹6L - ₹12L / yr
Python
Django
React.js
NextJs (Next.js)
PostgreSQL
+2 more

Location: Bengaluru, India

Type: Full-time

Experience: 4-7 Years

Mode: Hybrid


The Role

We're looking for a Full Stack Engineer who thrives on building high-performance applications at scale. You'll work across our entire stack—from optimizing PostgreSQL queries on 140M+ records to crafting intuitive React interfaces. This is a high-impact role where your code directly influences how sales teams discover and engage with prospects worldwide.

What You'll Do

  • Build and optimize REST APIs using Django REST Framework handling millions of records
  • Design and implement complex database queries, indexes, and caching strategies for PostgreSQL
  • Develop responsive, high-performance front-end interfaces with Next.js and React
  • Implement Redis caching layers and optimize query performance for sub-second response times
  • Design and implement smart search/filter systems with complex logic
  • Collaborate on data pipeline architecture for processing large datasets
  • Write clean, testable code with comprehensive unit and integration tests
  • Participate in code reviews, architecture discussions, and technical planning
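
The query-optimization work above hinges on reading execution plans. A self-contained sketch using stdlib `sqlite3` (PostgreSQL's `EXPLAIN ANALYZE` plays the same role at much larger scale); the table and data are invented:

```python
# Index sketch using stdlib sqlite3: the same lookup goes from a full table
# scan to an index search once an index exists. In PostgreSQL you would read
# the analogous plans with EXPLAIN ANALYZE.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE contacts (id INTEGER PRIMARY KEY, email TEXT)")
db.executemany("INSERT INTO contacts (email) VALUES (?)",
               [(f"user{i}@example.test",) for i in range(1000)])

def query_plan():
    rows = db.execute(
        "EXPLAIN QUERY PLAN SELECT id FROM contacts WHERE email = ?",
        ("user42@example.test",),
    ).fetchall()
    return rows[0][-1]  # last column is the human-readable plan detail

before = query_plan()   # plan reports a scan of the whole table
db.execute("CREATE INDEX idx_contacts_email ON contacts (email)")
after = query_plan()    # plan now reports a search using the index
print(before)
print(after)
```

On 140M rows the difference between these two plans is the difference between seconds and milliseconds, which is why plan-reading is listed as a required skill.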

Required Skills

  • 4-7 years of professional experience in full stack development
  • Strong proficiency in Python and Django/Django REST Framework
  • Expert-level PostgreSQL knowledge: query optimization, indexing, EXPLAIN ANALYZE, partitioning
  • Solid experience with Next.js, React, and modern JavaScript/TypeScript
  • Experience with state management (Zustand, Redux, or similar)
  • Working knowledge of Redis for caching and session management
  • Familiarity with AWS services (RDS, EC2, S3, CloudFront)
  • Understanding of RESTful API design principles and best practices
  • Experience with Git, CI/CD pipelines, and agile development workflows

Nice to Have

  • Experience with Elasticsearch for full-text search at scale
  • Knowledge of data scraping, ETL pipelines, or data enrichment
  • Experience with Celery for async task processing
  • Familiarity with Tailwind CSS and modern UI/UX practices
  • Previous work on B2B SaaS or data-intensive applications
  • Understanding of security best practices and anti-scraping measures


Our Tech Stack

Backend: Python, Django REST Framework

Frontend: Next.js, React, Zustand, Tailwind CSS

Database: PostgreSQL 17, Redis

Infrastructure: AWS (RDS, EC2, S3, CloudFront), Docker

Tools: GitHub, pgBouncer


Why Join Us

  • Work on a product processing 140M+ records—real scale, real challenges
  • Direct impact on product direction and technical decisions
  • Modern tech stack with room to experiment and innovate
  • Collaborative team environment with a focus on growth
  • Competitive compensation and flexible hybrid work model


Service Co


Agency job
via Vikash Technologies by Rishika Teja
Bengaluru (Bangalore)
8 - 13 yrs
₹15L - ₹30L / yr
Python
PySpark
SQL
CI/CD
Databricks
+1 more

  • Strong programming skills in Python and PySpark for large-scale data processing
  • Proficiency in SQL for data manipulation, analysis, and performance tuning
  • Experience with Dataform, Dataproc, and BigQuery for data pipeline development and orchestration
  • Hands-on experience with Kafka and Confluent for real-time data streaming
  • Knowledge of Cloud Scheduler and Dataflow for automation and workflow management
  • Familiarity with DBT, Machine Learning, and AI concepts is an advantage
  • Understanding of Data Governance principles and implementation practices
  • Experience using Git for version control and CI/CD pipelines for automated deployments
  • Working knowledge of Infrastructure as Code (IaC) for cloud resource management and automation

PGAGI
Posted by Javeriya Shaik
Remote only
0 - 1 yrs
₹1 - ₹2 / mo
skill iconDocker
skill iconKubernetes
prometheus
skill icongrafana
DevOps
+1 more

About PGAGI:

At PGAGI, we believe in a future where AI and human intelligence coexist in harmony, creating a world that is smarter, faster, and better. We are not just building AI; we are shaping a future where AI is a fundamental and positive force for businesses, societies, and the planet.


Position Overview:

PGAGI Consultancy Pvt. Ltd. is seeking a proactive and motivated DevOps Intern with around 3-6 months of hands-on experience to support our AI model deployment and infrastructure initiatives. This role is ideal for someone looking to deepen their expertise in DevOps practices tailored to AI/ML environments, including CI/CD automation, cloud infrastructure, containerization, and monitoring.


Key Responsibilities:

AI Model Deployment & Integration

  • Assist in containerizing and deploying AI/ML models into production using Docker.
  • Support integration of models into existing systems and APIs.

Infrastructure Management

  • Help manage cloud and on-premise environments to ensure scalability and consistency.
  • Work with Kubernetes for orchestration and environment scaling.

CI/CD Pipeline Automation

  • Collaborate on building and maintaining automated CI/CD pipelines (e.g., GitHub Actions, Jenkins).
  • Implement basic automated testing and rollback mechanisms.
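The "automated testing and rollback" bullet can be sketched as a plain-Python control flow (all names here are hypothetical; a real pipeline would drive these steps through GitHub Actions or Jenkins stages):

```python
# Minimal sketch of a deploy-then-verify-then-rollback loop, the pattern a
# CI/CD pipeline encodes. Releases and health checks are stand-ins for
# real kubectl/Jenkins steps.

def health_check(release: dict) -> bool:
    """Pretend smoke test: a release is healthy unless flagged broken."""
    return not release.get("broken", False)

def deploy(candidate: dict, current: dict) -> dict:
    """Promote the candidate if it passes the smoke test, else roll back."""
    if health_check(candidate):
        return candidate   # promote the new release
    return current         # automatic rollback to the last good one

stable = {"version": "1.4.2"}
bad = {"version": "1.5.0", "broken": True}
good = {"version": "1.5.1"}

live = deploy(bad, stable)   # fails smoke test, stays on 1.4.2
live = deploy(good, live)    # passes, promoted to 1.5.1
print(live["version"])
```

The key design point is that rollback is a default outcome of a failed check, not a manual emergency procedure.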

Hosting & Web Environment Management

  • Assist in managing hosting platforms, web servers, and CDN configurations.
  • Support DNS, load balancer setups, and ensure high availability of web services.

Monitoring, Logging & Optimization

  • Set up and maintain monitoring/logging tools like Prometheus and Grafana.
  • Participate in troubleshooting and resolving performance bottlenecks.

Security & Compliance

  • Apply basic DevSecOps practices including security scans and access control implementations.
  • Follow security and compliance checklists under supervision.

Cost & Resource Management

  • Monitor resource usage and suggest cost optimization strategies in cloud environments.

Documentation

  • Maintain accurate documentation for deployment processes and incident responses.

Continuous Learning & Innovation

  • Suggest improvements to workflows and tools.
  • Stay updated with the latest DevOps and AI infrastructure trends.


Requirements:

  • Around 6 months of experience in a DevOps or related technical role (internship or professional).
  • Basic understanding of Docker, Kubernetes, and CI/CD tools like GitHub Actions or Jenkins.
  • Familiarity with cloud platforms (AWS, GCP, or Azure) and monitoring tools (e.g., Prometheus, Grafana).
  • Exposure to scripting languages (e.g., Bash, Python) is a plus.
  • Strong problem-solving skills and eagerness to learn.
  • Good communication and documentation abilities.

Compensation

  • Joining Bonus: INR 2,500 one-time bonus upon joining.
  • Monthly Stipend: Base stipend of INR 8,000 per month, with the potential to increase up to INR 20,000 based on performance evaluations.
  • Performance-Based Pay Scale: Eligibility for monthly performance-based bonuses, rewarding exceptional project contributions and teamwork.
  • Additional Benefits: Access to professional development opportunities, including workshops, tech talks, and mentoring sessions.


Ready to kick-start your DevOps journey in a dynamic AI-driven environment? Apply now

#Devops #Docker #Kubernetes #DevOpsIntern

Read more
Service Co
Mumbai, Navi Mumbai
9 - 14 yrs
₹15L - ₹38L / yr
skill iconAmazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
Terraform
skill iconKubernetes
+1 more

  • Multi-Cloud Operations: minimum of 2 public clouds (GCP & Azure preferred)
  • Kubernetes: strong hands-on experience with production clusters
  • DevOps: CI/CD pipelines, automation, IaC (Terraform preferred)
  • Troubleshooting: deep Linux, networking, performance, and distributed systems debugging

Read more
SimplyFI Softech

at SimplyFI Softech

2 candid answers
Nikita Sinha
Posted by Nikita Sinha
Mumbai
2 - 4 yrs
Upto ₹7L / yr (Varies)
skill iconKubernetes
skill iconDocker
skill iconJenkins
Team Management
CI/CD
+1 more

We are looking for a DevOps Engineer with hands-on experience in automating, monitoring, and scaling cloud-native infrastructure.

You will play a critical role in building and maintaining high-availability, secure, and scalable CI/CD pipelines for our AI- and blockchain-powered FinTech platforms.


You will work closely with Engineering, QA, and Product teams to streamline deployments, optimize cloud environments, and ensure reliable production systems.


Key Responsibilities

  • Design, build, and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI
  • Manage cloud infrastructure using Infrastructure as Code (IaC) tools such as Terraform, Ansible, or CloudFormation
  • Deploy, manage, and monitor applications on AWS, Azure, or GCP
  • Ensure high availability, scalability, and performance of production environments
  • Implement security best practices across infrastructure and DevOps workflows
  • Automate environment provisioning, deployments, backups, and monitoring
  • Configure and manage containerized applications using Docker and Kubernetes
  • Collaborate with developers to improve build, release, and deployment processes
  • Monitor systems using tools like Prometheus, Grafana, ELK Stack, or CloudWatch
  • Perform root cause analysis (RCA) and support production incident response

Required Skills & Experience

  • 2+ years of experience in DevOps, Cloud Engineering, or Infrastructure Automation
  • Strong hands-on experience with AWS, Azure, or GCP
  • Proven experience in setting up and managing CI/CD pipelines
  • Proficiency in Docker, Kubernetes, and container orchestration
  • Experience with Terraform, Ansible, or similar IaC tools
  • Knowledge of monitoring, logging, and alerting systems
  • Strong scripting skills using Shell, Bash, or Python
  • Good understanding of Git, version control, and branching strategies
  • Experience supporting production-grade SaaS or enterprise platforms


Read more
Vola Finance

at Vola Finance

1 video
2 recruiters
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
4yrs+
Upto ₹20L / yr (Varies)
skill iconPython
FastAPI
RESTful APIs
GraphQL
skill iconAmazon Web Services (AWS)
+7 more

Python Backend Developer

We are seeking a skilled Python Backend Developer responsible for managing the interchange of data between the server and the users. Your primary focus will be on developing server-side logic to ensure high performance and responsiveness to requests from the front end. You will also be responsible for integrating front-end elements built by your coworkers into the application, as well as managing AWS resources.


Roles & Responsibilities

  • Develop and maintain scalable, secure, and robust backend services using Python
  • Design and implement RESTful APIs and/or GraphQL endpoints
  • Integrate user-facing elements developed by front-end developers with server-side logic
  • Write reusable, testable, and efficient code
  • Optimize components for maximum performance and scalability
  • Collaborate with front-end developers, DevOps engineers, and other team members
  • Troubleshoot and debug applications
  • Implement data storage solutions (e.g., PostgreSQL, MySQL, MongoDB)
  • Ensure security and data protection

Mandatory Technical Skill Set

  • Implementing optimal data storage (e.g., PostgreSQL, MySQL, MongoDB, S3)
  • Python backend development experience
  • Design, implement, and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions
  • Implemented and managed containerization platforms such as Docker and orchestration tools like Kubernetes
  • Previous hands-on experience with AWS services: EC2, S3, ECS, EMR, VPC, Subnets, SQS, CloudWatch, CloudTrail, Lambda, SageMaker, RDS, SES, SNS, IAM, AWS Backup, AWS WAF
  • SQL
Read more
Bengaluru (Bangalore)
7 - 10 yrs
₹10L - ₹15L / yr
IAM
skill iconKubernetes
CI/CD
Security awareness

Key Responsibilities

  • Lead the design, development, and evolution of our AWS platform services and automation frameworks.
  • Build scalable, secure, and reusable infrastructure using Terraform, AWS CDK, and GitOps workflows.
  • Develop and maintain CI/CD pipelines to support rapid delivery and reliable releases.
  • Architect solutions leveraging core AWS services such as EKS, Lambda, IAM, CloudFront, Control Tower, and other platform components.
  • Define, enforce, and monitor security, governance, and compliance guardrails using AWS Config, SCPs, AWS SSO, and least-privilege IAM policies.
  • Enable observability and reliability with tools like CloudWatch, Grafana, Prometheus, and OpenTelemetry.
  • Collaborate with cross-functional teams to deliver internal platform capabilities with clear SLAs and support models.
  • Mentor and guide teams in AWS best practices to improve productivity, operational excellence, and cloud cost optimization.

Mandatory Skills & Expertise

  • AWS Platform Mastery: In-depth experience with Kubernetes (EKS), serverless (Lambda), IAM, CloudFront, Control Tower, and related platform services.
  • Infrastructure as Code: Expert in Terraform, AWS CDK, or equivalent for building production-grade IaC.
  • CI/CD & Observability: Strong experience with GitHub Actions and observability stacks (CloudWatch, Grafana, Prometheus, OpenTelemetry).
  • Security & Governance: Demonstrated ability to implement governance guardrails (SCPs, AWS Config) and enforce least-privilege access.
  • Platform-as-a-Product: Mindset for treating internal platform as a product with service ownership, SLAs, and measurable developer experience improvements.


Read more
Ekloud INC
ashwini rathod
Posted by ashwini rathod
india
6 - 15 yrs
₹15L - ₹25L / yr
DevOps
API
Meta-data management
CI/CD
CI/CD version control
+12 more

Salesforce DevOps Engineer


Responsibilities

  • Support the design and implementation of the DevOps strategy. This includes, but is not limited to, the CI/CD workflow (version control and automated deployments), sandbox management, documenting DevOps releases, overseeing the developer workflow, and ensuring code reviews take place.
  • Work closely with QA, Tech Leads, Senior Devs and Architects to ensure the smooth delivery of build artefacts into Salesforce environments.
  • Implement scripts utilising the Salesforce Metadata API and SFDX
  • Refine technical user stories as required, articulate clearly the technical solution required to meet a specific DevOps requirement.
  • Support the Tech Lead with ensuring best practices are adhered to, providing feedback as required.
  • Maintain the development workflow, guide and effectively communicate the workflow to Development teams
  • Design, implement, and maintain CI/CD pipelines.
  • Automate infrastructure provisioning and configuration management.
  • Monitor system performance and troubleshoot issues.
  • Ensure security and compliance across all environments.


Required Skills & Experience

  • Proficiency in CI/CD tools such as GitHub Actions.
  • 5+ years in Salesforce Development
  • Strong experience with CI/CD technologies, Git (version control), Salesforce Metadata API, and SFDX
  • Expertise in large-scale integration using SOAP, REST, Streaming (including Lightning Events), and Metadata APIs, facilitating the seamless connection of Salesforce with other systems.
  • Excellent technical documentation skills
  • Excellent communication skills


Desired Skills

  • Comfortable and effective in leading developers, ensuring project success and team cohesion
  • Financial Services industry experience

  • Experience working in both agile and waterfall methodologies.



Read more
AdTech Industry

AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹60L - ₹80L / yr
DevOps
Apache Spark
Apache Airflow
skill iconMachine Learning (ML)
Pipeline management
+13 more

Review Criteria:

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python for pipeline & automation development
  • 4+ years of experience in AWS cloud, including at recent companies
  • (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth

 

Preferred:

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.

 

Job Specific Criteria:

  • CV Attachment is mandatory
  • Please provide CTC Breakup (Fixed + Variable)?
  • Are you okay for F2F round?
  • Has the candidate filled out the Google form?

 

Role & Responsibilities:

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
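The ML-observability responsibility above can be sketched with one simple drift signal, a mean-shift check on a live feature window against the training baseline. This is only an illustrative sketch with invented data and an arbitrary threshold; production setups use richer tests (PSI, KS) wired into CloudWatch/Grafana alerting:

```python
import statistics

# Illustrative data-drift check: alert when the live window's mean moves
# more than z standard errors away from the training baseline mean.

def mean_shift_alert(baseline, live, z=3.0):
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    live_mu = statistics.fmean(live)
    # Standard error of the live-window mean, scaled by the baseline spread.
    return abs(live_mu - mu) > z * sigma / (len(live) ** 0.5)

baseline = [float(x % 10) for x in range(1000)]      # baseline mean is 4.5
stable_window = [4.0, 5.0, 4.5, 4.2, 4.8, 5.1]       # near the baseline
drifted_window = [9.0, 9.5, 8.8, 9.2, 9.9, 9.4]      # clearly shifted

print(mean_shift_alert(baseline, stable_window))     # no alert expected
print(mean_shift_alert(baseline, drifted_window))    # alert expected
```

A check like this would typically run as a scheduled Airflow task over the day's inference inputs, emitting a metric that the alerting stack thresholds.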

 

Ideal Candidate:

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
  • Compute/Orchestration: EKS, ECS, EC2, Lambda
  • Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
  • Workflow: MWAA/Airflow, Step Functions
  • Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.

 

Education:

  • Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Read more
Deqode

at Deqode

1 recruiter
Samiksha Agrawal
Posted by Samiksha Agrawal
Mumbai
3 - 6 yrs
₹5L - ₹15L / yr
DevOps
Google Cloud Platform (GCP)
Terraform
skill iconJenkins
CI/CD
+2 more

Role: Senior Platform Engineer (GCP Cloud)

Experience Level: 3 to 6 Years

Work location: Mumbai

Mode : Hybrid


Role & Responsibilities:

  • Build automation software for cloud platforms and applications
  • Drive Infrastructure as Code (IaC) adoption
  • Design self-service, self-healing monitoring and alerting tools
  • Automate CI/CD pipelines (Git, Jenkins, SonarQube, Docker)
  • Build Kubernetes container platforms
  • Introduce new cloud technologies for business innovation

Requirements:

  • Hands-on experience with GCP Cloud
  • Knowledge of cloud services (compute, storage, network, messaging)
  • IaC tools experience (Terraform/CloudFormation)
  • SQL & NoSQL databases (Postgres, Cassandra)
  • Automation tools (Puppet/Chef/Ansible)
  • Strong Linux administration skills
  • Programming: Bash/Python/Java/Scala
  • CI/CD pipeline expertise (Jenkins, Git, Maven)
  • Multi-region deployment experience
  • Agile/Scrum/DevOps methodology


Read more
Deqode

at Deqode

1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Remote only
3.5 - 7 yrs
₹8L - ₹25L / yr
Internet of Things (IOT)
Manual testing
Test Automation (QA)
API
CI/CD
+1 more

Hi Connections! 👋 Welcome to 2026! 🎉

Starting the new year with an exciting opportunity!

Deqode IS HIRING! 💻


Hiring: Senior Test Engineer at Deqode

⭐ Experience: 3.5+ Years

⭐ Work Mode:- Remote

⏱️ Notice Period: Immediate Joiners

(Only immediate joiners & candidates serving notice period)


Role Summary:

Looking for a Senior Test Engineer with strong experience in IoT manual and automation testing, including API, protocol, and performance testing.


🌟 Key Responsibilities:

✅ Test IoT devices, APIs, microservices, and cloud systems

✅Perform manual, integration, regression, and security testing

✅Automate API and protocol-level test cases

✅Execute performance and load testing

✅Integrate tests with CI/CD pipelines


🌟 Required Skills:

✅IoT protocols (MQTT, REST, HTTP/HTTPS)

✅Networking concepts

✅API testing: REST Assured, Karate, Postman/Newman

✅Automation frameworks: Pytest / JUnit / TestNG

✅Performance testing: JMeter

✅Understanding of IoT security (certificates, encryption, OTA)
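The API-testing skills above follow one pattern regardless of tool (REST Assured, Karate, Postman/Newman): call an endpoint, assert on status and payload. A self-contained sketch using only the Python standard library, with a throwaway local endpoint standing in for a real device API (the route and payload are invented for illustration):

```python
import http.server
import json
import threading
import urllib.request

# Fake device API so the smoke test is fully self-contained.
class FakeDeviceAPI(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "online", "battery": 87}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), FakeDeviceAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The smoke test itself: request a status endpoint, assert on the response.
url = f"http://127.0.0.1:{server.server_address[1]}/device/42/status"
with urllib.request.urlopen(url) as resp:
    assert resp.status == 200
    payload = json.loads(resp.read())

assert payload["status"] == "online"
print("smoke test passed:", payload)
server.shutdown()
```

In a CI/CD pipeline, tests of this shape run against a staging broker/gateway after each deploy, gating promotion to production.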


Read more
Deqode

at Deqode

1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Mumbai, Navi Mumbai
2 - 4 yrs
₹3L - ₹7L / yr
skill iconJava
skill iconSpring Boot
Application Support
Quarkus
JIRA
+4 more

🚀 Hiring: Java Developer at Deqode

⭐ Experience: 2+ Years

📍 Location: Mumbai

⭐ Work Mode:- 5 Days Work from Office

⏱️ Notice Period: Immediate Joiners

(Only immediate joiners & candidates serving notice period)


We are looking for a Java Developer (Mid/Senior) to join our Implementation & Application Support team supporting critical fintech platforms. The role involves backend development, application monitoring, incident management, and close collaboration with customers. Senior developers will handle escalations, mentor juniors, and drive operational excellence.


Key Responsibilities (Brief)

✅ Develop and support Java applications (Spring Boot / Quarkus).

✅Monitor applications and resolve production issues.

✅Manage incidents, perform root cause analysis, and handle ITSM tickets.

✅Collaborate with customers and internal teams.

✅(Senior) Lead escalations and mentor junior engineers.


Top Skills Required

✅ Java, Spring Boot, Quarkus

✅Application Support & Incident Management

✅ServiceNow / JIRA / ITSM tools

✅Monitoring & Production Support

✅Kafka, Redis, Solace, Aerospike (Good to have)

✅Docker, Kubernetes, CI/CD (Plus)


Read more
E-Commerce Industry

E-Commerce Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹30L - ₹50L / yr
Security Information and Event Management (SIEM)
Information security governance
ISO/IEC 27001:2005
Systems Development Life Cycle (SDLC)
Software Development
+67 more

SENIOR INFORMATION SECURITY ENGINEER (DEVSECOPS)

Key Skills: Software Development Life Cycle (SDLC), CI/CD

About Company: Consumer Internet / E-Commerce

Company Size: Mid-Sized

Experience Required: 6 - 10 years

Working Days: 5 days/week

Office Location: Bengaluru [Karnataka]


Review Criteria:

Mandatory:

  • Strong DevSecOps profile
  • Must have 5+ years of hands-on experience in Information Security, with a primary focus on cloud security across AWS, Azure, and GCP environments.
  • Must have strong practical experience working with Cloud Security Posture Management (CSPM) tools such as Prisma Cloud, Wiz, or Orca along with SIEM / IDS / IPS platforms
  • Must have proven experience in securing Kubernetes and containerized environments, including image security, runtime protection, RBAC, and network policies.
  • Must have hands-on experience integrating security within CI/CD pipelines using tools such as Snyk, GitHub Advanced Security, or equivalent security scanning solutions.
  • Must have a solid understanding of core security domains including network security, encryption, identity and access management, key management, and security governance, including cloud-native security services like GuardDuty, Azure Security Center, etc.
  • Must have practical experience with Application Security Testing tools including SAST, DAST, and SCA in real production environments
  • Must have hands-on experience with security monitoring, incident response, alert investigation, root-cause analysis (RCA), and managing VAPT / penetration testing activities
  • Must have experience securing infrastructure-as-code and cloud deployments using Terraform, CloudFormation, ARM, Docker, and Kubernetes
  • B2B SaaS Product companies
  • Must have working knowledge of globally recognized security frameworks and standards such as ISO 27001, NIST, and CIS with exposure to SOC2, GDPR, or HIPAA compliance environments


Preferred:

  • Experience with DevSecOps automation, security-as-code, and policy-as-code implementations
  • Exposure to threat intelligence platforms, cloud security monitoring, and proactive threat detection methodologies, including EDR / DLP or vulnerability management tools
  • Must demonstrate strong ownership mindset, proactive security-first thinking, and ability to communicate risks in clear business language


Roles & Responsibilities:

We are looking for a Senior Information Security Engineer who can help protect our cloud infrastructure, applications, and data while enabling teams to move fast and build securely.


This role sits deep within our engineering ecosystem. You’ll embed security into how we design, build, deploy, and operate systems—working closely with Cloud, Platform, and Application Engineering teams. You’ll balance proactive security design with hands-on incident response, and help shape a strong, security-first culture across the organization.


If you enjoy solving real-world security problems, working close to systems and code, and influencing how teams build securely at scale, this role is for you.


What You’ll Do-

Cloud & Infrastructure Security:

  • Design, implement, and operate cloud-native security controls across AWS, Azure, GCP, and Oracle.
  • Strengthen IAM, network security, and cloud posture using services like GuardDuty, Azure Security Center and others.
  • Partner with platform teams to secure VPCs, security groups, and cloud access patterns.


Application & DevSecOps Security:

  • Embed security into the SDLC through threat modeling, secure code reviews, and security-by-design practices.
  • Integrate SAST, DAST, and SCA tools into CI/CD pipelines.
  • Secure infrastructure-as-code and containerized workloads using Terraform, CloudFormation, ARM, Docker, and Kubernetes.


Security Monitoring & Incident Response:

  • Monitor security alerts and investigate potential threats across cloud and application layers.
  • Lead or support incident response efforts, root-cause analysis, and corrective actions.
  • Plan and execute VAPT and penetration testing engagements (internal and external), track remediation, and validate fixes.
  • Conduct red teaming activities and tabletop exercises to test detection, response readiness, and cross-team coordination.
  • Continuously improve detection, response, and testing maturity.


Security Tools & Platforms:

  • Manage and optimize security tooling including firewalls, SIEM, EDR, DLP, IDS/IPS, CSPM, and vulnerability management platforms.
  • Ensure tools are well-integrated, actionable, and aligned with operational needs.


Compliance, Governance & Awareness:

  • Support compliance with industry standards and frameworks such as SOC2, HIPAA, ISO 27001, NIST, CIS, and GDPR.
  • Promote secure engineering practices through training, documentation, and ongoing awareness programs.
  • Act as a trusted security advisor to engineering and product teams.


Continuous Improvement:

  • Stay ahead of emerging threats, cloud vulnerabilities, and evolving security best practices.
  • Continuously raise the bar on the company's security posture through automation and process improvement.


Endpoint Security (Secondary Scope):

  • Provide guidance on endpoint security tooling such as SentinelOne and Microsoft Defender when required.


Ideal Candidate:

  • Strong hands-on experience in cloud security across AWS and Azure.
  • Practical exposure to CSPM tools (e.g., Prisma Cloud, Wiz, Orca) and SIEM / IDS / IPS platforms.
  • Experience securing containerized and Kubernetes-based environments.
  • Familiarity with CI/CD security integrations (e.g., Snyk, GitHub Advanced Security, or similar).
  • Solid understanding of network security, encryption, identity, and access management.
  • Experience with application security testing tools (SAST, DAST, SCA).
  • Working knowledge of security frameworks and standards such as ISO 27001, NIST, and CIS.
  • Strong analytical, troubleshooting, and problem-solving skills.


Nice to Have:

  • Experience with DevSecOps automation and security-as-code practices.
  • Exposure to threat intelligence and cloud security monitoring solutions.
  • Familiarity with incident response frameworks and forensic analysis.
  • Security certifications such as CISSP, CISM, CCSP, or CompTIA Security+.


Perks, Benefits and Work Culture:

A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts, yet maintain the quality of content, interact and share your ideas, and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the comprehensive benefits that the company offers.

Read more
Talent Pro
Bengaluru (Bangalore)
6 - 10 yrs
₹30L - ₹60L / yr
Software Development Life Cycle (SDLC)
CI/CD

Mandatory (Experience 1) – Must have 5+ years of hands-on experience in Information Security, with a primary focus on cloud security across AWS, Azure, and GCP environments.

Mandatory (Experience 2) – Must have strong practical experience working with Cloud Security Posture Management (CSPM) tools such as Prisma Cloud, Wiz, or Orca along with SIEM / IDS / IPS platforms

Mandatory (Experience 3) – Must have proven experience in securing Kubernetes and containerized environments, including image security, runtime protection, RBAC, and network policies.

Mandatory (Experience 4) – Must have hands-on experience integrating security within CI/CD pipelines using tools such as Snyk, GitHub Advanced Security, or equivalent security scanning solutions.

Mandatory (Experience 5) – Must have a solid understanding of core security domains including network security, encryption, identity and access management, key management, and security governance, including cloud-native security services like GuardDuty, Azure Security Center, etc.

Mandatory (Experience 6) – Must have practical experience with Application Security Testing tools including SAST, DAST, and SCA in real production environments

Mandatory (Experience 7) – Must have hands-on experience with security monitoring, incident response, alert investigation, root-cause analysis (RCA), and managing VAPT / penetration testing activities

Mandatory (Experience 8) – Must have experience securing infrastructure-as-code and cloud deployments using Terraform, CloudFormation, ARM, Docker, and Kubernetes

Mandatory (Core Skill) – Must have working knowledge of globally recognized security frameworks and standards such as ISO 27001, NIST, and CIS with exposure to SOC2, GDPR, or HIPAA compliance environments

Read more
Foyforyou
Hardika Bhansali
Posted by Hardika Bhansali
Mumbai
2 - 7 yrs
₹3L - ₹15L / yr
skill iconSwift
User Interface (UI) Design
CI/CD
skill iconGit
RESTful APIs

iOS Developer – FOY (FoyForYou.com)

Function: Software Engineering → Mobile Development, Backend Collaboration

Skills: Swift, SwiftUI/UIKit, MVVM/MVP, REST APIs, Xcode, CI/CD

About FOY

FOY (FoyForYou.com) is one of India’s fastest-growing beauty & wellness destinations. We offer customers a curated range of 100% authentic products, trusted brands, and a frictionless shopping experience. Our mission is to make beauty effortless, personal, and accessible for every Indian. As we scale fast and build a mobile-first commerce ecosystem, we're strengthening our engineering team with passionate builders who care deeply about user experience.

Job Description:

We’re looking for an iOS Developer (4–8 years) who wants to craft deeply polished mobile experiences and play an active role in shaping FOY’s product direction—not just implement tickets. You'll work on performance, app architecture, offline handling, animations, and end-to-end features that impact millions of users.

Responsibilities

● Work closely with product & design to influence feature strategy and user experience on iOS.

● Build a fast, stable, and intuitive FOY iOS app using Swift, SwiftUI, UIKit, and modern architecture patterns.

● Optimize the app for performance, memory usage, network efficiency, and battery consumption.

● Integrate cleanly with FOY’s backend APIs and ensure reliability across devices.

● Own the delivery pipeline with unit tests, automation, continuous integration, and code reviews.

● Diagnose and solve issues using crash logs, performance tools, and debugging tools.

● Collaborate cross-functionally with Android, backend, QA, and product teams to deliver a seamless commerce experience.

Requirements:

● 4–8 years of experience building and shipping iOS apps.

● Proven experience shipping at least one iOS app—professionally or via a significant side project.

● Solid expertise in Swift, SwiftUI/UIKit, and mobile architecture patterns (MVVM/MVP/Clean Architecture).

● Strong understanding of networking, REST APIs, async programming (Combine, async/await), and local data caching.

● Ability to debug production issues and trace them across client–server boundaries.

● A strong sense of ownership, attention to detail, and user-centric thinking.

● Passion for solving meaningful user problems, not just building features.

Bonus Points

● A GitHub/portfolio with code samples or open-source contributions.

● Experience with fast-moving consumer apps, e-commerce, or high-scale mobile applications.

● Understanding of advanced topics like custom rendering, animations, or performance profiling.

Why Build Your Career at FOY?

At FOY, we’re transforming how India shops for beauty—and building that future requires creativity, ownership, and speed.

We hire for 3 core qualities:

1. Rockstar Team Players Your work will directly impact business, growth, and customer experience.

2. Ownership With Passion You’ll be given important projects to drive independently—minimal hierarchy, maximum impact.

3. Big Dreamers We’re scaling quickly and building boldly. If you dream big and execute fast, you’ll thrive here.

Join Us If bringing world-class iOS experiences to millions excites you, we’d love to meet you.

Apply now and be part of FOY’s journey to redefine beauty commerce in India.

Read more
Ekloud INC
Remote only
7 - 15 yrs
₹6L - ₹25L / yr
skill icon.NET
Fullstack Developer
skill iconReact.js
cloud platforms
Windows Azure
+11 more

Hiring: .NET Full Stack Developer with ReactJS

Designation: Team Lead

Location: Bidadi, Bengaluru (Karnataka), Hybrid mode

Relevant Experience: 7-10 years

Preferred Qualifications

• Bachelor's in CSE with a minimum of 7-10 years of relevant experience

• Exposure to cloud platforms (Azure) and API Gateway.

• Knowledge of microservices architecture.

• Experience with unit testing frameworks (xUnit, NUnit).

Required Skills & Qualifications

• Strong hands-on experience in C#, .NET (.NET Core, .NET 5+), and API development.

• Experience with RESTful API design and development.

• Strong experience with ReactJS for front-end development.

• Expertise in SQL Server (queries, stored procedures, performance tuning).

• Experience in system integration, especially with SAP.

• Ability to manage and mentor a team effectively.

• Strong requirement gathering and client communication skills.

• Familiarity with Git, CI/CD pipelines, and Agile methodologies.

Role Overview

• Design, develop, and maintain scalable backend services using .NET technologies.

• Work on ReactJS components as well as UI integration and ensure seamless communication between front-end and back-end.

• Write clean, efficient, and well-documented code.

• Lead and mentor a team of developers/Testing, ensuring adherence to best practices and timely delivery.

• Good exposure to Agile and Scrum methodologies.

• Design and implement secure RESTful APIs using .NET Core.

• Apply best practices for authentication, authorization, and data security.

• Develop and maintain integrations with multiple systems, including SAP.

• Design and optimize SQL Server queries, stored procedures, and schemas.

• Gather requirements from clients and translate them into technical specifications.

• Implement Excel file uploaders and data processing workflows.

• Coordinate with stakeholders, manage timelines, and ensure quality deliverables.

• Troubleshoot and debug issues, ensuring smooth operation of backend systems.

Bookxpert Private Limited
Abhijith Neeli
Posted by Abhijith Neeli
Guntur, Hyderabad
3 - 5 yrs
₹5L - ₹10L / yr
React.js
JavaScript
HTML/CSS
RESTful APIs
UI/UX
+15 more


About the Role:

We are seeking a skilled and enthusiastic React.js Web Developer to join our technology team. The ideal candidate will be responsible for building high-quality user interfaces, enhancing user experience, and developing efficient web applications.


Key Responsibilities:


1. Develop responsive, interactive, and high-performing web applications using React.js, JavaScript/TypeScript, and modern front-end libraries.

2. Translate UI/UX wireframes into high-quality code and reusable components.

3. Optimize components for maximum performance across various devices and browsers.

4. Work with the team to design, structure, and maintain scalable front-end application architecture.

5. Integrate REST APIs, third-party services, and internal tools into the application.

6. Manage application state using tools such as Redux, Context API, or other state management libraries.

7. Write clean, readable, and well-documented code following best industry practices.

8. Conduct thorough debugging, troubleshooting, and performance enhancements.

9. Assist in deployment processes and ensure the application works smoothly in production.

10. Familiarity with CI/CD pipelines is an added advantage.

11. Collaborate with the team on planning, development, and code reviews.

12. Stay updated with the latest technologies and development best practices.


Required Skills & Qualifications:


  • Bachelor's degree in Computer Science, IT, or a related field (or equivalent experience).
  • 2-3+ years of experience in React.js development.
  • Strong proficiency in JavaScript (ES6+), HTML5, CSS3.
  • Hands-on experience with React Hooks, Redux, Context API, and component-based architecture.
  • Good understanding of REST APIs and asynchronous request handling.
  • Experience with build tools like Webpack, Babel, Vite, etc.
  • Familiarity with Git/GitHub and version control workflows.
  • Knowledge of responsive design and cross-browser compatibility.
  • Strong problem-solving and analytical abilities.
  • Ability to work independently as well as in a team environment.
  • Time management skills and ability to meet deadlines.
  • A positive attitude and willingness to learn new technologies.


Why Join Us?


  • Competitive salary, professional development opportunities, and training.
  • Opportunity to work with cutting-edge technologies in a fast-paced environment.
  • A supportive environment that encourages learning and growth.
  • Collaborative team culture focused on creativity and continuous improvement.


Procedure

at Procedure

4 candid answers
3 recruiters
Adithya K
Posted by Adithya K
Remote only
5 - 10 yrs
₹40L - ₹60L / yr
Software Development
Amazon Web Services (AWS)
Python
TypeScript
PostgreSQL
+3 more

Procedure is hiring for Drover.


This is not a DevOps/SRE/cloud-migration role — this is a hands-on backend engineering and architecture role where you build the platform powering our hardware at scale.


About Drover

Ranching is getting harder. Rising labor costs and a volatile climate are placing mounting pressure on ranchers to provide for a growing population. Drover is empowering ranchers to efficiently and sustainably feed the world by making it cheaper and easier to manage livestock, unlock productivity gains, and reduce carbon footprint with rotational grazing. Not only is this a $46B opportunity, you'll be working on a climate solution with the potential for real, meaningful impact.


We use patent-pending low-voltage electrical muscle stimulation (EMS) to steer and contain cows, replacing the need for physical fences or electric shock. We are building something that has never been done before, and we have hundreds of ranches on our waitlist.


Drover is founded by Callum Taylor (ex-Harvard), who comes from 5 generations of ranching, and Samuel Aubin, both of whom grew up in Australian ranching towns and have an intricate understanding of the problem space. We are well-funded and supported by Workshop Ventures, a VC firm with experience in building unicorn IoT companies.


We're looking to assemble a team of exceptional talent with a high eagerness to dive headfirst into understanding the challenges and opportunities within ranching.


About The Role

As our founding cloud engineer, you will be responsible for building and scaling the infrastructure that powers our IoT platform, connecting thousands of devices across ranches nationwide.


Because we are an early-stage startup, you will have high levels of ownership in what you build. You will play a pivotal part in architecting our cloud infrastructure, building robust APIs, and ensuring our systems can scale reliably. We are looking for someone who is excited about solving complex technical challenges at the intersection of IoT, agriculture, and cloud computing.


What You'll Do

  • Develop the Drover IoT cloud architecture from the ground up (it’s a greenfield project)
  • Design and implement services to support wearable devices, mobile app, and backend API
  • Implement data processing and storage pipelines
  • Create and maintain Infrastructure-as-Code
  • Support the engineering team across all aspects of early-stage development; after all, this is a startup


Requirements

  • 5+ years of experience developing cloud architecture on AWS
  • In-depth understanding of various AWS services, especially those related to IoT
  • Expertise in cloud-hosted, event-driven, serverless architectures
  • Expertise in programming languages suitable for AWS microservices (e.g., TypeScript, Python)
  • Experience with networking and socket programming
  • Experience with Kubernetes or similar orchestration platforms
  • Experience with Infrastructure-as-Code tools (e.g., Terraform, AWS CDK)
  • Familiarity with relational databases (PostgreSQL)
  • Familiarity with Continuous Integration and Continuous Deployment (CI/CD)


Nice To Have

  • Bachelor’s or Master’s degree in Computer Science, Software Engineering, Electrical Engineering, or a related field


Tarento Group

at Tarento Group

3 candid answers
1 recruiter
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
8yrs+
Upto ₹30L / yr (Varies)
Java
Spring Boot
Microservices
Windows Azure
RESTful APIs
+7 more

About Tarento:

 

Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions.

 

We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you’ll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.


Job Summary:

We are seeking a highly skilled and self-driven Senior Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.


Key Responsibilities:

  • Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
  • Implement and maintain RESTful APIs, ensuring high performance and scalability.
  • Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
  • Develop and manage Docker containers, enabling efficient development and deployment pipelines.
  • Integrate messaging services like Apache Kafka into microservice architectures.
  • Design and maintain data models using PostgreSQL or other SQL databases.
  • Implement unit testing using JUnit and mocking frameworks to ensure code quality.
  • Develop and execute API automation tests using Cucumber or similar tools.
  • Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
  • Work with Kubernetes for orchestrating containerized services.
  • Utilize Couchbase or similar NoSQL technologies when necessary.
  • Participate in code reviews, design discussions, and contribute to best practices and standards.


Required Skills & Qualifications:

  • Strong experience in Java (11 or above) and Spring Boot framework.
  • Solid understanding of microservices architecture and deployment on Azure.
  • Hands-on experience with Docker, and exposure to Kubernetes.
  • Proficiency in Kafka, with real-world project experience.
  • Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
  • Experience in writing unit tests using JUnit and mocking tools.
  • Experience with Cucumber or similar frameworks for API automation testing.
  • Exposure to CI/CD tools, DevOps processes, and Git-based workflows.


Nice to Have:

  • Azure certifications (e.g., Azure Developer Associate)
  • Familiarity with Couchbase or other NoSQL databases.
  • Familiarity with other cloud providers (AWS, GCP)
  • Knowledge of observability tools (Prometheus, Grafana, ELK)


Soft Skills:

  • Strong problem-solving and analytical skills.
  • Excellent verbal and written communication.
  • Ability to work in an agile environment and contribute to continuous improvement.


Why Join Us:

  • Work on cutting-edge microservice architectures
  • Strong learning and development culture
  • Opportunity to innovate and influence technical decisions
  • Collaborative and inclusive work environment
Industrial Automation

Industrial Automation

Agency job
via Michael Page by Pramod P
Bengaluru (Bangalore), Bommasandra Industrial Area
8 - 13 yrs
₹20L - ₹44L / yr
Python
C++
Rust
GitLab
DevOps
+4 more

Employment Mode: Full-time and Permanent

Working Location: Bommasandra Industrial Area, Hosur Main Road, Bangalore

Working Days: 5 days

Working Model: Hybrid (3 days WFO, 2 days from home)


Position Overview

As the Lead Software Engineer in our Research & Innovation team, you’ll play a strategic role in establishing and driving the technical vision for industrial AI solutions. Working closely with the Lead AI Engineer, you will form a leadership tandem to define the roadmap for the team, cultivate an innovative culture, and ensure that projects are strategically aligned with the organization’s goals. Your leadership will be crucial in developing, mentoring, and empowering the team as we expand, helping create an environment where innovative ideas can translate seamlessly from research to industry-ready products.


Key Responsibilities:

  • Define and drive the technical strategy for embedding AI into industrial automation products, with a focus on scalability, quality, and industry compliance.
  • Lead the development of a collaborative, high-performing engineering team, mentoring junior engineers, automation experts, and researchers.
  • Establish and oversee processes and standards for agile and DevOps practices, ensuring project alignment with strategic goals.
  • Collaborate with stakeholders to align project goals, define priorities, and manage timelines, while driving innovative, research-based solutions.
  • Act as a key decision-maker on technical issues, architecture, and system design, ensuring long-term maintainability and scalability of solutions.
  • Ensure adherence to industry standards, certifications, and compliance, and advocate for industry best practices within the team.
  • Stay updated on software engineering trends and AI applications in embedded systems, incorporating the latest advancements into the team’s strategic planning.


Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
  • Extensive experience in software engineering, with a proven track record of leading technical teams, ideally in manufacturing or embedded systems.
  • Strong expertise in Python and C++/Rust, Gitlab toolchains, and system architecture for embedded applications.
  • Experience in DevOps, CI/CD, and agile methodologies, with an emphasis on setting and maintaining high standards across a team.
  • Exceptional communication and collaboration skills in English.
  • Willingness to travel as needed.


Preferred:

  • Background in driving team culture, agile project management, and experience embedding AI in industrial products.
  • Familiarity with sociocratic or consent-based management practices.
  • Knowledge in embedded programming is an advantage.
Bits In Glass

at Bits In Glass

3 candid answers
Nikita Sinha
Posted by Nikita Sinha
Hyderabad, Pune, Mohali
5 - 8 yrs
Upto ₹30L / yr (Varies)
Java
Python
CI/CD
React.js
Angular (2+)

Design, build, and operate end-to-end web and API solutions (front end + back end) with strong automation, observability, and production reliability. You will own features from concept through deployment and steady state, including incident response and continuous improvement.


Key Responsibilities:

Engineering & Delivery

  • Translate business requirements into technical designs, APIs, and data models.
  • Develop back-end services using Java and Python, and front-end components using React / Angular / Vue (where applicable).
  • Build REST / GraphQL APIs, batch jobs, streaming jobs, and system integration adapters.
  • Write efficient SQL/NoSQL queries; optimize schemas, indexes, and data flows (ETL / CDC as needed).

Automation, CI/CD & Operations

  • Automate builds, testing, packaging, and deployments using CI/CD pipelines.
  • Create Linux shell and Python scripts for operational tasks, environment automation, and diagnostics.
  • Manage configuration, feature flags, environment parity, and Infrastructure as Code (where applicable).

Reliability, Security & Quality

  • Embed security best practices: authentication/authorization, input validation, secrets management, TLS.
  • Implement unit, integration, contract, and performance tests with enforced quality gates.
  • Add observability: structured logs, metrics, traces, health checks, dashboards, and alerts.
  • Apply resilience patterns: retries, timeouts, circuit breakers, and graceful degradation.
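The resilience patterns listed above can be sketched in a few lines. The snippet below is a minimal, illustrative Python implementation of a retry wrapper and a failure-counting circuit breaker; the class and parameter names are our own, not from any specific library:

```python
import time

class CircuitOpenError(Exception):
    """Raised when the breaker refuses a call instead of forwarding it."""

class CircuitBreaker:
    # After `max_failures` consecutive errors, fail fast for `reset_after` seconds.
    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise CircuitOpenError("circuit open; failing fast")
            # Half-open: allow one trial call through.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()
            raise
        self.failures = 0
        return result

def retry(fn, attempts=3, base_delay=0.0):
    """Retry with simple exponential backoff; re-raise the last error."""
    for i in range(attempts):
        try:
            return fn()
        except CircuitOpenError:
            raise  # don't hammer an open circuit
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))
```

In production these concerns are usually delegated to a library or a service mesh; the sketch only shows the control flow being asked for.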

Production Ownership

  • Participate in on-call rotations, incident triage, RCA, and permanent fixes.
  • Refactor legacy code and reduce technical debt with measurable impact.
  • Maintain technical documentation, runbooks, and architecture decision records (ADRs).

Collaboration & Leadership

  • Mentor peers and contribute to engineering standards and best practices.
  • Work closely with Product, QA, Security, and Ops to balance scope, risk, and timelines.

Qualifications

Must Have

  • Strong experience in Java (core concepts, concurrency, REST frameworks).
  • Strong Python experience (services + scripting).
  • Solid Linux skills with automation using shell/Python.
  • Web services expertise: REST/JSON, API design, versioning, pagination, error handling.
  • Databases: Relational (SQL tuning, transactions) plus exposure to NoSQL / caching (Redis).
  • CI/CD tools: Git, pipelines, artifact management.
  • Testing frameworks: JUnit, PyTest, API testing tools.
  • Observability tools: Prometheus, Grafana, ELK, OpenTelemetry (or equivalents).
  • Strong production support mindset with incident management, SLA/SLO awareness, and RCA experience.
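As a small illustration of the pagination point in the list above: cursor-based pagination can be wrapped in a generator so callers never handle cursors directly. This is a hedged sketch; `fetch_page` is a stand-in for a real HTTP call, and the `items`/`next_cursor` response shape is assumed for illustration:

```python
def iterate_pages(fetch_page, start_cursor=None):
    """Yield items from a cursor-paginated API until no cursor remains.

    `fetch_page(cursor)` must return a dict like
    {"items": [...], "next_cursor": <str or None>}.
    """
    cursor = start_cursor
    while True:
        page = fetch_page(cursor)
        yield from page["items"]
        cursor = page.get("next_cursor")
        if cursor is None:
            break
```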

Good to Have

  • Messaging & streaming platforms: Kafka, MQ.
  • Infrastructure as Code: Terraform, Ansible.
  • Cloud exposure: AWS / Azure / GCP, including managed data services.
  • Front-end experience with React / Angular / Vue and TypeScript.
  • Deployment strategies: feature flags, canary, blue/green.
  • Knowledge of cost optimization and capacity planning.

Key Performance Indicators (KPIs)

  • Deployment frequency & change failure rate
  • Mean Time to Detect (MTTD) & Mean Time to Recover (MTTR)
  • API latency (p95) and availability vs SLOs
  • Defect escape rate & automated test coverage
  • Technical debt reduction (items resolved per quarter)
  • Incident recurrence trend (continuous reduction)

Soft Skills

  • End-to-end ownership mindset
  • Data-driven decision making
  • Bias for automation and simplification
  • Proactive risk identification
  • Clear, timely, and effective communication

About the Company – Bits In Glass

  • 20+ years of industry experience
  • Merged with Crochet Technologies in 2021 to form a larger global organization
  • Offices in Pune, Hyderabad, and Chandigarh
  • Top 30 global Pega partner and sponsor of PegaWorld
  • Elite Appian Partner since 2008
  • Operations across US, Canada, UK, and India
  • Dedicated Global Pega Center of Excellence

Employee Benefits

  • Career Growth: Clear advancement paths and learning opportunities
  • Challenging Projects: Global, cutting-edge client work
  • Global Exposure: Collaboration with international teams
  • Flexible Work Arrangements: Work-life balance support
  • Comprehensive Benefits: Competitive compensation, health insurance, paid time off
  • Learning & Upskilling: AI-enabled Pega solutions, data engineering, integrations, cloud migration

Company Culture & Values

  • Collaborative & Inclusive: Teamwork, innovation, and respect for diverse ideas
  • Continuous Learning: Certifications and skill development encouraged
  • Integrity: Ethical and transparent practices
  • Excellence: High standards in delivery
  • Client-Centricity: Tailored solutions with measurable impact


Avhan Technologies Pvt Ltd
Nikita Sinha
Posted by Nikita Sinha
Mumbai
4 - 8 yrs
Upto ₹8.3L / yr (Varies)
Amazon Web Services (AWS)
AWS Lambda
API
Amazon S3
Platform as a Service (PaaS)
+3 more

To design, automate, and manage scalable cloud infrastructure that powers real-time AI and communication workloads globally.


Key Responsibilities

  • Implement and manage CI/CD pipelines (GitHub Actions, Jenkins, or GitLab).
  • Manage Kubernetes/EKS clusters
  • Implement infrastructure as code (provisioning via Terraform, CloudFormation, Pulumi, etc.).
  • Implement observability (Grafana, Loki, Prometheus, ELK/CloudWatch).
  • Enforce security/compliance guardrails (GDPR, DPDP, ISO 27001, PCI, HIPAA).
  • Drive cost-optimization and zero-downtime deployment strategies.
  • Collaborate with developers to containerize and deploy services.

Required Skills & Experience

  • 4–8 years in DevOps or Cloud Infrastructure roles.
  • Proficiency with AWS (EKS, Lambda, API Gateway, S3, IAM).
  • Experience with infrastructure-as-code and CI/CD automation.
  • Familiarity with monitoring, alerting, and incident management.

What Success Looks Like

  • < 10 min build-to-deploy cycle.
  • 99.999% uptime with proactive incident response.
  • Documented and repeatable DevOps workflows.
AbleCredit

at AbleCredit

2 candid answers
Utkarsh Apoorva
Posted by Utkarsh Apoorva
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹35L / yr
CI/CD
DevOps
Security Information and Event Management (SIEM)
ISO/IEC 27001:2005

Role: Senior Security Engineer

Salary: INR 20-35L per annum

Performance Bonus: Up to 10% of the base salary

Location: Hulimavu, Bangalore, India

Experience: 5-8 years



About AbleCredit:

AbleCredit has built a foundational AI platform to help BFSI enterprises reduce OPEX by up to 70% by powering workflows for onboarding, claims, credit, and collections. Our GenAI model achieves over 95% accuracy in understanding Indian dialects and excels in financial analysis.


The company was founded in June 2023 by Utkarsh Apoorva (IIT Delhi, built Reshamandi, Guitarstreet, Edulabs); Harshad Saykhedkar (IITB, ex-AI Lead at Slack); and Ashwini Prabhu (IIML, co-founder of Mythiksha, ex-Product Head at Reshamandi, HandyTrain).




What Work You’ll Do

  • Be the guardian of trust — every system you secure will protect millions of data interactions.
  • Operate like a builder, not a gatekeeper — automate guardrails that make security invisible but ever-present.
  • You’ll define what ‘secure by default’ means for a next-generation AI SaaS platform.
  • Own the security posture of our cloud-native SaaS platform — design, implement, and enforce security controls across AWS, Linux, and Kubernetes (EKS) environments.
  • Drive security compliance initiatives such as SOC 2 Type II, ISO 27001, and RBI-aligned frameworks — build systems that enforce, not just document, compliance.
  • Architect defense-in-depth systems across EC2, S3, IAM, and VPC layers, ensuring secure configuration, least-privilege access, and continuous compliance.
  • Build and automate security pipelines — integrate AWS Security Hub, GuardDuty, Inspector, WAF, and CloudTrail into continuous detection and response systems.
  • Lead vulnerability management and incident readiness — identify, prioritize, and remediate vulnerabilities across the stack while ensuring traceable audit logs.
  • Implement and maintain zero-trust and least-privilege access controls using AWS IAM, SSO, and modern secrets management tools like AWS SSM or Vault.
  • Serve as a trusted advisor — train developers, review architecture, and proactively identify risks before they surface.
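By way of illustration of the secrets-management point above: fetching a credential from AWS SSM Parameter Store at runtime keeps it out of images and environment files. The parameter path below is hypothetical; in production `ssm_client` would come from `boto3.client("ssm")`:

```python
def get_db_password(ssm_client, name="/prod/db/password"):
    """Fetch a credential at runtime from SSM Parameter Store.

    `name` is an illustrative parameter path; injecting the client makes
    the helper testable without AWS access.
    """
    resp = ssm_client.get_parameter(Name=name, WithDecryption=True)
    return resp["Parameter"]["Value"]
```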




The Skills You Have

  • Deep hands-on experience with AWS security architecture — IAM, VPCs, EKS, EC2, S3, CloudTrail, Security Hub, WAF, GuardDuty, and Inspector.
  • Strong background in Linux hardening, container security, and DevSecOps automation.
  • Proficiency with infrastructure-as-code (Terraform, CloudFormation) and integrating security controls into provisioning.
  • Knowledge of zero-trust frameworks, least-privilege IAM, and secrets management (Vault, SSM, KMS).
  • Experience with SIEM and monitoring tools — configuring alerts, analyzing logs, and responding to incidents.
  • Familiarity with compliance automation and continuous assurance — especially SOC 2, ISO 27001, or RBI frameworks.
  • Understanding of secure software supply chains — dependency scanning, artifact signing, and policy enforcement in CI/CD.
  • Ability to perform risk assessment, threat modeling, and architecture review collaboratively with engineering teams.



What You Should Have Done in the Past

  • Secured cloud-native SaaS systems built entirely on AWS (EC2, EKS, S3, IAM, VPC).
  • Led or contributed to SOC 2 Type II or ISO 27001 certification initiatives, ideally in a regulated industry such as FinTech.
  • Designed secure CI/CD pipelines with integrated code scanning, image validation, and secrets rotation.
  • (Bonus) Built internal security automation frameworks or tooling for continuous monitoring and compliance checks.





TrumetricAI
Yashika Tiwari
Posted by Yashika Tiwari
Bengaluru (Bangalore)
3 - 7 yrs
₹12L - ₹20L / yr
Amazon Web Services (AWS)
CI/CD
Git
Docker
Kubernetes

Key Responsibilities:

  • Design, implement, and maintain scalable, secure, and cost-effective infrastructure on AWS and Azure
  • Set up and manage CI/CD pipelines for smooth code integration and delivery using tools like GitHub Actions, Bitbucket Runners, AWS Code build/deploy, Azure DevOps, etc.
  • Containerize applications using Docker and manage orchestration with Kubernetes, ECS, Fargate, AWS EKS, Azure AKS.
  • Manage and monitor production deployments to ensure high availability and performance
  • Implement and manage CDN solutions using AWS CloudFront and Azure Front Door for optimal content delivery and latency reduction
  • Define and apply caching strategies at application, CDN, and reverse proxy layers for performance and scalability
  • Set up and manage reverse proxies and Cloudflare WAF to ensure application security and performance
  • Implement infrastructure as code (IaC) using Terraform, CloudFormation, or ARM templates
  • Administer and optimize databases (RDS, PostgreSQL, MySQL, etc.) including backups, scaling, and monitoring
  • Configure and maintain VPCs, subnets, routing, VPNs, and security groups for secure and isolated network setups
  • Implement monitoring, logging, and alerting using tools like CloudWatch, Grafana, ELK, or Azure Monitor
  • Collaborate with development and QA teams to align infrastructure with application needs
  • Troubleshoot infrastructure and deployment issues efficiently and proactively
  • Ensure cloud cost optimization and usage tracking


Required Skills & Experience:

  • 3-4 years of hands-on experience in a DevOps role
  • Strong expertise with both AWS and Azure cloud platforms
  • Proficient in Git, branching strategies, and pull request workflows
  • Deep understanding of CI/CD concepts and experience with pipeline tools
  • Proficiency in Docker, container orchestration (Kubernetes, ECS/EKS/AKS)
  • Good knowledge of relational databases and experience in managing DB backups, performance, and migrations
  • Experience with networking concepts including VPC, subnets, firewalls, VPNs, etc.
  • Experience with Infrastructure as Code tools (Terraform preferred)
  • Strong working knowledge of CDN technologies: AWS CloudFront and Azure Front Door
  • Understanding of caching strategies: edge caching, browser caching, API caching, and reverse proxy-level caching
  • Experience with Cloudflare WAF, reverse proxy setups, SSL termination, and rate-limiting
  • Familiarity with Linux system administration, scripting (Bash, Python), and automation tools
  • Working knowledge of monitoring and logging tools
  • Strong troubleshooting and problem-solving skills
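The application-layer caching mentioned above often reduces to a time-to-live (TTL) cache in front of an expensive call. Below is a minimal, illustrative Python sketch (not a specific library's API); real deployments would typically reach for Redis or a CDN edge cache instead:

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds, clock=time.monotonic):
    """Cache a function's results for `ttl_seconds` per argument tuple."""
    def decorator(fn):
        store = {}  # args tuple -> (expires_at, value)

        @wraps(fn)
        def wrapper(*args):
            now = clock()
            hit = store.get(args)
            if hit is not None and hit[0] > now:
                return hit[1]  # fresh entry: skip the expensive call
            value = fn(*args)
            store[args] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator
```

The same expiry idea applies at every layer in the list: browser caches via `Cache-Control: max-age`, CDN edges via origin TTLs, and reverse proxies via their own cache directives.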


Good to Have (Bonus Points):

  • Experience with serverless architecture (e.g., AWS Lambda, Azure Functions)
  • Exposure to cost monitoring tools like CloudHealth, Azure Cost Management
  • Experience with compliance/security best practices (SOC2, ISO, etc.)
  • Familiarity with Service Mesh (Istio, Linkerd) and API gateways
  • Knowledge of Secrets Management tools (e.g., HashiCorp Vault, AWS Secrets Manager)


Aryush Infotech India Pvt Ltd
Nitin Gupta
Posted by Nitin Gupta
Bengaluru (Bangalore), Bhopal
2 - 3 yrs
₹3L - ₹4L / yr
Fintech
Test Automation (QA)
Manual testing
Postman
JIRA
+5 more

Job Title: QA Tester – FinTech (Manual + Automation Testing)

Location: Bangalore, India

Job Type: Full-Time

Experience Required: 3 Years

Industry: FinTech / Financial Services

Function: Quality Assurance / Software Testing

 

About the Role:

We are looking for a skilled QA Tester with 3 years of experience in both manual and automation testing, ideally in the FinTech domain. The candidate will work closely with development and product teams to ensure that our financial applications meet the highest standards of quality, performance, and security.

 

Key Responsibilities:

  • Analyze business and functional requirements for financial products and translate them into test scenarios.
  • Design, write, and execute manual test cases for new features, enhancements, and bug fixes.
  • Develop and maintain automated test scripts using tools such as Selenium, TestNG, or similar frameworks.
  • Conduct API testing using Postman, Rest Assured, or similar tools.
  • Perform functional, regression, integration, and system testing across web and mobile platforms.
  • Work in an Agile/Scrum environment and actively participate in sprint planning, stand-ups, and retrospectives.
  • Log and track defects using JIRA or a similar defect management tool.
  • Collaborate with developers, BAs, and DevOps teams to improve quality across the SDLC.
  • Ensure test coverage for critical fintech workflows like transactions, KYC, lending, payments, and compliance.
  • Assist in setting up CI/CD pipelines for automated test execution using tools like Jenkins, GitLab CI, etc.
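The manual-to-automation move described above often starts by turning written test cases into data-driven checks. The sketch below is purely illustrative: `validate_transfer` and its rules are hypothetical, not a real product API, and the table of cases mirrors what `pytest.mark.parametrize` would express:

```python
def validate_transfer(payload):
    """Hypothetical validation for a funds-transfer request.

    Returns a list of error strings; an empty list means valid.
    """
    errors = []
    if payload.get("amount") is None or payload["amount"] <= 0:
        errors.append("amount must be positive")
    if payload.get("currency") not in {"INR", "USD"}:
        errors.append("unsupported currency")
    if not payload.get("beneficiary_id"):
        errors.append("beneficiary required")
    return errors

# Data-driven cases: (payload, expected number of errors)
CASES = [
    ({"amount": 100, "currency": "INR", "beneficiary_id": "B1"}, 0),
    ({"amount": -5, "currency": "INR", "beneficiary_id": "B1"}, 1),
    ({"amount": 100, "currency": "EUR", "beneficiary_id": ""}, 2),
]

def run_cases():
    return [len(validate_transfer(p)) == n for p, n in CASES]
```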

 

Required Skills and Experience:

  • 3+ years of hands-on experience in manual and automation testing.
  • Solid understanding of QA methodologies, STLC, and SDLC.
  • Experience in testing FinTech applications such as digital wallets, online banking, investment platforms, etc.
  • Strong experience with Selenium WebDriver, TestNG, Postman, and JIRA.
  • Knowledge of API testing, including RESTful services.
  • Familiarity with SQL to validate data in databases.
  • Understanding of CI/CD processes and basic scripting for automation integration.
  • Good problem-solving skills and attention to detail.
  • Excellent communication and documentation skills.

 

Preferred Qualifications:

  • Exposure to financial compliance and regulatory testing (e.g., PCI DSS, AML/KYC).
  • Experience with mobile app testing (iOS/Android).
  • Working knowledge of test management tools like TestRail, Zephyr, or Xray.
  • Performance testing experience (e.g., JMeter, LoadRunner) is a plus.
  • Basic knowledge of version control systems (e.g., Git).


Global digital transformation solutions provider

Global digital transformation solutions provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 9 yrs
₹15L - ₹25L / yr
Data engineering
Apache Kafka
Python
Amazon Web Services (AWS)
AWS Lambda
+11 more

Job Details

- Job Title: Lead I - Data Engineering 

- Industry: Global digital transformation solutions provider

- Domain - Information technology (IT)

- Experience Required: 6-9 years

- Employment Type: Full Time

- Job Location: Pune

- CTC Range: Best in Industry


Job Description

Job Title: Senior Data Engineer (Kafka & AWS)

Responsibilities:

  • Develop and maintain real-time data pipelines using Apache Kafka (MSK or Confluent) and AWS services.
  • Configure and manage Kafka connectors, ensuring seamless data flow and integration across systems.
  • Demonstrate strong expertise in the Kafka ecosystem, including producers, consumers, brokers, topics, and schema registry.
  • Design and implement scalable ETL/ELT workflows to efficiently process large volumes of data.
  • Optimize data lake and data warehouse solutions using AWS services such as Lambda, S3, and Glue.
  • Implement robust monitoring, testing, and observability practices to ensure reliability and performance of data platforms.
  • Uphold data security, governance, and compliance standards across all data operations.
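The producer/consumer/topic concepts in the responsibilities above can be illustrated with a minimal in-memory sketch. This is pure Python with no broker; a real pipeline would use a Kafka client library against MSK or Confluent, and the class and record names here are purely illustrative:

```python
from collections import defaultdict

class MiniBroker:
    """Toy stand-in for a Kafka broker: each topic is an append-only log,
    and each consumer group tracks its own read offset per topic."""

    def __init__(self):
        self.topics = defaultdict(list)   # topic -> list of records
        self.offsets = defaultdict(int)   # (group, topic) -> next offset

    def produce(self, topic, record):
        self.topics[topic].append(record)

    def consume(self, group, topic):
        """Return the next unread record for this consumer group, or None."""
        offset = self.offsets[(group, topic)]
        log = self.topics[topic]
        if offset >= len(log):
            return None
        self.offsets[(group, topic)] += 1   # commit the offset
        return log[offset]

broker = MiniBroker()
broker.produce("orders", {"id": 1, "amount": 99})
broker.produce("orders", {"id": 2, "amount": 150})

# Two independent consumer groups each read the full log at their own pace.
first = broker.consume("billing", "orders")
second = broker.consume("billing", "orders")
other = broker.consume("analytics", "orders")
```

The key property shown is that offsets belong to consumer groups, not to the topic: the "analytics" group starts from the beginning of the log regardless of how far "billing" has read.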

 

Requirements:

  • Minimum of 5 years of experience in Data Engineering or related roles.
  • Proven expertise with Apache Kafka and the AWS data stack (MSK, Glue, Lambda, S3, etc.).
  • Proficient in coding with Python, SQL, and Java — with Java strongly preferred.
  • Experience with Infrastructure-as-Code (IaC) tools (e.g., CloudFormation) and CI/CD pipelines.
  • Excellent problem-solving, communication, and collaboration skills.
  • Flexibility to write production-quality code in both Python and Java as required.

 

Skills: AWS, Kafka, Python


Must-Haves

Minimum of 5 years of experience in Data Engineering or related roles.

Proven expertise with Apache Kafka and the AWS data stack (MSK, Glue, Lambda, S3, etc.).

Proficient in coding with Python, SQL, and Java — with Java strongly preferred.

Experience with Infrastructure-as-Code (IaC) tools (e.g., CloudFormation) and CI/CD pipelines.

Excellent problem-solving, communication, and collaboration skills.

Flexibility to write production-quality code in both Python and Java as required.

Skills: AWS, Kafka, Python

Notice period - 0 to 15 days only

Read more
Global digital transformation solutions provider.


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Coimbatore, Hosur, Hyderabad
12 - 15 yrs
₹20L - ₹35L / yr
DevOps
Automation
skill iconGitHub
Agile management
Agile/Scrum
+3 more

Job Details

- Job Title: DevOps and SRE - Technical Project Manager 

- Industry: Global digital transformation solutions provider

- Domain - Information technology (IT)

- Experience Required: 12-15 years

- Employment Type: Full Time

- Job Location: Bangalore, Chennai, Coimbatore, Hosur & Hyderabad

- CTC Range: Best in Industry


Job Description

Company’s DevOps Practice is seeking a highly skilled DevOps and SRE Technical Project Manager to lead large-scale transformation programs for enterprise customers. The ideal candidate will bring deep expertise in DevOps and Site Reliability Engineering (SRE), combined with strong program management, stakeholder leadership, and the ability to drive end-to-end execution of complex initiatives.


Key Responsibilities

  • Lead the planning, execution, and successful delivery of DevOps and SRE transformation programs for enterprise clients, including full oversight of project budgets, financials, and margins.
  • Partner with senior stakeholders to define program objectives, roadmaps, milestones, and success metrics aligned with business and technology goals.
  • Develop and implement actionable strategies to optimize development, deployment, release management, observability, and operational workflows across client environments.
  • Provide technical leadership and strategic guidance to cross-functional engineering teams, ensuring alignment with industry standards, best practices, and company delivery methodologies.
  • Identify risks, dependencies, and blockers across programs, and proactively implement mitigation and contingency plans.
  • Monitor program performance, KPIs, and financial health; drive corrective actions and margin optimization where necessary.
  • Facilitate strong communication, collaboration, and transparency across engineering, product, architecture, and leadership teams.
  • Deliver periodic program updates to internal and client stakeholders, highlighting progress, risks, challenges, and improvement opportunities.
  • Champion a culture of continuous improvement, operational excellence, and innovation by encouraging adoption of emerging DevOps, SRE, automation, and cloud-native practices.
  • Support GitHub migration initiatives, including planning, execution, troubleshooting, and governance setup for repository and workflow migrations.

 

Requirements

  • Bachelor’s degree in Computer Science, Engineering, Business Administration, or a related technical discipline.
  • 15+ years of IT experience, including at least 5 years in a managerial or program leadership role.
  • Proven experience leading large-scale DevOps and SRE transformation programs with measurable business impact.
  • Strong program management expertise, including planning, execution oversight, risk management, and financial governance.
  • Solid understanding of Agile methodologies (Scrum, Kanban) and modern software development practices.
  • Deep hands-on knowledge of DevOps principles, CI/CD pipelines, automation frameworks, Infrastructure as Code (IaC), and cloud-native tooling.
  • Familiarity with SRE practices such as service reliability, observability, SLIs/SLOs, incident management, and performance optimization.
  • Experience with GitHub migration projects—including repository analysis, migration planning, tooling adoption, and workflow modernization.
  • Excellent communication, stakeholder management, and interpersonal skills with the ability to influence and lead cross-functional teams.
  • Strong analytical, organizational, and problem-solving skills with a results-oriented mindset.
  • Preferred certifications: PMP, PgMP, ITIL, Agile/Scrum Master, or relevant technical certifications.

 

Skills: DevOps Tools, Cloud Infrastructure, Team Management


Must-Haves

DevOps principles (5+ years), SRE practices (5+ years), GitHub migration (3+ years), CI/CD pipelines (5+ years), Agile methodologies (5+ years)

Notice period - 0 to 15 days only

Read more
AI-First Company


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 17 yrs
₹30L - ₹45L / yr
Data engineering
Data architecture
SQL
Data modeling
GCS
+47 more

ROLES AND RESPONSIBILITIES:

You will be responsible for architecting, implementing, and optimizing Dremio-based data Lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.


  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.


IDEAL CANDIDATE:

  • Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.


PREFERRED:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
  • Exposure to Snowflake, Databricks, or BigQuery environments.
  • Experience in high-tech, manufacturing, or enterprise data modernization programs.
Read more
Capital Squared
Remote only
5 - 10 yrs
₹25L - ₹55L / yr
MLOps
DevOps
Google Cloud Platform (GCP)
CI/CD
skill iconPostgreSQL
+4 more

Role: Full-Time, Long-Term

Required: Docker, GCP, CI/CD

Preferred: Experience with ML pipelines


OVERVIEW

We are seeking a DevOps engineer to join as a core member of our technical team. This is a long-term position for someone who wants to own infrastructure and deployment for a production machine learning system. You will ensure our prediction pipeline runs reliably, deploys smoothly, and scales as needed.


The ideal candidate thinks about failure modes obsessively, automates everything possible, and builds systems that run without constant attention.


CORE TECHNICAL REQUIREMENTS

Docker (Required): Deep experience with containerization. Efficient Dockerfiles, layer caching, multi-stage builds, debugging container issues. Experience with Docker Compose for local development.
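The multi-stage builds mentioned above might look like the following sketch for a generic Python service (a hedged example, not this team's actual Dockerfile; base image tags, file names, and the entrypoint are illustrative):

```dockerfile
# Build stage: install dependencies into a throwaway layer
FROM python:3.12-slim AS build
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Runtime stage: copy only the installed packages and source,
# keeping the final image small and free of build tooling
FROM python:3.12-slim
WORKDIR /app
COPY --from=build /install /usr/local
COPY . .
CMD ["python", "main.py"]
```

Copying `requirements.txt` before the source also exploits layer caching: dependency installation reruns only when the requirements change, not on every code edit.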


Google Cloud Platform (Required): Strong GCP experience: Cloud Run for serverless containers, Compute Engine for VMs, Artifact Registry for images, Cloud Storage, IAM. You can navigate the console but prefer scripting everything.


CI/CD (Required): Build and maintain deployment pipelines. GitHub Actions required. You automate testing, building, pushing, and deploying. You understand the difference between continuous integration and continuous deployment.


Linux Administration (Required): Comfortable on the command line. SSH, diagnose problems, manage services, read logs, fix things. Bash scripting is second nature.


PostgreSQL (Required): Database administration basics—backups, monitoring, connection management, basic performance tuning. Not a DBA, but comfortable keeping a production database healthy.


Infrastructure as Code (Preferred): Terraform, Pulumi, or similar. Infrastructure should be versioned, reviewed, and reproducible—not clicked together in a console.


WHAT YOU WILL OWN

Deployment Pipeline: Maintaining and improving deployment scripts and CI/CD workflows. Code moves from commit to production reliably with appropriate testing gates.


Cloud Run Services: Managing deployments for model fitting, data cleansing, and signal discovery services. Monitor health, optimize cold starts, handle scaling.


VM Infrastructure: PostgreSQL and Streamlit on GCP VMs. Instance management, updates, backups, security.


Container Registry: Managing images in GitHub Container Registry and Google Artifact Registry. Cleanup policies, versioning, access control.


Monitoring and Alerting: Building observability. Logging, metrics, health checks, alerting. Know when things break before users tell us.


Environment Management: Configuration across local and production. Secrets management. Environment parity where it matters.


WHAT SUCCESS LOOKS LIKE

Deployments are boring—no drama, no surprises. Systems recover automatically from transient failures. Engineers deploy with confidence. Infrastructure changes are versioned and reproducible. Costs are reasonable and resources scale appropriately.


ENGINEERING STANDARDS

Automation First: If you do something twice, automate it. Manual processes are bugs waiting to happen.


Documentation: Runbooks, architecture diagrams, deployment guides. The next person can understand and operate the system.


Security Mindset: Secrets never in code. Least-privilege access. You think about attack surfaces.
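As a small illustration of the "secrets never in code" rule, configuration can be read from the environment at startup. The variable name below is illustrative; on this stack the value would typically be injected by GCP Secret Manager or the CI/CD secret store rather than set in the script:

```python
import os

def get_db_password():
    """Read the database password from the environment, failing fast
    if the secret was not injected rather than falling back to a
    hard-coded default."""
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD is not set; inject it via the environment")
    return password

# Normally set by the deploy platform; done inline here only for illustration.
os.environ["DB_PASSWORD"] = "example-only"
print(get_db_password())
```

Failing fast on a missing secret turns a misconfiguration into an obvious startup error instead of a silent connection with wrong credentials.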


Reliability Focus: Design for failure. Backups are tested. Recovery procedures exist and work.


CURRENT ENVIRONMENT

GCP (Cloud Run, Compute Engine, Artifact Registry, Cloud Storage), Docker, Docker Compose, GitHub Actions, PostgreSQL 16, Bash deployment scripts with Python wrapper.


WHAT WE ARE LOOKING FOR

Ownership Mentality: You see a problem, you fix it. You do not wait for assignment.


Calm Under Pressure: When production breaks, you diagnose methodically.


Communication: You explain infrastructure decisions to non-infrastructure people. You document what you build.


Long-Term Thinking: You build systems maintained for years, not quick fixes creating tech debt.


EDUCATION

University degree in Computer Science, Engineering, or related field preferred. Equivalent demonstrated expertise also considered.


TO APPLY

Include: (1) CV/resume, (2) Brief description of infrastructure you built or maintained, (3) Links to relevant work if available, (4) Availability and timezone.

Read more
Auxo AI
kusuma Gullamajji
Posted by kusuma Gullamajji
Hyderabad, Bengaluru (Bangalore), Mumbai, Gurugram
4 - 7 yrs
₹15L - ₹35L / yr
skill iconHTML/CSS
skill iconJavascript
skill iconPython
skill iconNodeJS (Node.js)
CI/CD

Responsibilities :

  • Design and develop user-friendly web interfaces using HTML, CSS, and JavaScript.
  • Utilize modern frontend frameworks and libraries such as React, Angular, or Vue.js to build dynamic and responsive web applications.
  • Develop and maintain server-side logic using programming languages such as Java, Python, Ruby, Node.js, or PHP.
  • Build and manage APIs for seamless communication between the frontend and backend systems.
  • Integrate third-party services and APIs to enhance application functionality.
  • Implement CI/CD pipelines to automate testing, integration, and deployment processes.
  • Monitor and optimize the performance of web applications to ensure a high-quality user experience.
  • Stay up-to-date with emerging technologies and industry trends to continuously improve development processes and application performance.

Qualifications :

  • Bachelor's/Master's in Computer Science or related subjects, or hands-on experience demonstrating a working understanding of software applications.
  • Knowledge of building applications that can be deployed in a cloud environment or are cloud native applications.
  • Strong expertise in building backend applications using Java/C#/Python with demonstrable experience in using frameworks such as Spring/Vertx/.Net/FastAPI.
  • Deep understanding of enterprise design patterns, API development and integration and Test-Driven Development (TDD)
  • Working knowledge in building applications that leverage databases such as PostgreSQL, MySQL, MongoDB, Neo4J or storage technologies such as AWS S3, Azure Blob Storage.
  • Hands-on experience in building enterprise applications adhering to their needs of security and reliability.
  • Hands-on experience building applications using one of the major cloud providers (AWS, Azure, GCP).
  • Working knowledge of CI/CD tools for application integration and deployment.
  • Working knowledge of using reliability tools to monitor the performance of the application.


Read more
AsperAI

at AsperAI

4 candid answers
Bisman Gill
Posted by Bisman Gill
BLR
3 - 6 yrs
Upto ₹33L / yr (Varies)
CI/CD
skill iconKubernetes
skill iconDocker
kubeflow
TensorFlow
+7 more

About the Role

We are seeking a highly skilled and experienced AI Ops Engineer to join our team. In this role, you will be responsible for ensuring the reliability, scalability, and efficiency of our AI/ML systems in production. You will work at the intersection of software engineering, machine learning, and DevOps— helping to design, deploy, and manage AI/ML models and pipelines that power mission-critical business applications.

The ideal candidate has hands-on experience in AI/ML operations and orchestrating complex data pipelines, a strong understanding of cloud-native technologies, and a passion for building robust, automated, and scalable systems.


Key Responsibilities

  • AI/ML Systems Operations: Develop and manage systems to run and monitor production AI/ML workloads, ensuring performance, availability, cost-efficiency and convenience.
  • Deployment & Automation: Build and maintain ETL, ML and Agentic pipelines, ensuring reproducibility and smooth deployments across environments.
  • Monitoring & Incident Response: Design observability frameworks for ML systems (alerts and notifications, latency, cost, etc.) and lead incident triage, root cause analysis, and remediation.
  • Collaboration: Partner with data scientists, ML engineers, and software engineers to operationalize models at scale.
  • Optimization: Continuously improve infrastructure, workflows, and automation to reduce latency, increase throughput, and minimize costs.
  • Governance & Compliance: Implement MLOps best practices, including versioning, auditing, security, and compliance for data and models.
  • Leadership: Mentor junior engineers and contribute to the development of AI Ops standards and playbooks.


Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
  • 4+ years of experience in AI/MLOps, DevOps, SRE, or Data Engineering, with at least 2 years in AI/ML-focused operations.
  • Strong expertise with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker).
  • Hands-on experience with ML pipelines and frameworks (MLflow, Kubeflow, Airflow, SageMaker, Vertex AI, etc.).
  • Proficiency in Python and/or other scripting languages for automation.
  • Familiarity with monitoring/observability tools (Prometheus, Grafana, Datadog, ELK, etc.).
  • Deep understanding of CI/CD, GitOps, and Infrastructure as Code (Terraform, Helm, etc.).
  • Knowledge of data governance, model drift detection, and compliance in AI systems.
  • Excellent problem-solving, communication, and collaboration skills.

Nice-to-Have

  • Experience in large-scale distributed systems and real-time data streaming (Kafka, Flink, Spark).
  • Familiarity with data science concepts and frameworks such as scikit-learn, Keras, PyTorch, TensorFlow, etc.
  • Full Stack Development knowledge to collaborate effectively across end-to-end solution delivery
  • Contributions to open-source MLOps/AI Ops tools or platforms.
  • Exposure to Responsible AI practices, model fairness, and explainability frameworks

Why Join Us

  • Opportunity to shape and scale AI/ML operations in a fast-growing, innovation-driven environment.
  • Work alongside leading data scientists and engineers on cutting-edge AI solutions.
  • Competitive compensation, benefits, and career growth opportunities.
Read more
Codemonk

at Codemonk

4 candid answers
2 recruiters
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
1yr+
Upto ₹10L / yr (Varies)
DevOps
skill iconAmazon Web Services (AWS)
CI/CD
skill iconDocker
skill iconKubernetes
+3 more

Role Overview

We are seeking a DevOps Engineer with 2 years of experience to join our innovative team. The ideal candidate will bridge the gap between development and operations, implementing and maintaining our cloud infrastructure while ensuring secure deployment pipelines and robust security practices for our client projects.


Responsibilities:

  • Design, implement, and maintain CI/CD pipelines.
  • Containerize applications using Docker and orchestrate deployments
  • Manage and optimize cloud infrastructure on AWS and Azure platforms
  • Monitor system performance and implement automation for operational tasks to ensure optimal performance, security, and scalability.
  • Troubleshoot and resolve infrastructure and deployment issues
  • Create and maintain documentation for processes and configurations
  • Collaborate with cross-functional teams to gather requirements, prioritise tasks, and contribute to project completion.
  • Stay informed about emerging technologies and best practices within the fields of DevOps and cloud computing.


Requirements:

  • 2+ years of hands-on experience with AWS cloud services
  • Strong proficiency in CI/CD pipeline configuration
  • Expertise in Docker containerisation and container management
  • Proficiency in shell scripting (Bash/PowerShell)
  • Working knowledge of monitoring and logging tools
  • Knowledge of network security and firewall configuration
  • Strong communication and collaboration skills, with the ability to work effectively within a team environment
  • Understanding of networking concepts and protocols in AWS and/or Azure
Read more
Deqode

at Deqode

1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Gurugram, Delhi, Noida, Ghaziabad, Faridabad
6 - 10 yrs
₹8L - ₹25L / yr
skill iconReact.js
TypeScript
CI/CD
skill iconRedux/Flux

Hiring: Reactjs Developer at Deqode

⭐ Experience: 6+ Years

📍 Location: Gurgaon

⭐ Work Mode: Hybrid

⏱️ Notice Period: Immediate Joiners

(Only immediate joiners & candidates serving notice period)


We are hiring Senior Frontend Engineers with strong experience in React.js and TypeScript to build scalable, high-performance web applications using micro frontend architecture.


✨ Key Requirements:

✅ 6+ years of frontend development experience

✅ Strong expertise in React.js, TypeScript, Hooks, and state management (Redux/Context)

✅ Experience with Micro Frontends

✅ API integration (REST/GraphQL)

✅ Unit & integration testing (Jest, React Testing Library)

✅ CI/CD pipelines (Jenkins, GitLab CI, Azure DevOps)

✅ Basic knowledge of Kubernetes & Docker

✅ Strong Git, performance optimization, and problem-solving skills


Read more
Trential Technologies

at Trential Technologies

1 candid answer
Garima Jangid
Posted by Garima Jangid
Gurugram
3 - 5 yrs
₹20L - ₹35L / yr
skill iconJavascript
skill iconNodeJS (Node.js)
skill iconAmazon Web Services (AWS)
NOSQL Databases
Google Cloud Platform (GCP)
+7 more

What you'll be doing:

As a Software Developer at Trential, you will be the bridge between technical strategy and hands-on execution. You will be working with our dedicated engineering team designing, building, and deploying our core platforms and APIs. You will ensure our solutions are scalable, secure, interoperable, and aligned with open standards and our core vision. You will also build and maintain back-end interfaces using modern frameworks.

  • Design & Implement: Lead the design, implementation and management of Trential’s products.
  • Code Quality & Best Practices: Enforce high standards for code quality, security, and performance through rigorous code reviews, automated testing, and continuous delivery pipelines.
  • Standards Adherence: Ensure all solutions comply with relevant open standards like W3C Verifiable Credentials (VCs), Decentralized Identifiers (DIDs) & Privacy Laws, maintaining global interoperability.
  • Continuous Improvement: Lead the charge to continuously evaluate and improve the products & processes. Instill a culture of metrics-driven process improvement to boost team efficiency and product quality.
  • Cross-Functional Collaboration: Work closely with the Co-Founders & Product Team to translate business requirements and market needs into clear, actionable technical specifications and stories. Represent Trential in interactions with external stakeholders for integrations.


What we're looking for:

  • 3+ years of experience in backend development.
  • Deep proficiency in JavaScript and Node.js, with experience in building and operating distributed, fault-tolerant systems.
  • Hands-on experience with cloud platforms (AWS & GCP) and modern DevOps practices (e.g., CI/CD, Infrastructure as Code, Docker).
  • Strong knowledge of SQL/NoSQL databases and data modeling for high-throughput, secure applications.

Preferred Qualifications (Nice to Have)

  • Knowledge of decentralized identity principles, Verifiable Credentials (W3C VCs), DIDs, and relevant protocols (e.g., OpenID4VC, DIDComm)
  • Familiarity with data privacy and security standards (GDPR, SOC 2, ISO 27001) and designing systems complying to these laws.
  • Experience integrating AI/ML models into verification or data extraction workflows.
Read more
iMerit
Bengaluru (Bangalore)
6 - 9 yrs
₹10L - ₹15L / yr
DevOps
Terraform
Apache Kafka
skill iconPython
skill iconGo Programming (Golang)
+4 more

Exp: 7–10 Years

CTC: up to 35 LPA


Skills:

  • 6–10 years DevOps / SRE / Cloud Infrastructure experience
  • Expert-level Kubernetes (networking, security, scaling, controllers)
  • Terraform Infrastructure-as-Code mastery
  • Hands-on Kafka production experience
  • AWS cloud architecture and networking expertise
  • Strong scripting in Python, Go, or Bash
  • GitOps and CI/CD tooling experience


Key Responsibilities:

  • Design highly available, secure cloud infrastructure supporting distributed microservices at scale
  • Lead multi-cluster Kubernetes strategy optimized for GPU and multi-tenant workloads
  • Implement Infrastructure-as-Code using Terraform across full infrastructure lifecycle
  • Optimize Kafka-based data pipelines for throughput, fault tolerance, and low latency
  • Deliver zero-downtime CI/CD pipelines using GitOps-driven deployment models
  • Establish SRE practices with SLOs, p95 and p99 monitoring, and FinOps discipline
  • Ensure production-ready disaster recovery and business continuity testing
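The p95/p99 monitoring in the responsibilities above reduces to computing percentiles over latency samples. A minimal nearest-rank sketch in pure Python (in practice these values come from Prometheus histogram queries or the observability platform, not hand-rolled code):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest value such that at least
    p% of the samples are at or below it."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))   # 1-based nearest rank
    return ordered[rank - 1]

# Latencies in milliseconds for 100 requests: 1..99 plus one slow outlier.
latencies = list(range(1, 100)) + [500]
p95 = percentile(latencies, 95)
p99 = percentile(latencies, 99)
```

The outlier illustrates why SLOs track tail percentiles rather than the mean: one 500 ms request barely moves the average but dominates the worst-case user experience.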



If interested, kindly share your updated resume at 82008 31681.

Read more
Financial Services Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
4 - 5 yrs
₹10L - ₹20L / yr
skill iconPython
CI/CD
SQL
skill iconKubernetes
Stakeholder management
+14 more

Required Skills: CI/CD Pipeline, Kubernetes, SQL Database, Excellent Communication & Stakeholder Management, Python

 

Criteria:

Looking for candidates with a notice period of 15 days (maximum 30 days).

Looking for candidates from the Hyderabad location only.

Looking for candidates from EPAM only.

1. 4+ years of software development experience

2. Strong experience with Kubernetes, Docker, and CI/CD pipelines in cloud-native environments.

3. Hands-on with NATS for event-driven architecture and streaming.

4. Skilled in microservices, RESTful APIs, and containerized app performance optimization.

5. Strong in problem-solving, team collaboration, clean code practices, and continuous learning.

6.  Proficient in Python (Flask) for building scalable applications and APIs.

7. Focus: Java, Python, Kubernetes, Cloud-native development

8. SQL database 

 

Description

Position Overview

We are seeking a skilled Developer to join our engineering team. The ideal candidate will have strong expertise in Java and Python ecosystems, with hands-on experience in modern web technologies, messaging systems, and cloud-native development using Kubernetes.


Key Responsibilities

  • Design, develop, and maintain scalable applications using Java and Spring Boot framework
  • Build robust web services and APIs using Python and Flask framework
  • Implement event-driven architectures using NATS messaging server
  • Deploy, manage, and optimize applications in Kubernetes environments
  • Develop microservices following best practices and design patterns
  • Collaborate with cross-functional teams to deliver high-quality software solutions
  • Write clean, maintainable code with comprehensive documentation
  • Participate in code reviews and contribute to technical architecture decisions
  • Troubleshoot and optimize application performance in containerized environments
  • Implement CI/CD pipelines and follow DevOps best practices

Required Qualifications

  • Bachelor's degree in Computer Science, Information Technology, or related field
  • 4+ years of experience in software development
  • Strong proficiency in Java with deep understanding of web technology stack
  • Hands-on experience developing applications with Spring Boot framework
  • Solid understanding of Python programming language with practical Flask framework experience
  • Working knowledge of NATS server for messaging and streaming data
  • Experience deploying and managing applications in Kubernetes
  • Understanding of microservices architecture and RESTful API design
  • Familiarity with containerization technologies (Docker)
  • Experience with version control systems (Git)


Skills & Competencies

  • Skills: Java (Spring Boot, Spring Cloud, Spring Security)
  • Python (Flask, SQLAlchemy, REST APIs)
  • NATS messaging patterns (pub/sub, request/reply, queue groups)
  • Kubernetes (deployments, services, ingress, ConfigMaps, Secrets)
  • Web technologies (HTTP, REST, WebSocket, gRPC)
  • Container orchestration and management
  • Soft Skills: Problem-solving and analytical thinking
  • Strong communication and collaboration
  • Self-motivated with ability to work independently
  • Attention to detail and code quality
  • Continuous learning mindset
  • Team player with mentoring capabilities


Read more
Tradelab Technologies
Aakanksha Yadav
Posted by Aakanksha Yadav
Mumbai
6 - 10 yrs
₹12L - ₹20L / yr
CI/CD
skill iconAmazon Web Services (AWS)
skill iconJenkins
skill iconGitHub
ArgoCD
+1 more

Senior DevOps Engineer (8–10 years)

Location: Mumbai


Role Summary

As a Senior DevOps Engineer, you will own end-to-end platform reliability and delivery automation for mission-critical lending systems. You’ll architect cloud infrastructure, standardize CI/CD, enforce DevSecOps controls, and drive observability at scale—ensuring high availability, performance, and compliance consistent with BFSI standards.


Key Responsibilities


Platform & Cloud Infrastructure

  • Design, implement, and scale multi-account, multi-VPC cloud architectures on AWS and/or Azure (compute, networking, storage, IAM, RDS, EKS/AKS, Load Balancers, CDN). 
  • Champion Infrastructure as Code (IaC) using Terraform (and optionally Pulumi/Crossplane) with GitOps workflows for repeatable, auditable deployments.
  • Lead capacity planning, cost optimization, and performance tuning across environments.

CI/CD & Release Engineering

  • Build and standardize CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps, ArgoCD) for microservices, data services, and frontends; enable blue‑green/canary releases and feature flags.
  • Drive artifact management, environment promotion, and release governance with compliance-friendly controls.

Containers, Kubernetes & Runtime

  • Operate production-grade Kubernetes (EKS/AKS), including cluster lifecycle, autoscaling, ingress, service mesh, and workload security; manage Docker/containerd images and registries. 

Reliability, Observability & Incident Management

  • Implement end-to-end monitoring, logging, and tracing (Prometheus, Grafana, ELK/EFK, CloudWatch/Log Analytics, Datadog/New Relic) with SLO/SLI error budgets. 
  • Establish on-call rotations, run postmortems, and continuously improve MTTR and change failure rate.

Security & Compliance (DevSecOps)

  • Enforce cloud and container hardening, secrets management (AWS Secrets Manager / HashiCorp Vault), vulnerability scanning (Snyk/SonarQube), and policy-as-code (OPA/Conftest).
  • Partner with infosec/risk to meet BFSI regulatory expectations for DR/BCP, audits, and data protection.

Data, Networking & Edge

  • Optimize networking (DNS, TCP/IP, routing, OSI layers) and edge delivery (CloudFront/Fastly), including WAF rules and caching strategies. 
  • Support persistence layers (MySQL, Elasticsearch, DynamoDB) for performance and reliability.

Ways of Working & Leadership

  • Lead cross-functional squads (Product, Engineering, Data, Risk) and mentor junior DevOps/SREs.
  • Document runbooks, architecture diagrams, and operating procedures; drive automation-first culture.


Must‑Have Qualifications

  • 8–10 years of total experience with 5+ years hands-on in DevOps/SRE roles.
  • Strong expertise in AWS and/or Azure, Linux administration, Kubernetes, Docker, and Terraform.
  • Proven track record building CI/CD with Jenkins/GitHub Actions/Azure DevOps/ArgoCD. 
  • Solid grasp of networking fundamentals (DNS, TLS, TCP/IP, routing, load balancing).
  • Experience implementing observability stacks and responding to production incidents. 
  • Scripting in Bash/Python; ability to automate ops workflows and platform tasks. 

Good‑to‑Have / Preferred

  • Exposure to BFSI/fintech systems and compliance standards; DR/BCP planning. 
  • Secrets management (Vault), policy-as-code (OPA), and security scanning (Snyk/SonarQube). 
  • Experience with GitOps patterns, service tiering, and SLO/SLI design.
  • Knowledge of CDNs (CloudFront/Fastly) and edge caching/WAF rule authoring. 

Education

  • Bachelor’s/Master’s in Computer Science, Information Technology, or related field (or equivalent experience).
Read more
Matchmaking platform

Matchmaking platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai
2 - 5 yrs
₹15L - ₹28L / yr
skill iconData Science
skill iconPython
Natural Language Processing (NLP)
MySQL
skill iconMachine Learning (ML)
+15 more

Review Criteria

  • Strong Data Scientist / Machine Learning / AI Engineer profile
  • 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
  • Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
  • Hands-on experience in a minimum of two use cases from among recommendation systems, image data, fraud/risk detection, price modelling, and propensity models
  • Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
  • Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
  • Preferred (Company) – Must be from product companies

 

Job Specific Criteria

  • CV Attachment is mandatory
  • What's your current company?
  • Which use cases do you have hands-on experience with?
  • Are you okay with the Mumbai location (if candidate is from outside Mumbai)?
  • Reason for change (if candidate has been in current company for less than 1 year)?
  • Reason for hike (if greater than 25%)?

 

Role & Responsibilities

  • Partner with Product to spot high-leverage ML opportunities tied to business metrics.
  • Wrangle large structured and unstructured datasets; build reliable features and data contracts.
  • Build and ship models to:
      • Enhance customer experiences and personalization
      • Boost revenue via pricing/discount optimization
      • Power user-to-user discovery and ranking (matchmaking at scale)
      • Detect and block fraud/risk in real time
      • Score conversion/churn/acceptance propensity for targeted actions
  • Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
  • Design and run A/B tests with guardrails.
  • Build monitoring for model/data drift and business KPIs.
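The drift-monitoring responsibility above is commonly implemented with the Population Stability Index (PSI) over binned feature or score distributions. A minimal sketch (the 0.1/0.2 thresholds are a common rule of thumb, not from this posting):

```python
import math

def psi(expected_counts, actual_counts, eps=1e-6):
    """Population Stability Index between two binned distributions.
    Rule of thumb: < 0.1 stable, 0.1-0.2 moderate shift, > 0.2 investigate."""
    e_total, a_total = sum(expected_counts), sum(actual_counts)
    total = 0.0
    for e, a in zip(expected_counts, actual_counts):
        e_pct = max(e / e_total, eps)  # clamp to avoid log(0)
        a_pct = max(a / a_total, eps)
        total += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return total

# Identical distributions score 0; a reversed one trips the 0.2 threshold.
print(psi([100, 200, 300], [100, 200, 300]))        # 0.0
print(psi([100, 200, 300], [300, 200, 100]) > 0.2)  # True
```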


Ideal Candidate

  • 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
  • Proven, hands-on success in at least two (preferably 3–4) of the following:
      • Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
      • Fraud/risk detection (severe class imbalance, PR-AUC)
      • Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
      • Propensity models (payment/churn)
  • Programming: strong Python and SQL; solid git, Docker, CI/CD.
  • Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
  • ML breadth: recommender systems, NLP or user profiling, anomaly detection.
  • Communication: clear storytelling with data; can align stakeholders and drive decisions.
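NDCG, named in the recommender bullet above, scores a served ranking against the ideal ordering of the same items. A minimal sketch:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain over a ranked list of relevance grades."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg_at_k(ranked_relevances, k):
    """DCG of the served ranking divided by DCG of the ideal ordering."""
    ideal = sorted(ranked_relevances, reverse=True)
    ideal_dcg = dcg(ideal[:k])
    return dcg(ranked_relevances[:k]) / ideal_dcg if ideal_dcg else 0.0

# A perfectly ordered ranking scores 1.0; burying relevant items costs gain.
print(ndcg_at_k([3, 2, 1, 0], k=4))  # 1.0
print(ndcg_at_k([0, 2, 3, 1], k=4))  # < 1.0
```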



Read more
Global digital transformation solutions provider.

Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Chennai, Kochi (Cochin), Pune, Trivandrum, Thiruvananthapuram
5 - 7 yrs
₹10L - ₹25L / yr
Google Cloud Platform (GCP)
skill iconJenkins
CI/CD
skill iconDocker
skill iconKubernetes
+15 more

Job Description

We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.


Key Responsibilities:

  • Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
  • Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
  • Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
  • Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
  • Ensure compliance with security best practices and organizational policies across GCP environments.
  • Document processes, configurations, and architectural decisions to maintain operational transparency.
  • Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.


Mandatory Skills:

  • Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
  • Jenkins – Expertise in Declarative Pipeline creation and optimization.
  • CI/CD – Strong understanding of automated build, test, and deployment workflows.
  • Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
  • Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
  • Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks.
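A staple of the automation scripting above is wrapping flaky cloud API calls in retry-with-exponential-backoff. A minimal, library-free Python sketch (the transient-error class and delays are illustrative):

```python
import time

def with_retries(operation, attempts=4, base_delay=0.5, retry_on=(TimeoutError,)):
    """Run `operation`, retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return operation()
        except retry_on:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure
            time.sleep(base_delay * 2 ** attempt)  # 0.5s, 1s, 2s, ...

# A call that fails twice before succeeding is absorbed by the retries.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise TimeoutError("transient")
    return "ok"

print(with_retries(flaky, base_delay=0.01))  # ok
print(state["calls"])  # 3
```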


Preferred Skills:

  • Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
  • Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
  • Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
  • GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.


Skills

GCP, Jenkins, CI/CD, AWS



 

******

Notice period: 0 to 15 days only

Location – Pune, Trivandrum, Kochi, Chennai

Read more
Indore
2 - 6 yrs
₹4L - ₹8L / yr
skill iconMongoDB
skill iconReact.js
skill iconNodeJS (Node.js)
skill iconExpress
DevOps
+2 more

Key Responsibilities & Skills


Strong hands-on experience in React.js, Node.js, Express.js, MongoDB

Ability to lead and mentor a development team

Project ownership – sprint planning, code reviews, task allocation

Excellent communication skills for client interactions

Strong decision-making & problem-solving abilities


Nice-to-Have (Bonus Skills)


Experience in system architecture design

Deployment knowledge – AWS / DigitalOcean / Cloud

Understanding of CI/CD pipelines & best coding practices


Why Join InfoSparkles?


Lead from Day One

Work on modern & challenging tech projects

Excellent career growth in a leadership position

Read more
Arcitech
Navi Mumbai
5 - 7 yrs
₹12L - ₹14L / yr
Cyber Security
VAPT
Cloud Computing
CI/CD
skill iconJenkins
+4 more

Senior DevSecOps Engineer (Cybersecurity & VAPT) - Arcitech AI



Arcitech AI, located in Mumbai's bustling Lower Parel, is a trailblazer in software and IT, specializing in software development, AI, mobile apps, and integrative solutions. Committed to excellence and innovation, Arcitech AI offers incredible growth opportunities for team members. Enjoy unique perks like weekends off and a provident fund. Our vibrant culture is friendly and cooperative, fostering a dynamic work environment that inspires creativity and forward-thinking. Join us to shape the future of technology.

Full-time

Navi Mumbai, Maharashtra, India

5+ Years Experience

₹12,00,000 – ₹14,00,000

Job Title: Senior DevSecOps Engineer (Cybersecurity & VAPT)

Location: Vashi, Navi Mumbai (On-site)

Shift: 10:00 AM - 7:00 PM

Experience: 5+ years

Salary : INR 12,00,000 - 14,00,000


Job Summary

Hiring a Senior DevSecOps Engineer with strong cloud, CI/CD, and automation skills and hands-on experience in Cybersecurity & VAPT to manage deployments, secure infrastructure, and support DevSecOps initiatives.


Key Responsibilities

Cloud & Infrastructure

  • Manage deployments on AWS/Azure
  • Maintain Linux servers & cloud environments
  • Ensure uptime, performance, and scalability


CI/CD & Automation

  • Build and optimize pipelines (Jenkins, GitHub Actions, GitLab CI/CD)
  • Automate tasks using Bash/Python
  • Implement IaC (Terraform/CloudFormation)


Containerization

  • Build and run Docker containers
  • Work with basic Kubernetes concepts


Cybersecurity & VAPT

  • Perform Vulnerability Assessment & Penetration Testing
  • Identify, track, and mitigate security vulnerabilities
  • Implement hardening and support DevSecOps practices
  • Assist with firewall/security policy management
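Tracking and mitigating vulnerabilities, as above, usually starts with triage: higher-severity and internet-exposed findings get remediated first. A minimal prioritization sketch (the ordering rule is illustrative, not CVSS):

```python
def triage(findings):
    """Order VAPT findings: higher severity first, internet-exposed before internal."""
    severity_rank = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    return sorted(findings,
                  key=lambda f: (severity_rank[f["severity"]],
                                 not f["internet_facing"]))

findings = [
    {"id": "VULN-2", "severity": "high", "internet_facing": False},
    {"id": "VULN-3", "severity": "critical", "internet_facing": True},
    {"id": "VULN-1", "severity": "high", "internet_facing": True},
]
print([f["id"] for f in triage(findings)])  # ['VULN-3', 'VULN-1', 'VULN-2']
```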


Monitoring & Troubleshooting

  • Use ELK, Prometheus, Grafana, CloudWatch
  • Resolve cloud, deployment, and infra issues


Cross-Team Collaboration

  • Work with Dev, QA, and Security for secure releases
  • Maintain documentation and best practices


Required Skills

  • AWS/Azure, Linux, Docker
  • CI/CD tools: Jenkins, GitHub Actions, GitLab
  • Terraform / IaC
  • VAPT experience + understanding of OWASP, cloud security
  • Bash/Python scripting
  • Monitoring tools (ELK, Prometheus, Grafana)
  • Strong troubleshooting & communication
Read more