
50+ CI/CD Jobs in India

Apply to 50+ CI/CD Jobs on CutShort.io. Find your next job, effortlessly. Browse CI/CD Jobs and apply today!

CyberWarFare Labs
Posted by Yash Bharadwaj
Bengaluru (Bangalore)
1 - 3 yrs
₹4L - ₹6L / yr
Amazon Web Services (AWS)
Microsoft Windows Azure
Google Cloud Platform (GCP)
Docker
CI/CD


Job Overview:

We are looking for a full-time Infrastructure & DevOps Engineer to support and enhance our cloud, server, and network operations. The role involves managing virtualization platforms, container environments, automation tools, and CI/CD workflows while ensuring smooth, secure, and reliable infrastructure performance. The ideal candidate should be proactive, technically strong, and capable of working collaboratively across teams.


Qualifications and Requirements

  • Bachelor’s/Master’s degree in Computer Science, Engineering, or related field (B.E/B.Tech/BCA/MCA/M.Tech).
  • Strong understanding of cloud platforms (AWS, Azure, GCP), including core services and IT infrastructure concepts.
  • Hands-on experience with virtualization tools and concepts, including vCenter, hypervisors, nested virtualization, and bare-metal servers.
  • Practical knowledge of Linux and Windows servers, including cron jobs and essential Linux commands.
  • Experience working with Docker, Kubernetes, and CI/CD pipelines.
  • Strong understanding of Terraform and Ansible for infrastructure automation.
  • Scripting proficiency in Python and Bash (PowerShell optional).
  • Networking fundamentals (IP, routing, subnetting, LAN/WAN/WLAN).
  • Experience with firewalls, basic security concepts, and tools like pfSense.
  • Familiarity with Git/GitHub for version control and team collaboration.
  • Ability to perform API testing using cURL and Postman.
  • Strong understanding of the application deployment lifecycle and basic application deployment processes.
  • Good problem-solving, analytical thinking, and documentation skills.
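The networking fundamentals called out above (IP addressing, subnetting) can be sanity-checked with nothing beyond Python's standard library; a minimal sketch, not tied to any specific tooling in this role:

```python
import ipaddress

def hosts_in_subnet(cidr: str) -> int:
    """Number of usable host addresses in an IPv4 subnet."""
    net = ipaddress.ip_network(cidr, strict=False)
    # /31 and /32 have no network/broadcast pair to subtract.
    return net.num_addresses - 2 if net.prefixlen < 31 else net.num_addresses

def same_subnet(ip_a: str, ip_b: str, cidr: str) -> bool:
    """True if both addresses fall inside the given subnet."""
    net = ipaddress.ip_network(cidr, strict=False)
    return ipaddress.ip_address(ip_a) in net and ipaddress.ip_address(ip_b) in net

print(hosts_in_subnet("10.0.0.0/24"))                      # 254
print(same_subnet("10.0.0.5", "10.0.1.5", "10.0.0.0/24"))  # False: different /24s
```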


Roles and Responsibilities

  • Manage and maintain Linux/Windows servers, virtualization environments, and cloud infrastructure across AWS/Azure/GCP.
  • Use Terraform and Ansible to provision, automate, and manage infrastructure components.
  • Support application deployment lifecycle—from build and testing to release and rollout.
  • Deploy and maintain Kubernetes clusters and containerized workloads using Docker.
  • Develop, enhance, and troubleshoot CI/CD pipelines and integrate DevSecOps practices.
  • Write automation scripts using Python/Bash to optimize recurring tasks.
  • Conduct API testing using curl and Postman to validate integrations and service functionality.
  • Configure and monitor firewalls including pfSense for secure access control.
  • Troubleshoot network, server, and application issues using tools like Wireshark, ping, traceroute, and SNMP.
  • Maintain Git/GitHub repos, manage branching strategies, and participate in code reviews.
  • Prepare clear, detailed documentation including infrastructure diagrams, workflows, SOPs, and configuration records.
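Much of the Python/Bash automation described above reduces to parsing tool output and alerting on thresholds. A minimal, self-contained sketch of the idea; the `df -h`-style output here is a hard-coded sample, and the threshold would come from your own monitoring policy:

```python
def filesystems_over_threshold(df_output: str, limit_pct: int = 80) -> list[str]:
    """Parse `df -h`-style output and return mount points at or above limit_pct usage."""
    flagged = []
    for line in df_output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        use_pct = int(fields[4].rstrip("%"))
        if use_pct >= limit_pct:
            flagged.append(fields[5])
    return flagged

SAMPLE = """\
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        50G   45G    5G  90% /
/dev/sdb1       200G   80G  120G  40% /data
tmpfs            16G   15G    1G  94% /tmp
"""
print(filesystems_over_threshold(SAMPLE))  # ['/', '/tmp']
```

In a real cron job the sample string would be replaced by the output of `subprocess.run(["df", "-h"], ...)`, with the flagged mounts fed into an alerting channel.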


E-Commerce Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹30L - ₹50L / yr
Security Information and Event Management (SIEM)
Information security governance
ISO/IEC 27001:2005
Systems Development Life Cycle (SDLC)
Software Development

SENIOR INFORMATION SECURITY ENGINEER (DEVSECOPS)

Key Skills: Software Development Life Cycle (SDLC), CI/CD

About Company: Consumer Internet / E-Commerce

Company Size: Mid-Sized

Experience Required: 6 - 10 years

Working Days: 5 days/week

Office Location: Bengaluru [Karnataka]


Review Criteria:

Mandatory:

  • Strong DevSecOps profile
  • Must have 5+ years of hands-on experience in Information Security, with a primary focus on cloud security across AWS, Azure, and GCP environments.
  • Must have strong practical experience working with Cloud Security Posture Management (CSPM) tools such as Prisma Cloud, Wiz, or Orca, along with SIEM/IDS/IPS platforms.
  • Must have proven experience in securing Kubernetes and containerized environments, including image security, runtime protection, RBAC, and network policies.
  • Must have hands-on experience integrating security within CI/CD pipelines using tools such as Snyk, GitHub Advanced Security, or equivalent security scanning solutions.
  • Must have a solid understanding of core security domains, including network security, encryption, identity and access management, key management, and security governance, as well as cloud-native security services like GuardDuty, Azure Security Center, etc.
  • Must have practical experience with Application Security Testing tools, including SAST, DAST, and SCA, in real production environments.
  • Must have hands-on experience with security monitoring, incident response, alert investigation, root-cause analysis (RCA), and managing VAPT/penetration testing activities.
  • Must have experience securing infrastructure-as-code and cloud deployments using Terraform, CloudFormation, ARM, Docker, and Kubernetes.
  • Background in B2B SaaS product companies.
  • Must have working knowledge of globally recognized security frameworks and standards such as ISO 27001, NIST, and CIS, with exposure to SOC2, GDPR, or HIPAA compliance environments.


Preferred:

  • Experience with DevSecOps automation, security-as-code, and policy-as-code implementations
  • Exposure to threat intelligence platforms, cloud security monitoring, and proactive threat detection methodologies, including EDR / DLP or vulnerability management tools
  • Must demonstrate strong ownership mindset, proactive security-first thinking, and ability to communicate risks in clear business language


Roles & Responsibilities:

We are looking for a Senior Information Security Engineer who can help protect our cloud infrastructure, applications, and data while enabling teams to move fast and build securely.


This role sits deep within our engineering ecosystem. You’ll embed security into how we design, build, deploy, and operate systems—working closely with Cloud, Platform, and Application Engineering teams. You’ll balance proactive security design with hands-on incident response, and help shape a strong, security-first culture across the organization.


If you enjoy solving real-world security problems, working close to systems and code, and influencing how teams build securely at scale, this role is for you.


What You’ll Do-

Cloud & Infrastructure Security:

  • Design, implement, and operate cloud-native security controls across AWS, Azure, GCP, and Oracle.
  • Strengthen IAM, network security, and cloud posture using services like GuardDuty, Azure Security Center and others.
  • Partner with platform teams to secure VPCs, security groups, and cloud access patterns.


Application & DevSecOps Security:

  • Embed security into the SDLC through threat modeling, secure code reviews, and security-by-design practices.
  • Integrate SAST, DAST, and SCA tools into CI/CD pipelines.
  • Secure infrastructure-as-code and containerized workloads using Terraform, CloudFormation, ARM, Docker, and Kubernetes.
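Securing infrastructure-as-code, as described above, often begins with simple policy checks over parsed configuration. A hedged Python sketch of the idea; the rule dictionaries below are a simplified, hypothetical stand-in for real Terraform plan or cloud-provider API output:

```python
SENSITIVE_PORTS = {22, 3389}  # SSH and RDP

def open_to_world(security_rules: list[dict]) -> list[dict]:
    """Flag ingress rules that expose sensitive ports to 0.0.0.0/0."""
    findings = []
    for rule in security_rules:
        if rule.get("direction") != "ingress":
            continue  # egress rules are out of scope for this check
        if "0.0.0.0/0" not in rule.get("cidr_blocks", []):
            continue  # only world-open rules are flagged
        ports = set(range(rule["from_port"], rule["to_port"] + 1))
        if ports & SENSITIVE_PORTS:
            findings.append(rule)
    return findings

rules = [
    {"direction": "ingress", "from_port": 22, "to_port": 22, "cidr_blocks": ["0.0.0.0/0"]},
    {"direction": "ingress", "from_port": 443, "to_port": 443, "cidr_blocks": ["0.0.0.0/0"]},
    {"direction": "egress", "from_port": 80, "to_port": 80, "cidr_blocks": ["0.0.0.0/0"]},
]
print(len(open_to_world(rules)))  # 1: only the world-open SSH rule
```

Production policy-as-code tools (OPA, Checkov, and the like) generalize exactly this pattern: structured config in, findings out.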


Security Monitoring & Incident Response:

  • Monitor security alerts and investigate potential threats across cloud and application layers.
  • Lead or support incident response efforts, root-cause analysis, and corrective actions.
  • Plan and execute VAPT and penetration testing engagements (internal and external), track remediation, and validate fixes.
  • Conduct red teaming activities and tabletop exercises to test detection, response readiness, and cross-team coordination.
  • Continuously improve detection, response, and testing maturity.


Security Tools & Platforms:

  • Manage and optimize security tooling including firewalls, SIEM, EDR, DLP, IDS/IPS, CSPM, and vulnerability management platforms.
  • Ensure tools are well-integrated, actionable, and aligned with operational needs.


Compliance, Governance & Awareness:

  • Support compliance with industry standards and frameworks such as SOC2, HIPAA, ISO 27001, NIST, CIS, and GDPR.
  • Promote secure engineering practices through training, documentation, and ongoing awareness programs.
  • Act as a trusted security advisor to engineering and product teams.


Continuous Improvement:

  • Stay ahead of emerging threats, cloud vulnerabilities, and evolving security best practices.
  • Continuously raise the bar on the company's security posture through automation and process improvement.


Endpoint Security (Secondary Scope):

  • Provide guidance on endpoint security tooling such as SentinelOne and Microsoft Defender when required.


Ideal Candidate:

  • Strong hands-on experience in cloud security across AWS and Azure.
  • Practical exposure to CSPM tools (e.g., Prisma Cloud, Wiz, Orca) and SIEM / IDS / IPS platforms.
  • Experience securing containerized and Kubernetes-based environments.
  • Familiarity with CI/CD security integrations (e.g., Snyk, GitHub Advanced Security, or similar).
  • Solid understanding of network security, encryption, identity, and access management.
  • Experience with application security testing tools (SAST, DAST, SCA).
  • Working knowledge of security frameworks and standards such as ISO 27001, NIST, and CIS.
  • Strong analytical, troubleshooting, and problem-solving skills.


Nice to Have:

  • Experience with DevSecOps automation and security-as-code practices.
  • Exposure to threat intelligence and cloud security monitoring solutions.
  • Familiarity with incident response frameworks and forensic analysis.
  • Security certifications such as CISSP, CISM, CCSP, or CompTIA Security+.


Perks, Benefits and Work Culture:

A wholesome opportunity in a fast-paced environment that will let you juggle multiple concepts while maintaining quality, interact and share your ideas, and keep learning on the job. Work with a team of highly talented young professionals and enjoy the comprehensive benefits the company offers.

Hiver

Posted by Bisman Gill
Bengaluru (Bangalore)
4 - 7 yrs
Up to ₹22L / yr (varies)
Selenium
Manual testing
Test Automation (QA)
Software Testing (QA)
CI/CD

Hiver offers teams the simplest way to provide outstanding and personalized customer service. As a customer service solution built on Gmail, Hiver is intuitive, super easy to learn, and delightful to use. Hiver is used by thousands of teams at some of the best-known companies in the world to provide attentive, empathetic, and human service to their customers at scale. We’re a top-rated product on G2 and rank very highly on customer satisfaction. 


At Hiver, we obsess about being world-class at everything we do. Our product is loved by our customers, our content engages a very wide audience, our customer service is one of the highest rated in the industry, and our sales team is as driven about doing right by our customers as they are by hitting their numbers. We’re profitably run and are backed by notable investors. K1 Capital led our most recent round of $27 million. Before that, we raised from Kalaari Capital, Kae Capital, and Citrix Startup Accelerator. 


Opportunity:


We are looking for a Senior QA Engineer who will play a critical role in ensuring high product quality, reducing release risks, and scaling automation across teams. This is a hands-on, high-ownership individual contributor role where you will influence quality practices, collaborate deeply with engineers, and act as a quality champion within your product area. This role goes beyond test execution—you will help define how quality is built into the system while still being deeply involved in automation, debugging, and release readiness.


What You Will Be Working On?


Quality Ownership & Execution:

  • Own end-to-end quality for assigned product areas/features.
  • Design, review, and execute comprehensive test strategies (functional, regression, integration).
  • Lead feature-level release sign-offs in collaboration with engineering and product.
  • Actively debug failures across UI, backend, and APIs to identify root causes.
  • Work closely with Customer Support to reproduce, analyze, and prevent customer-reported issues.

Automation & Engineering Excellence:

  • Build, extend, and maintain automated test suites for UI, API, and backend services.
  • Improve test reliability by identifying and fixing flaky tests.
  • Drive reduction in manual regression time through automation ROI.
  • Integrate automated tests into CI/CD pipelines and ensure fast feedback cycles.
  • Collaborate with developers to shift quality left (testability, better coverage, early validation).
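Flaky-test triage, mentioned above, often starts by separating transient failures from genuine ones. A minimal Python retry decorator as a diagnostic sketch; in practice, persistent flakes should be fixed at the root rather than masked by retries:

```python
import functools

def retry(times: int = 3, exceptions: tuple = (AssertionError,)):
    """Re-run a test up to `times` attempts; re-raise the last failure."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_error = None
            for _attempt in range(times):
                try:
                    return fn(*args, **kwargs)
                except exceptions as exc:
                    last_error = exc
            raise last_error
        return wrapper
    return decorator

calls = {"n": 0}

@retry(times=3)
def sometimes_flaky():
    calls["n"] += 1
    # Simulates a transient failure on the first attempt only.
    assert calls["n"] >= 2, "transient failure"
    return "passed"

print(sometimes_flaky(), "after", calls["n"], "attempts")  # passed after 2 attempts
```

Logging which tests needed retries (rather than silently swallowing them) is what turns this from a band-aid into a flakiness-detection signal.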


What we are looking for?


  • 4+ years of experience in software quality engineering.
  • Strong ability to design high-quality test scenarios from ambiguous requirements.
  • Hands-on experience in test automation using tools like Selenium, Playwright, Webdriver.io, or similar.
  • Proficiency in at least one programming language (Python, JavaScript, Java, etc.).
  • Solid experience with API testing, backend validation, and debugging.
  • Working knowledge of CI/CD pipelines and version control systems (Git).
  • Strong RCA skills with the ability to prevent repeat issues.
  • Excellent communication skills and confidence to challenge assumptions.
  • High ownership mindset with strong attention to detail and quality standards.


Good to Have skills?


  • Experience testing microservices-based architectures.
  • Familiarity with performance testing tools like JMeter.
  • Experience with Linux/Unix environments.
  • Exposure to test metrics such as bug leakage, automation coverage, and regression effectiveness.
  • Experience mentoring junior QA engineers 


Hiver

Posted by Bisman Gill
HSR Layout, BLR
2 - 4 yrs
Up to ₹14L / yr (varies)
Selenium
Manual testing
Automation Testing
Software Testing (QA)
CI/CD



Opportunity:

We are looking for a QA Engineer whose key goals would be to drive software quality and reduce risks in our releases. This involves both functional testing and building automated tests for our CI systems. Expect lots of challenges and high levels of ownership and autonomy.

  • Come up with testing procedures to validate functional, system, and performance requirements for new features.
  • Ensure the quality of releases by running the test cases and reporting on them.
  • Write and maintain automated test suites for functional and performance testing.
  • Keep the manual test cases updated.
  • Participate in product feature design and specification with Product Managers, UX engineers, and developers.


What We are looking for?


  • 2+ years of total experience in a QA role.
  • Should be able to write quality test cases on the problem statement.
  • 1+ years of experience in QA automation with tools such as Selenium, Playwright, or Webdriver.io.
  • Knowledge of at least one high-level programming language, such as Python, JavaScript, or Java.
  • Hands-on experience with Code version control systems like Git.
  • Work with the Customer Support team to reproduce customer problems and provide solutions to customers.


Good to have skills?


  • Experience with RESTful API testing tools like Postman and performance testing tools like JMeter.
  • Hands-on experience with Build and Continuous Integration (CI) systems like Jenkins.
  • Experience working with Linux/Unix platforms and security aspects of testing is a plus.



Talent Pro
Bengaluru (Bangalore)
6 - 10 yrs
₹30L - ₹60L / yr
Software Development Life Cycle (SDLC)
CI/CD

Mandatory (Experience 1) – Must have 5+ years of hands-on experience in Information Security, with a primary focus on cloud security across AWS, Azure, and GCP environments.

Mandatory (Experience 2) – Must have strong practical experience working with Cloud Security Posture Management (CSPM) tools such as Prisma Cloud, Wiz, or Orca, along with SIEM/IDS/IPS platforms.

Mandatory (Experience 3) – Must have proven experience in securing Kubernetes and containerized environments, including image security, runtime protection, RBAC, and network policies.

Mandatory (Experience 4) – Must have hands-on experience integrating security within CI/CD pipelines using tools such as Snyk, GitHub Advanced Security, or equivalent security scanning solutions.

Mandatory (Experience 5) – Must have a solid understanding of core security domains, including network security, encryption, identity and access management, key management, and security governance, as well as cloud-native security services like GuardDuty, Azure Security Center, etc.

Mandatory (Experience 6) – Must have practical experience with Application Security Testing tools including SAST, DAST, and SCA in real production environments

Mandatory (Experience 7) – Must have hands-on experience with security monitoring, incident response, alert investigation, root-cause analysis (RCA), and managing VAPT / penetration testing activities

Mandatory (Experience 8) – Must have experience securing infrastructure-as-code and cloud deployments using Terraform, CloudFormation, ARM, Docker, and Kubernetes

Mandatory (Core Skill) – Must have working knowledge of globally recognized security frameworks and standards such as ISO 27001, NIST, and CIS with exposure to SOC2, GDPR, or HIPAA compliance environments

Foyforyou
Posted by Hardika Bhansali
Mumbai
2 - 7 yrs
₹3L - ₹15L / yr
Swift
User Interface (UI) Design
CI/CD
Git
RESTful APIs

iOS Developer – FOY (FoyForYou.com)

Function: Software Engineering → Mobile Development, Backend Collaboration

Skills: Swift, SwiftUI/UIKit, MVVM/MVP, REST APIs, Xcode, CI/CD

About FOY

FOY (FoyForYou.com) is one of India’s fastest-growing beauty & wellness destinations. We offer customers a curated range of 100% authentic products, trusted brands, and a frictionless shopping experience. Our mission is to make beauty effortless, personal, and accessible for every Indian. As we scale fast and build a mobile-first commerce ecosystem, we're strengthening our engineering team with passionate builders who care deeply about user experience.

Job Description:

We’re looking for an iOS Developer (4–8 years) who wants to craft deeply polished mobile experiences and play an active role in shaping FOY’s product direction—not just implement tickets. You'll work on performance, app architecture, offline handling, animations, and end-to-end features that impact millions of users.

Responsibilities

● Work closely with product & design to influence feature strategy and user experience on iOS.

● Build a fast, stable, and intuitive FOY iOS app using Swift, SwiftUI, UIKit, and modern architecture patterns.

● Optimize the app for performance, memory usage, network efficiency, and battery consumption.

● Integrate cleanly with FOY’s backend APIs and ensure reliability across devices.

● Own the delivery pipeline with unit tests, automation, continuous integration, and code reviews.

● Diagnose and solve issues using crash logs, performance tools, and debugging tools.

● Collaborate cross-functionally with Android, backend, QA, and product teams to deliver a seamless commerce experience.

Requirements:

● 4–8 years of experience building and shipping iOS apps.

● Proven experience shipping at least one iOS app—professionally or via a significant side project.

● Solid expertise in Swift, SwiftUI/UIKit, and mobile architecture patterns (MVVM/MVP/Clean Architecture).

● Strong understanding of networking, REST APIs, async programming (Combine, async/await), and local data caching.

● Ability to debug production issues and trace them across client–server boundaries.

● A strong sense of ownership, attention to detail, and user-centric thinking.

● Passion for solving meaningful user problems, not just building features.
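The structured async/await style named in the requirements (Combine, async/await in Swift) follows the same fan-out/fan-in shape in most languages. A minimal Python asyncio sketch of the pattern; `fetch_product` is a hypothetical stand-in for a real network call:

```python
import asyncio

async def fetch_product(product_id: int) -> dict:
    """Stand-in for a network call; real code would await an HTTP client."""
    await asyncio.sleep(0.01)
    return {"id": product_id, "name": f"product-{product_id}"}

async def load_catalog(ids: list[int]) -> list[dict]:
    # Fan out the requests concurrently, then gather results in input order.
    return await asyncio.gather(*(fetch_product(i) for i in ids))

catalog = asyncio.run(load_catalog([1, 2, 3]))
print([p["name"] for p in catalog])  # ['product-1', 'product-2', 'product-3']
```

In Swift the equivalent shape is a task group (or `async let` bindings) awaiting several child tasks before assembling the result.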

Bonus Points

● A GitHub/portfolio with code samples or open-source contributions.

● Experience with fast-moving consumer apps, e-commerce, or high-scale mobile applications.

● Understanding of advanced topics like custom rendering, animations, or performance profiling.

Why Build Your Career at FOY?

At FOY, we’re transforming how India shops for beauty—and building that future requires creativity, ownership, and speed.

We hire for 3 core qualities:

1. Rockstar Team Players: Your work will directly impact business, growth, and customer experience.

2. Ownership With Passion: You’ll be given important projects to drive independently—minimal hierarchy, maximum impact.

3. Big Dreamers: We’re scaling quickly and building boldly. If you dream big and execute fast, you’ll thrive here.

Join Us

If bringing world-class iOS experiences to millions excites you, we’d love to meet you.

Apply now and be part of FOY’s journey to redefine beauty commerce in India.

Ekloud INC
Remote only
7 - 15 yrs
₹6L - ₹25L / yr
.NET
Fullstack Developer
React.js
cloud platforms
Windows Azure

Hiring: .NET Full Stack Developer with ReactJS

Designation: Team Lead

Location: Bidadi, Bengaluru (Karnataka), Hybrid mode

Relevant Experience: 7-10 years

Preferred Qualifications

• Bachelor's in CSE with a minimum of 7-10 years of relevant experience

• Exposure to cloud platforms (Azure) and API Gateway.

• Knowledge of microservices architecture.

• Experience with unit testing frameworks (xUnit, NUnit).

Required Skills & Qualifications

• Strong hands-on experience in C#, the .NET framework (.NET Core, .NET 5+), and API development.

• Experience with RESTful API design and development.

• Strong experience on ReactJS for front-end development.

• Expertise in SQL Server (queries, stored procedures, performance tuning).

• Experience in system integration, especially with SAP.

• Ability to manage and mentor a team effectively.

• Strong requirement gathering and client communication skills.

• Familiarity with Git, CI/CD pipelines, and Agile methodologies.

Role Overview

• Design, develop, and maintain scalable backend services using .NET technologies.

• Work on ReactJS components as well as UI integration and ensure seamless communication between front-end and back-end.

• Write clean, efficient, and well-documented code.

• Lead and mentor a team of developers/Testing, ensuring adherence to best practices and timely delivery.

• Good exposure to Agile and scrum methodology

• Design and implement secure RESTful APIs using .NET Core.

• Apply best practices for authentication, authorization, and data security.

• Develop and maintain integrations with multiple systems, including SAP.

• Design and optimize SQL Server queries, stored procedures, and schemas.

• Gather requirements from clients and translate them into technical specifications.

• Implement Excel file uploaders and data processing workflows.

• Coordinate with stakeholders, manage timelines, and ensure quality deliverables.

• Troubleshoot and debug issues, ensuring smooth operation of backend systems.

Bookxpert Private Limited
Posted by Abhijith Neeli
Guntur, Hyderabad
3 - 5 yrs
₹5L - ₹10L / yr
React.js
JavaScript
HTML/CSS
RESTful APIs
UI/UX


About the Role:

We are seeking a skilled and enthusiastic React.js Web Developer to join our technology team. The ideal candidate will be responsible for building high-quality user interfaces, enhancing user experience, and developing efficient web applications.


Key Responsibilities:


1. Develop responsive, interactive, and high-performing web applications using React.js, JavaScript/TypeScript, and modern front-end libraries.

2. Translate UI/UX wireframes into high-quality code and reusable components.

3. Optimize components for maximum performance across various devices and browsers.

4. Work with the team to design, structure, and maintain scalable front-end application architecture.

5. Integrate REST APIs, third-party services, and internal tools into the application.

6. Manage application state using tools such as Redux, Context API, or other state management libraries.

7. Write clean, readable, and well-documented code following best industry practices.

8. Conduct thorough debugging, troubleshooting, and performance enhancements.

9. Assist in deployment processes and ensure the application works smoothly in production.

10. Familiarity with CI/CD pipelines is an added advantage.

11. Collaborate with the team on planning, development, and code reviews.

12. Stay updated with the latest technologies and development best practices.


Required Skills & Qualifications:


  • Bachelor's degree in Computer Science, IT, or related field (or equivalent experience).
  • 2 - 3+ years of experience in React JS development.
  • Strong proficiency in JavaScript (ES6+), HTML5, CSS3.
  • Hands-on experience with React Hooks, Redux, Context API, and component-based architecture.
  • Good understanding of REST APIs and asynchronous request handling.
  • Experience with build tools like Webpack, Babel, Vite, etc.
  • Familiarity with Git/GitHub and version control workflows.
  • Knowledge of responsive design and cross-browser compatibility.
  • Strong problem-solving and analytical abilities.
  • Ability to work independently as well as in a team environment.
  • Time management skills and ability to meet deadlines.
  • A positive attitude and willingness to learn new technologies.


Why Join Us?


  • Competitive salary, professional development opportunities, and training.
  • Opportunity to work with cutting-edge technologies in a fast-paced environment.
  • A supportive environment that encourages learning and growth.
  • Collaborative team culture focused on creativity and continuous improvement.


Procedure

Posted by Adithya K
Remote only
5 - 10 yrs
₹40L - ₹60L / yr
Software Development
Amazon Web Services (AWS)
Python
TypeScript
PostgreSQL

Procedure is hiring for Drover.


This is not a DevOps/SRE/cloud-migration role — this is a hands-on backend engineering and architecture role where you build the platform powering our hardware at scale.


About Drover

Ranching is getting harder. Increased labor costs and a volatile climate are placing mounting pressure to provide for a growing population. Drover is empowering ranchers to efficiently and sustainably feed the world by making it cheaper and easier to manage livestock, unlock productivity gains, and reduce carbon footprint with rotational grazing. Not only is this a $46B opportunity, you'll be working on a climate solution with the potential for real, meaningful impact.


We use patent-pending low-voltage electrical muscle stimulation (EMS) to steer and contain cows, replacing the need for physical fences or electric shock. We are building something that has never been done before, and we have hundreds of ranches on our waitlist.


Drover is founded by Callum Taylor (ex-Harvard), who comes from 5 generations of ranching, and Samuel Aubin, both of whom grew up in Australian ranching towns and have an intricate understanding of the problem space. We are well-funded and supported by Workshop Ventures, a VC firm with experience in building unicorn IoT companies.


We're looking to assemble a team of exceptional talent with a high eagerness to dive headfirst into understanding the challenges and opportunities within ranching.


About The Role

As our founding cloud engineer, you will be responsible for building and scaling the infrastructure that powers our IoT platform, connecting thousands of devices across ranches nationwide.


Because we are an early-stage startup, you will have high levels of ownership in what you build. You will play a pivotal part in architecting our cloud infrastructure, building robust APIs, and ensuring our systems can scale reliably. We are looking for someone who is excited about solving complex technical challenges at the intersection of IoT, agriculture, and cloud computing.


What You'll Do

  • Develop Drover IoT cloud architecture from the ground up (it’s a greenfield project)
  • Design and implement services to support wearable devices, mobile app, and backend API
  • Implement data processing and storage pipelines
  • Create and maintain Infrastructure-as-Code
  • Support the engineering team across all aspects of early-stage development -- after all, this is a startup
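Data processing pipelines of the kind described above typically start with simple aggregation over device telemetry. A hedged Python sketch of that first step; the field names (`device_id`, `battery_pct`) are hypothetical, not Drover's actual schema:

```python
from collections import defaultdict

def average_by_device(readings: list[dict]) -> dict[str, float]:
    """Aggregate raw telemetry into a per-device average, dropping bad samples."""
    grouped = defaultdict(list)
    for reading in readings:
        value = reading.get("battery_pct")
        if value is None or not (0 <= value <= 100):
            continue  # discard missing or out-of-range samples
        grouped[reading["device_id"]].append(value)
    return {device: round(sum(vals) / len(vals), 1) for device, vals in grouped.items()}

readings = [
    {"device_id": "collar-1", "battery_pct": 80},
    {"device_id": "collar-1", "battery_pct": 70},
    {"device_id": "collar-2", "battery_pct": 120},  # sensor glitch, ignored
    {"device_id": "collar-2", "battery_pct": 55},
]
print(average_by_device(readings))  # {'collar-1': 75.0, 'collar-2': 55.0}
```

In an event-driven AWS design, logic like this would sit in a consumer (e.g., a Lambda behind IoT Core or Kinesis) before results land in the datastore.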


Requirements

  • 5+ years of experience developing cloud architecture on AWS
  • In-depth understanding of various AWS services, especially those related to IoT
  • Expertise in cloud-hosted, event-driven, serverless architectures
  • Expertise in programming languages suitable for AWS micro-services (e.g., TypeScript, Python)
  • Experience with networking and socket programming
  • Experience with Kubernetes or similar orchestration platforms
  • Experience with Infrastructure-as-Code tools (e.g., Terraform, AWS CDK)
  • Familiarity with relational databases (PostgreSQL)
  • Familiarity with Continuous Integration and Continuous Deployment (CI/CD)


Nice To Have

  • Bachelor’s or Master’s degree in Computer Science, Software Engineering, Electrical Engineering, or a related field


Tarento Group

Posted by Reshika Mendiratta
Bengaluru (Bangalore)
8+ yrs
Up to ₹30L / yr (varies)
Java
Spring Boot
Microservices
Windows Azure
RESTful APIs

About Tarento:

 

Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions.

 

We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you’ll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.


Job Summary:

We are seeking a highly skilled and self-driven Senior Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.


Key Responsibilities:

  • Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
  • Implement and maintain RESTful APIs, ensuring high performance and scalability.
  • Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
  • Develop and manage Docker containers, enabling efficient development and deployment pipelines.
  • Integrate messaging services like Apache Kafka into microservice architectures.
  • Design and maintain data models using PostgreSQL or other SQL databases.
  • Implement unit testing using JUnit and mocking frameworks to ensure code quality.
  • Develop and execute API automation tests using Cucumber or similar tools.
  • Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
  • Work with Kubernetes for orchestrating containerized services.
  • Utilize Couchbase or similar NoSQL technologies when necessary.
  • Participate in code reviews, design discussions, and contribute to best practices and standards.


Required Skills & Qualifications:

  • Strong experience in Java (11 or above) and Spring Boot framework.
  • Solid understanding of microservices architecture and deployment on Azure.
  • Hands-on experience with Docker, and exposure to Kubernetes.
  • Proficiency in Kafka, with real-world project experience.
  • Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
  • Experience in writing unit tests using JUnit and mocking tools.
  • Experience with Cucumber or similar frameworks for API automation testing.
  • Exposure to CI/CD tools, DevOps processes, and Git-based workflows.


Nice to Have:

  • Azure certifications (e.g., Azure Developer Associate)
  • Familiarity with Couchbase or other NoSQL databases.
  • Familiarity with other cloud providers (AWS, GCP)
  • Knowledge of observability tools (Prometheus, Grafana, ELK)


Soft Skills:

  • Strong problem-solving and analytical skills.
  • Excellent verbal and written communication.
  • Ability to work in an agile environment and contribute to continuous improvement.


Why Join Us:

  • Work on cutting-edge microservice architectures
  • Strong learning and development culture
  • Opportunity to innovate and influence technical decisions
  • Collaborative and inclusive work environment
Read more
Industrial Automation

Industrial Automation

Agency job
via Michael Page by Pramod P
Bengaluru (Bangalore), Bommasandra Industrial Area
8 - 13 yrs
₹20L - ₹44L / yr
skill iconPython
skill iconC++
skill iconRust
gitlab
DevOps
+4 more

Employment Mode: Full-Time and Permanent

Working Location: Bommasandra Industrial Area, Hosur Main Road, Bangalore

Working Days: 5 days

Working Model: Hybrid - 3 days WFO and 2 days Home


Position Overview

As the Lead Software Engineer in our Research & Innovation team, you’ll play a strategic role in establishing and driving the technical vision for industrial AI solutions. Working closely with the Lead AI Engineer, you will form a leadership tandem to define the roadmap for the team, cultivate an innovative culture, and ensure that projects are strategically aligned with the organization’s goals. Your leadership will be crucial in developing, mentoring, and empowering the team as we expand, helping create an environment where innovative ideas can translate seamlessly from research to industry-ready products.


Key Responsibilities:

  • Define and drive the technical strategy for embedding AI into industrial automation products, with a focus on scalability, quality, and industry compliance.
  • Lead the development of a collaborative, high-performing engineering team, mentoring junior engineers, automation experts, and researchers.
  • Establish and oversee processes and standards for agile and DevOps practices, ensuring project alignment with strategic goals.
  • Collaborate with stakeholders to align project goals, define priorities, and manage timelines, while driving innovative, research-based solutions.
  • Act as a key decision-maker on technical issues, architecture, and system design, ensuring long-term maintainability and scalability of solutions.
  • Ensure adherence to industry standards, certifications, and compliance, and advocate for industry best practices within the team.
  • Stay updated on software engineering trends and AI applications in embedded systems, incorporating the latest advancements into the team’s strategic planning.


Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
  • Extensive experience in software engineering, with a proven track record of leading technical teams, ideally in manufacturing or embedded systems.
  • Strong expertise in Python and C++/Rust, Gitlab toolchains, and system architecture for embedded applications.
  • Experience in DevOps, CI/CD, and agile methodologies, with an emphasis on setting and maintaining high standards across a team.
  • Exceptional communication and collaboration skills in English.
  • Willingness to travel as needed.


Preferred:

  • Background in driving team culture, agile project management, and experience embedding AI in industrial products.
  • Familiarity with sociocratic or consent-based management practices.
  • Knowledge in embedded programming is an advantage.
Read more
Bits In Glass

at Bits In Glass

3 candid answers
Nikita Sinha
Posted by Nikita Sinha
Hyderabad, Pune, Mohali
5 - 8 yrs
Up to ₹30L / yr (Varies)
skill iconJava
skill iconPython
CI/CD
skill iconReact.js
skill iconAngular (2+)

Design, build, and operate end-to-end web and API solutions (front end + back end) with strong automation, observability, and production reliability. You will own features from concept through deployment and steady state, including incident response and continuous improvement.


Key Responsibilities:

Engineering & Delivery

  • Translate business requirements into technical designs, APIs, and data models.
  • Develop back-end services using Java and Python, and front-end components using React / Angular / Vue (where applicable).
  • Build REST / GraphQL APIs, batch jobs, streaming jobs, and system integration adapters.
  • Write efficient SQL/NoSQL queries; optimize schemas, indexes, and data flows (ETL / CDC as needed).

Automation, CI/CD & Operations

  • Automate builds, testing, packaging, and deployments using CI/CD pipelines.
  • Create Linux shell and Python scripts for operational tasks, environment automation, and diagnostics.
  • Manage configuration, feature flags, environment parity, and Infrastructure as Code (where applicable).

Reliability, Security & Quality

  • Embed security best practices: authentication/authorization, input validation, secrets management, TLS.
  • Implement unit, integration, contract, and performance tests with enforced quality gates.
  • Add observability: structured logs, metrics, traces, health checks, dashboards, and alerts.
  • Apply resilience patterns: retries, timeouts, circuit breakers, and graceful degradation.
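The resilience patterns listed above can be sketched in a few lines. The helper below is illustrative only (the function name and delay parameters are hypothetical, and a production system would more likely rely on a library such as resilience4j for Java or tenacity for Python than hand-roll this):

```python
import time
import random

def retry_with_backoff(fn, max_attempts=4, base_delay=0.1, max_delay=2.0):
    """Call fn(), retrying on exception with capped exponential backoff plus jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # exhausted all attempts: propagate the failure
            # double the delay each attempt, capped at max_delay
            delay = min(max_delay, base_delay * (2 ** (attempt - 1)))
            # jitter spreads out retries so callers don't stampede together
            time.sleep(delay + random.uniform(0, delay / 2))
```

Timeouts and circuit breakers layer on top of the same idea: bound each attempt, and stop calling a dependency entirely once its recent failure rate crosses a threshold.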

Production Ownership

  • Participate in on-call rotations, incident triage, RCA, and permanent fixes.
  • Refactor legacy code and reduce technical debt with measurable impact.
  • Maintain technical documentation, runbooks, and architecture decision records (ADRs).

Collaboration & Leadership

  • Mentor peers and contribute to engineering standards and best practices.
  • Work closely with Product, QA, Security, and Ops to balance scope, risk, and timelines.

Qualifications

Must Have

  • Strong experience in Java (core concepts, concurrency, REST frameworks).
  • Strong Python experience (services + scripting).
  • Solid Linux skills with automation using shell/Python.
  • Web services expertise: REST/JSON, API design, versioning, pagination, error handling.
  • Databases: Relational (SQL tuning, transactions) plus exposure to NoSQL / caching (Redis).
  • CI/CD tools: Git, pipelines, artifact management.
  • Testing frameworks: JUnit, PyTest, API testing tools.
  • Observability tools: Prometheus, Grafana, ELK, OpenTelemetry (or equivalents).
  • Strong production support mindset with incident management, SLA/SLO awareness, and RCA experience.
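As a small sketch of the API-design points above (pagination with a predictable response envelope), with purely illustrative field names:

```python
def paginate(items, page=1, per_page=10):
    """Return one page of results plus the metadata a REST API would expose."""
    total = len(items)
    start = (page - 1) * per_page
    return {
        "data": items[start:start + per_page],
        "page": page,
        "per_page": per_page,
        "total": total,
        # clients use this instead of guessing whether another request is worthwhile
        "has_next": start + per_page < total,
    }

resp = paginate(list(range(25)), page=3, per_page=10)
print(resp["data"], resp["has_next"])  # [20, 21, 22, 23, 24] False
```

The same envelope idea applies to errors: a consistent `{"error": {"code": ..., "message": ...}}` shape is easier for consumers to handle than ad-hoc strings.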

Good to Have

  • Messaging & streaming platforms: Kafka, MQ.
  • Infrastructure as Code: Terraform, Ansible.
  • Cloud exposure: AWS / Azure / GCP, including managed data services.
  • Front-end experience with React / Angular / Vue and TypeScript.
  • Deployment strategies: feature flags, canary, blue/green.
  • Knowledge of cost optimization and capacity planning.

Key Performance Indicators (KPIs)

  • Deployment frequency & change failure rate
  • Mean Time to Detect (MTTD) & Mean Time to Recover (MTTR)
  • API latency (p95) and availability vs SLOs
  • Defect escape rate & automated test coverage
  • Technical debt reduction (items resolved per quarter)
  • Incident recurrence trend (continuous reduction)
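As a toy illustration of how one of these KPIs might be computed from incident records (the field names are hypothetical, not from any specific tool):

```python
from datetime import datetime, timedelta

def mean_time_to_recover(incidents):
    """Average (resolved - detected) across incidents, as a timedelta."""
    durations = [i["resolved"] - i["detected"] for i in incidents]
    return sum(durations, timedelta()) / len(durations)

incidents = [
    {"detected": datetime(2024, 1, 1, 10, 0), "resolved": datetime(2024, 1, 1, 10, 30)},
    {"detected": datetime(2024, 1, 2, 9, 0),  "resolved": datetime(2024, 1, 2, 10, 0)},
]
print(mean_time_to_recover(incidents))  # 0:45:00
```

MTTD is computed the same way against the incident's actual start time rather than its detection time.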

Soft Skills

  • End-to-end ownership mindset
  • Data-driven decision making
  • Bias for automation and simplification
  • Proactive risk identification
  • Clear, timely, and effective communication

About the Company – Bits In Glass

  • 20+ years of industry experience
  • Merged with Crochet Technologies in 2021 to form a larger global organization
  • Offices in Pune, Hyderabad, and Chandigarh
  • Top 30 global Pega partner and sponsor of PegaWorld
  • Elite Appian Partner since 2008
  • Operations across US, Canada, UK, and India
  • Dedicated Global Pega Center of Excellence

Employee Benefits

  • Career Growth: Clear advancement paths and learning opportunities
  • Challenging Projects: Global, cutting-edge client work
  • Global Exposure: Collaboration with international teams
  • Flexible Work Arrangements: Work-life balance support
  • Comprehensive Benefits: Competitive compensation, health insurance, paid time off
  • Learning & Upskilling: AI-enabled Pega solutions, data engineering, integrations, cloud migration

Company Culture & Values

  • Collaborative & Inclusive: Teamwork, innovation, and respect for diverse ideas
  • Continuous Learning: Certifications and skill development encouraged
  • Integrity: Ethical and transparent practices
  • Excellence: High standards in delivery
  • Client-Centricity: Tailored solutions with measurable impact


Read more
Avhan Technologies Pvt Ltd
Nikita Sinha
Posted by Nikita Sinha
Mumbai
4 - 8 yrs
Up to ₹8.3L / yr (Varies)
skill iconAmazon Web Services (AWS)
AWS Lambda
API
Amazon S3
Platform as a Service (PaaS)
+3 more

To design, automate, and manage scalable cloud infrastructure that powers real-time AI and communication workloads globally.


Key Responsibilities

  • Implement and manage CI/CD pipelines (GitHub Actions, Jenkins, or GitLab).
  • Manage Kubernetes/EKS clusters
  • Implement infrastructure as code (provisioning via Terraform, CloudFormation, Pulumi etc).
  • Implement observability (Grafana, Loki, Prometheus, ELK/CloudWatch).
  • Enforce security/compliance guardrails (GDPR, DPDP, ISO 27001, PCI, HIPAA).
  • Drive cost-optimization and zero-downtime deployment strategies.
  • Collaborate with developers to containerize and deploy services.

Required Skills & Experience

  • 4–8 years in DevOps or Cloud Infrastructure roles.
  • Proficiency with AWS (EKS, Lambda, API Gateway, S3, IAM).
  • Experience with infrastructure-as-code and CI/CD automation.
  • Familiarity with monitoring, alerting, and incident management.

What Success Looks Like

  • < 10 min build-to-deploy cycle.
  • 99.999% uptime with proactive incident response.
  • Documented and repeatable DevOps workflows.
Read more
AbleCredit

at AbleCredit

2 candid answers
Utkarsh Apoorva
Posted by Utkarsh Apoorva
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹35L / yr
CI/CD
DevOps
Security Information and Event Management (SIEM)
ISO/IEC 27001:2005

Role: Senior Security Engineer

Salary: INR 20-35L per annum

Performance Bonus: Up to 10% of the base salary

Location: Hulimavu, Bangalore, India

Experience: 5-8 years



About AbleCredit:

AbleCredit has built a foundational AI platform to help BFSI enterprises reduce OPEX by up to 70% by powering workflows for onboarding, claims, credit, and collections. Our GenAI model achieves over 95% accuracy in understanding Indian dialects and excels in financial analysis.


The company was founded in June 2023 by Utkarsh Apoorva (IIT Delhi, built Reshamandi, Guitarstreet, Edulabs); Harshad Saykhedkar (IITB, ex-AI Lead at Slack); and Ashwini Prabhu (IIML, co-founder of Mythiksha, ex-Product Head at Reshamandi, HandyTrain).




What Work You’ll Do

  • Be the guardian of trust — every system you secure will protect millions of data interactions.
  • Operate like a builder, not a gatekeeper — automate guardrails that make security invisible but ever-present.
  • You’ll define what ‘secure by default’ means for a next-generation AI SaaS platform.
  • Own the security posture of our cloud-native SaaS platform — design, implement, and enforce security controls across AWS, Linux, and Kubernetes (EKS) environments.
  • Drive security compliance initiatives such as SOC 2 Type II, ISO 27001, and RBI-aligned frameworks — build systems that enforce, not just document, compliance.
  • Architect defense-in-depth systems across EC2, S3, IAM, and VPC layers, ensuring secure configuration, least-privilege access, and continuous compliance.
  • Build and automate security pipelines — integrate AWS Security Hub, GuardDuty, Inspector, WAF, and CloudTrail into continuous detection and response systems.
  • Lead vulnerability management and incident readiness — identify, prioritize, and remediate vulnerabilities across the stack while ensuring traceable audit logs.
  • Implement and maintain zero-trust and least-privilege access controls using AWS IAM, SSO, and modern secrets management tools like AWS SSM or Vault.
  • Serve as a trusted advisor — train developers, review architecture, and proactively identify risks before they surface.
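To make the least-privilege idea concrete, a minimal IAM policy document of the kind described might look like the following (the bucket name and prefix are hypothetical):

```python
import json

# Read-only access to a single S3 prefix: deny-by-default, allow only what is needed.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-app-data/reports/*",
        }
    ],
}
print(json.dumps(policy, indent=2))
```

The point of 'enforce, not just document' is that policies like this are generated and applied by IaC pipelines, so drift from the approved baseline is caught automatically.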




The Skills You Have..

  • Deep hands-on experience with AWS security architecture — IAM, VPCs, EKS, EC2, S3, CloudTrail, Security Hub, WAF, GuardDuty, and Inspector.
  • Strong background in Linux hardening, container security, and DevSecOps automation.
  • Proficiency with infrastructure-as-code (Terraform, CloudFormation) and integrating security controls into provisioning.
  • Knowledge of zero-trust frameworks, least-privilege IAM, and secrets management (Vault, SSM, KMS).
  • Experience with SIEM and monitoring tools — configuring alerts, analyzing logs, and responding to incidents.
  • Familiarity with compliance automation and continuous assurance — especially SOC 2, ISO 27001, or RBI frameworks.
  • Understanding of secure software supply chains — dependency scanning, artifact signing, and policy enforcement in CI/CD.
  • Ability to perform risk assessment, threat modeling, and architecture review collaboratively with engineering teams.



What You Should Have Done in the Past

  • Secured cloud-native SaaS systems built entirely on AWS (EC2, EKS, S3, IAM, VPC).
  • Led or contributed to SOC 2 Type II or ISO 27001 certification initiatives, ideally in a regulated industry such as FinTech.
  • Designed secure CI/CD pipelines with integrated code scanning, image validation, and secrets rotation.
  • (Bonus) Built internal security automation frameworks or tooling for continuous monitoring and compliance checks.





Read more
TrumetricAI
Yashika Tiwari
Posted by Yashika Tiwari
Bengaluru (Bangalore)
3 - 7 yrs
₹12L - ₹20L / yr
skill iconAmazon Web Services (AWS)
CI/CD
skill iconGit
skill iconDocker
skill iconKubernetes

Key Responsibilities:

  • Design, implement, and maintain scalable, secure, and cost-effective infrastructure on AWS and Azure
  • Set up and manage CI/CD pipelines for smooth code integration and delivery using tools like GitHub Actions, Bitbucket Runners, AWS CodeBuild/CodeDeploy, Azure DevOps, etc.
  • Containerize applications using Docker and manage orchestration with Kubernetes, ECS, Fargate, AWS EKS, Azure AKS.
  • Manage and monitor production deployments to ensure high availability and performance
  • Implement and manage CDN solutions using AWS CloudFront and Azure Front Door for optimal content delivery and latency reduction
  • Define and apply caching strategies at application, CDN, and reverse proxy layers for performance and scalability
  • Set up and manage reverse proxies and Cloudflare WAF to ensure application security and performance
  • Implement infrastructure as code (IaC) using Terraform, CloudFormation, or ARM templates
  • Administer and optimize databases (RDS, PostgreSQL, MySQL, etc.) including backups, scaling, and monitoring
  • Configure and maintain VPCs, subnets, routing, VPNs, and security groups for secure and isolated network setups
  • Implement monitoring, logging, and alerting using tools like CloudWatch, Grafana, ELK, or Azure Monitor
  • Collaborate with development and QA teams to align infrastructure with application needs
  • Troubleshoot infrastructure and deployment issues efficiently and proactively
  • Ensure cloud cost optimization and usage tracking
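The VPC and subnet work above is largely CIDR arithmetic; Python's standard ipaddress module can sketch a layout (the address ranges here are illustrative, not a recommendation):

```python
import ipaddress

# Carve a hypothetical /16 VPC into four /18 subnets (e.g. one per availability zone).
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=18))
for s in subnets:
    print(s, f"({s.num_addresses} addresses)")
# 10.0.0.0/18 (16384 addresses), 10.0.64.0/18, 10.0.128.0/18, 10.0.192.0/18
```

Planning subnets this way up front avoids overlapping ranges later, which matters once VPNs and VPC peering enter the picture.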


Required Skills & Experience:

  • 3-4 years of hands-on experience in a DevOps role
  • Strong expertise with both AWS and Azure cloud platforms
  • Proficient in Git, branching strategies, and pull request workflows
  • Deep understanding of CI/CD concepts and experience with pipeline tools
  • Proficiency in Docker, container orchestration (Kubernetes, ECS/EKS/AKS)
  • Good knowledge of relational databases and experience in managing DB backups, performance, and migrations
  • Experience with networking concepts including VPC, subnets, firewalls, VPNs, etc.
  • Experience with Infrastructure as Code tools (Terraform preferred)
  • Strong working knowledge of CDN technologies: AWS CloudFront and Azure Front Door
  • Understanding of caching strategies: edge caching, browser caching, API caching, and reverse proxy-level caching
  • Experience with Cloudflare WAF, reverse proxy setups, SSL termination, and rate-limiting
  • Familiarity with Linux system administration, scripting (Bash, Python), and automation tools
  • Working knowledge of monitoring and logging tools
  • Strong troubleshooting and problem-solving skills


Good to Have (Bonus Points):

  • Experience with serverless architecture (e.g., AWS Lambda, Azure Functions)
  • Exposure to cost monitoring tools like CloudHealth, Azure Cost Management
  • Experience with compliance/security best practices (SOC2, ISO, etc.)
  • Familiarity with Service Mesh (Istio, Linkerd) and API gateways
  • Knowledge of Secrets Management tools (e.g., HashiCorp Vault, AWS Secrets Manager)


Read more
Aryush Infotech India Pvt Ltd
Nitin Gupta
Posted by Nitin Gupta
Bengaluru (Bangalore), Bhopal
2 - 3 yrs
₹3L - ₹4L / yr
Fintech
Test Automation (QA)
Manual testing
skill iconPostman
JIRA
+5 more

Job Title: QA Tester – FinTech (Manual + Automation Testing)

Location: Bangalore, India

Job Type: Full-Time

Experience Required: 3 Years

Industry: FinTech / Financial Services

Function: Quality Assurance / Software Testing

 

About the Role:

We are looking for a skilled QA Tester with 3 years of experience in both manual and automation testing, ideally in the FinTech domain. The candidate will work closely with development and product teams to ensure that our financial applications meet the highest standards of quality, performance, and security.

 

Key Responsibilities:

  • Analyze business and functional requirements for financial products and translate them into test scenarios.
  • Design, write, and execute manual test cases for new features, enhancements, and bug fixes.
  • Develop and maintain automated test scripts using tools such as Selenium, TestNG, or similar frameworks.
  • Conduct API testing using Postman, Rest Assured, or similar tools.
  • Perform functional, regression, integration, and system testing across web and mobile platforms.
  • Work in an Agile/Scrum environment and actively participate in sprint planning, stand-ups, and retrospectives.
  • Log and track defects using JIRA or a similar defect management tool.
  • Collaborate with developers, BAs, and DevOps teams to improve quality across the SDLC.
  • Ensure test coverage for critical fintech workflows like transactions, KYC, lending, payments, and compliance.
  • Assist in setting up CI/CD pipelines for automated test execution using tools like Jenkins, GitLab CI, etc.
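As a toy example of the kind of automated check such a suite would run on every build (the business rule below is invented purely for illustration, not taken from any real product):

```python
def validate_transaction(amount, balance, daily_limit=50_000):
    """Toy business rule: amount must be positive, within balance and daily limit."""
    if amount <= 0:
        return "INVALID_AMOUNT"
    if amount > balance:
        return "INSUFFICIENT_FUNDS"
    if amount > daily_limit:
        return "LIMIT_EXCEEDED"
    return "OK"

# The checks a regression suite would assert on every pipeline run:
assert validate_transaction(100, 1000) == "OK"
assert validate_transaction(-5, 1000) == "INVALID_AMOUNT"
assert validate_transaction(2000, 1000) == "INSUFFICIENT_FUNDS"
```

In practice these assertions would live in a pytest or TestNG suite triggered by Jenkins or GitLab CI, with the same rules also exercised end-to-end through the API layer.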

 

Required Skills and Experience:

  • 3+ years of hands-on experience in manual and automation testing.
  • Solid understanding of QA methodologies, STLC, and SDLC.
  • Experience in testing FinTech applications such as digital wallets, online banking, investment platforms, etc.
  • Strong experience with Selenium WebDriver, TestNG, Postman, and JIRA.
  • Knowledge of API testing, including RESTful services.
  • Familiarity with SQL to validate data in databases.
  • Understanding of CI/CD processes and basic scripting for automation integration.
  • Good problem-solving skills and attention to detail.
  • Excellent communication and documentation skills.

 

Preferred Qualifications:

  • Exposure to financial compliance and regulatory testing (e.g., PCI DSS, AML/KYC).
  • Experience with mobile app testing (iOS/Android).
  • Working knowledge of test management tools like TestRail, Zephyr, or Xray.
  • Performance testing experience (e.g., JMeter, LoadRunner) is a plus.
  • Basic knowledge of version control systems (e.g., Git).


Read more
Global digital transformation solutions provider

Global digital transformation solutions provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 9 yrs
₹15L - ₹25L / yr
Data engineering
Apache Kafka
skill iconPython
skill iconAmazon Web Services (AWS)
AWS Lambda
+11 more

Job Details

- Job Title: Lead I - Data Engineering 

- Industry: Global digital transformation solutions provider

- Domain - Information technology (IT)

- Experience Required: 6-9 years

- Employment Type: Full Time

- Job Location: Pune

- CTC Range: Best in Industry


Job Description

Job Title: Senior Data Engineer (Kafka & AWS)

Responsibilities:

  • Develop and maintain real-time data pipelines using Apache Kafka (MSK or Confluent) and AWS services.
  • Configure and manage Kafka connectors, ensuring seamless data flow and integration across systems.
  • Demonstrate strong expertise in the Kafka ecosystem, including producers, consumers, brokers, topics, and schema registry.
  • Design and implement scalable ETL/ELT workflows to efficiently process large volumes of data.
  • Optimize data lake and data warehouse solutions using AWS services such as Lambda, S3, and Glue.
  • Implement robust monitoring, testing, and observability practices to ensure reliability and performance of data platforms.
  • Uphold data security, governance, and compliance standards across all data operations.
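One detail behind "producers, consumers, brokers, topics": keyed messages are routed to a partition by hashing the key, which is what preserves per-key ordering. A simplified stand-in (Kafka's default producer actually uses murmur2; crc32 here is only for illustration):

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Simplified keyed partitioner: the same key always maps to the same partition.

    (Kafka's default producer uses murmur2; crc32 is a stand-in for the sketch.)
    """
    return zlib.crc32(key) % num_partitions

# All events for one account land on one partition, so consumers see them in order.
p1 = partition_for(b"account-42", 6)
p2 = partition_for(b"account-42", 6)
assert p1 == p2
```

This is also why changing a topic's partition count redistributes keys, which matters when a pipeline depends on per-key ordering.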

 

Requirements:

  • Minimum of 5 years of experience in Data Engineering or related roles.
  • Proven expertise with Apache Kafka and the AWS data stack (MSK, Glue, Lambda, S3, etc.).
  • Proficient in coding with Python, SQL, and Java — with Java strongly preferred.
  • Experience with Infrastructure-as-Code (IaC) tools (e.g., CloudFormation) and CI/CD pipelines.
  • Excellent problem-solving, communication, and collaboration skills.
  • Flexibility to write production-quality code in both Python and Java as required.

 

Skills: AWS, Kafka, Python



Notice period: 0 to 15 days only

Read more
Global digital transformation solutions provider.

Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Coimbatore, Hosur, Hyderabad
12 - 15 yrs
₹20L - ₹35L / yr
DevOps
Automation
skill iconGitHub
Agile management
Agile/Scrum
+3 more

Job Details

- Job Title: DevOps and SRE -Technical Project Manager

- Industry: Global digital transformation solutions provider

- Domain - Information technology (IT)

- Experience Required: 12-15 years

- Employment Type: Full Time

- Job Location: Bangalore, Chennai, Coimbatore, Hosur & Hyderabad

- CTC Range: Best in Industry


Job Description

Company’s DevOps Practice is seeking a highly skilled DevOps and SRE Technical Project Manager to lead large-scale transformation programs for enterprise customers. The ideal candidate will bring deep expertise in DevOps and Site Reliability Engineering (SRE), combined with strong program management, stakeholder leadership, and the ability to drive end-to-end execution of complex initiatives.


Key Responsibilities

  • Lead the planning, execution, and successful delivery of DevOps and SRE transformation programs for enterprise clients, including full oversight of project budgets, financials, and margins.
  • Partner with senior stakeholders to define program objectives, roadmaps, milestones, and success metrics aligned with business and technology goals.
  • Develop and implement actionable strategies to optimize development, deployment, release management, observability, and operational workflows across client environments.
  • Provide technical leadership and strategic guidance to cross-functional engineering teams, ensuring alignment with industry standards, best practices, and company delivery methodologies.
  • Identify risks, dependencies, and blockers across programs, and proactively implement mitigation and contingency plans.
  • Monitor program performance, KPIs, and financial health; drive corrective actions and margin optimization where necessary.
  • Facilitate strong communication, collaboration, and transparency across engineering, product, architecture, and leadership teams.
  • Deliver periodic program updates to internal and client stakeholders, highlighting progress, risks, challenges, and improvement opportunities.
  • Champion a culture of continuous improvement, operational excellence, and innovation by encouraging adoption of emerging DevOps, SRE, automation, and cloud-native practices.
  • Support GitHub migration initiatives, including planning, execution, troubleshooting, and governance setup for repository and workflow migrations.

 

Requirements

  • Bachelor’s degree in Computer Science, Engineering, Business Administration, or a related technical discipline.
  • 15+ years of IT experience, including at least 5 years in a managerial or program leadership role.
  • Proven experience leading large-scale DevOps and SRE transformation programs with measurable business impact.
  • Strong program management expertise, including planning, execution oversight, risk management, and financial governance.
  • Solid understanding of Agile methodologies (Scrum, Kanban) and modern software development practices.
  • Deep hands-on knowledge of DevOps principles, CI/CD pipelines, automation frameworks, Infrastructure as Code (IaC), and cloud-native tooling.
  • Familiarity with SRE practices such as service reliability, observability, SLIs/SLOs, incident management, and performance optimization.
  • Experience with GitHub migration projects—including repository analysis, migration planning, tooling adoption, and workflow modernization.
  • Excellent communication, stakeholder management, and interpersonal skills with the ability to influence and lead cross-functional teams.
  • Strong analytical, organizational, and problem-solving skills with a results-oriented mindset.
  • Preferred certifications: PMP, PgMP, ITIL, Agile/Scrum Master, or relevant technical certifications.

 

Skills: DevOps Tools, Cloud Infrastructure, Team Management


Must-Haves

DevOps principles (5+ years), SRE practices (5+ years), GitHub migration (3+ years), CI/CD pipelines (5+ years), Agile methodologies (5+ years)

Notice period: 0 to 15 days only

Read more
AI-First Company

AI-First Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 17 yrs
₹30L - ₹45L / yr
Data engineering
Data architecture
SQL
Data modeling
GCS
+47 more

ROLES AND RESPONSIBILITIES:

You will be responsible for architecting, implementing, and optimizing Dremio-based data Lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.


  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.


IDEAL CANDIDATE:

  • Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.


PREFERRED:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
  • Exposure to Snowflake, Databricks, or BigQuery environments.
  • Experience in high-tech, manufacturing, or enterprise data modernization programs.
Read more
Capital Squared
Remote only
5 - 10 yrs
₹25L - ₹55L / yr
MLOps
DevOps
Google Cloud Platform (GCP)
CI/CD
skill iconPostgreSQL
+4 more

Role: Full-Time, Long-Term

Required: Docker, GCP, CI/CD

Preferred: Experience with ML pipelines


OVERVIEW

We are seeking a DevOps engineer to join as a core member of our technical team. This is a long-term position for someone who wants to own infrastructure and deployment for a production machine learning system. You will ensure our prediction pipeline runs reliably, deploys smoothly, and scales as needed.


The ideal candidate thinks about failure modes obsessively, automates everything possible, and builds systems that run without constant attention.


CORE TECHNICAL REQUIREMENTS

Docker (Required): Deep experience with containerization. Efficient Dockerfiles, layer caching, multi-stage builds, debugging container issues. Experience with Docker Compose for local development.


Google Cloud Platform (Required): Strong GCP experience: Cloud Run for serverless containers, Compute Engine for VMs, Artifact Registry for images, Cloud Storage, IAM. You can navigate the console but prefer scripting everything.


CI/CD (Required): Build and maintain deployment pipelines. GitHub Actions required. You automate testing, building, pushing, and deploying. You understand the difference between continuous integration and continuous deployment.


Linux Administration (Required): Comfortable on the command line. SSH, diagnose problems, manage services, read logs, fix things. Bash scripting is second nature.


PostgreSQL (Required): Database administration basics—backups, monitoring, connection management, basic performance tuning. Not a DBA, but comfortable keeping a production database healthy.


Infrastructure as Code (Preferred): Terraform, Pulumi, or similar. Infrastructure should be versioned, reviewed, and reproducible—not clicked together in a console.


WHAT YOU WILL OWN

Deployment Pipeline: Maintaining and improving deployment scripts and CI/CD workflows. Code moves from commit to production reliably with appropriate testing gates.


Cloud Run Services: Managing deployments for model fitting, data cleansing, and signal discovery services. Monitor health, optimize cold starts, handle scaling.


VM Infrastructure: PostgreSQL and Streamlit on GCP VMs. Instance management, updates, backups, security.


Container Registry: Managing images in GitHub Container Registry and Google Artifact Registry. Cleanup policies, versioning, access control.


Monitoring and Alerting: Building observability. Logging, metrics, health checks, alerting. Know when things break before users tell us.


Environment Management: Configuration across local and production. Secrets management. Environment parity where it matters.


WHAT SUCCESS LOOKS LIKE

Deployments are boring—no drama, no surprises. Systems recover automatically from transient failures. Engineers deploy with confidence. Infrastructure changes are versioned and reproducible. Costs are reasonable and resources scale appropriately.


ENGINEERING STANDARDS

Automation First: If you do something twice, automate it. Manual processes are bugs waiting to happen.


Documentation: Runbooks, architecture diagrams, deployment guides. The next person can understand and operate the system.


Security Mindset: Secrets never in code. Least-privilege access. You think about attack surfaces.


Reliability Focus: Design for failure. Backups are tested. Recovery procedures exist and work.


CURRENT ENVIRONMENT

GCP (Cloud Run, Compute Engine, Artifact Registry, Cloud Storage), Docker, Docker Compose, GitHub Actions, PostgreSQL 16, Bash deployment scripts with Python wrapper.


WHAT WE ARE LOOKING FOR

Ownership Mentality: You see a problem, you fix it. You do not wait for assignment.


Calm Under Pressure: When production breaks, you diagnose methodically.


Communication: You explain infrastructure decisions to non-infrastructure people. You document what you build.


Long-Term Thinking: You build systems maintained for years, not quick fixes creating tech debt.


EDUCATION

University degree in Computer Science, Engineering, or related field preferred. Equivalent demonstrated expertise also considered.


TO APPLY

Include: (1) CV/resume, (2) Brief description of infrastructure you built or maintained, (3) Links to relevant work if available, (4) Availability and timezone.

Read more
Auxo AI
kusuma Gullamajji
Posted by kusuma Gullamajji
Hyderabad, Bengaluru (Bangalore), Mumbai, Gurugram
4 - 7 yrs
₹15L - ₹35L / yr
HTML/CSS
Javascript
Python
NodeJS (Node.js)
CI/CD

Responsibilities :

  • Design and develop user-friendly web interfaces using HTML, CSS, and JavaScript.
  • Utilize modern frontend frameworks and libraries such as React, Angular, or Vue.js to build dynamic and responsive web applications.
  • Develop and maintain server-side logic using programming languages such as Java, Python, Ruby, Node.js, or PHP.
  • Build and manage APIs for seamless communication between the frontend and backend systems.
  • Integrate third-party services and APIs to enhance application functionality.
  • Implement CI/CD pipelines to automate testing, integration, and deployment processes.
  • Monitor and optimize the performance of web applications to ensure a high-quality user experience.
  • Stay up-to-date with emerging technologies and industry trends to continuously improve development processes and application performance.

Qualifications :

  • Bachelor's/Master's in Computer Science or related subjects, or hands-on experience demonstrating a working understanding of software applications.
  • Knowledge of building applications that can be deployed in a cloud environment or are cloud native applications.
  • Strong expertise in building backend applications using Java/C#/Python with demonstrable experience in using frameworks such as Spring/Vertx/.Net/FastAPI.
  • Deep understanding of enterprise design patterns, API development and integration, and Test-Driven Development (TDD).
  • Working knowledge in building applications that leverage databases such as PostgreSQL, MySQL, MongoDB, Neo4J or storage technologies such as AWS S3, Azure Blob Storage.
  • Hands-on experience in building enterprise applications adhering to their needs of security and reliability.
  • Hands-on experience building applications using one of the major cloud providers (AWS, Azure, GCP).
  • Working knowledge of CI/CD tools for application integration and deployment.
  • Working knowledge of using reliability tools to monitor the performance of the application.


Read more
AsperAI

at AsperAI

4 candid answers
Bisman Gill
Posted by Bisman Gill
BLR
3 - 6 yrs
Upto ₹33L / yr (Varies)
CI/CD
Kubernetes
Docker
kubeflow
TensorFlow
+7 more

About the Role

We are seeking a highly skilled and experienced AI Ops Engineer to join our team. In this role, you will be responsible for ensuring the reliability, scalability, and efficiency of our AI/ML systems in production. You will work at the intersection of software engineering, machine learning, and DevOps— helping to design, deploy, and manage AI/ML models and pipelines that power mission-critical business applications.

The ideal candidate has hands-on experience in AI/ML operations and orchestrating complex data pipelines, a strong understanding of cloud-native technologies, and a passion for building robust, automated, and scalable systems.


Key Responsibilities

  • AI/ML Systems Operations: Develop and manage systems to run and monitor production AI/ML workloads, ensuring performance, availability, cost-efficiency and convenience.
  • Deployment & Automation: Build and maintain ETL, ML and Agentic pipelines, ensuring reproducibility and smooth deployments across environments.
  • Monitoring & Incident Response: Design observability frameworks for ML systems (alerts and notifications, latency, cost, etc.) and lead incident triage, root cause analysis, and remediation.
  • Collaboration: Partner with data scientists, ML engineers, and software engineers to operationalize models at scale.
  • Optimization: Continuously improve infrastructure, workflows, and automation to reduce latency, increase throughput, and minimize costs.
  • Governance & Compliance: Implement MLOps best practices, including versioning, auditing, security, and compliance for data and models.
  • Leadership: Mentor junior engineers and contribute to the development of AI Ops standards and playbooks.
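As one concrete (and deliberately simplified) example of the monitoring and drift-detection work described above, drift in a model input can be flagged with a plain z-test on batch means; the threshold and sample data here are illustrative:

```python
import statistics

def drift_alert(baseline, live, z_threshold=3.0):
    """Flag drift when the live batch mean deviates from the baseline mean
    by more than z_threshold standard errors (a simple z-test on means)."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)          # sample std of training data
    se = sigma / len(live) ** 0.5               # standard error of live mean
    z = abs(statistics.mean(live) - mu) / se
    return z > z_threshold

# Baseline feature values from training, centred on 1.0
baseline = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05, 1.0, 0.98, 1.02, 1.0]
assert drift_alert(baseline, [1.0, 0.99, 1.01, 1.0]) is False  # stable batch
assert drift_alert(baseline, [2.0, 2.1, 1.9, 2.05]) is True    # shifted batch
```

Production systems would typically use distribution-level tests (PSI, KS) per feature, but the alerting shape is the same.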


Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
  • 4+ years of experience in AI/MLOps, DevOps, SRE, or Data Engineering, with at least 2+ years in AI/ML-focused operations.
  • Strong expertise with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker).
  • Hands-on experience with ML pipelines and frameworks (MLflow, Kubeflow, Airflow, SageMaker, Vertex AI, etc.).
  • Proficiency in Python and/or other scripting languages for automation.
  • Familiarity with monitoring/observability tools (Prometheus, Grafana, Datadog, ELK, etc.).
  • Deep understanding of CI/CD, GitOps, and Infrastructure as Code (Terraform, Helm, etc.).
  • Knowledge of data governance, model drift detection, and compliance in AI systems.
  • Excellent problem-solving, communication, and collaboration skills.

Nice-to-Have

  • Experience in large-scale distributed systems and real-time data streaming (Kafka, Flink, Spark).
  • Familiarity with data science concepts, and frameworks such as scikit-learn, Keras, PyTorch, Tensorflow, etc.
  • Full Stack Development knowledge to collaborate effectively across end-to-end solution delivery
  • Contributions to open-source MLOps/AI Ops tools or platforms.
  • Exposure to Responsible AI practices, model fairness, and explainability frameworks

Why Join Us

  • Opportunity to shape and scale AI/ML operations in a fast-growing, innovation-driven environment.
  • Work alongside leading data scientists and engineers on cutting-edge AI solutions.
  • Competitive compensation, benefits, and career growth opportunities.
Read more
Codemonk

at Codemonk

4 candid answers
2 recruiters
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
1yr+
Upto ₹10L / yr (Varies)
DevOps
Amazon Web Services (AWS)
CI/CD
Docker
Kubernetes
+3 more

Role Overview

We are seeking a DevOps Engineer with 2 years of experience to join our innovative team. The ideal candidate will bridge the gap between development and operations, implementing and maintaining our cloud infrastructure while ensuring secure deployment pipelines and robust security practices for our client projects.


Responsibilities:

  • Design, implement, and maintain CI/CD pipelines.
  • Containerize applications using Docker and orchestrate deployments
  • Manage and optimize cloud infrastructure on AWS and Azure platforms
  • Monitor system performance and implement automation for operational tasks to ensure optimal performance, security, and scalability.
  • Troubleshoot and resolve infrastructure and deployment issues
  • Create and maintain documentation for processes and configurations
  • Collaborate with cross-functional teams to gather requirements, prioritise tasks, and contribute to project completion.
  • Stay informed about emerging technologies and best practices within the fields of DevOps and cloud computing.


Requirements:

  • 2+ years of hands-on experience with AWS cloud services
  • Strong proficiency in CI/CD pipeline configuration
  • Expertise in Docker containerisation and container management
  • Proficiency in shell scripting (Bash/PowerShell)
  • Working knowledge of monitoring and logging tools
  • Knowledge of network security and firewall configuration
  • Strong communication and collaboration skills, with the ability to work effectively within a team environment
  • Understanding of networking concepts and protocols in AWS and/or Azure
Read more
Deqode

at Deqode

1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Gurugram, Delhi, Noida, Ghaziabad, Faridabad
6 - 10 yrs
₹8L - ₹25L / yr
React.js
TypeScript
CI/CD
Redux/Flux

Hiring: Reactjs Developer at Deqode

⭐ Experience: 6+ Years

📍 Location: Gurgaon

⭐ Work Mode:- Hybrid

⏱️ Notice Period: Immediate Joiners

(Only immediate joiners & candidates serving notice period)


We are hiring Senior Frontend Engineers with strong experience in React.js and TypeScript to build scalable, high-performance web applications using micro frontend architecture.


✨ Key Requirements:

✅ 6+ years of frontend development experience

✅ Strong expertise in React.js, TypeScript, Hooks, and state management (Redux/Context)

✅ Experience with Micro Frontends

✅ API integration (REST/GraphQL)

✅ Unit & integration testing (Jest, React Testing Library)

✅ CI/CD pipelines (Jenkins, GitLab CI, Azure DevOps)

✅ Basic knowledge of Kubernetes & Docker

✅ Strong Git, performance optimization, and problem-solving skills


Read more
Trential Technologies

at Trential Technologies

1 candid answer
Garima Jangid
Posted by Garima Jangid
Gurugram
3 - 5 yrs
₹20L - ₹35L / yr
Javascript
NodeJS (Node.js)
Amazon Web Services (AWS)
NoSQL Databases
Google Cloud Platform (GCP)
+7 more

What you'll be doing:

As a Software Developer at Trential, you will be the bridge between technical strategy and hands-on execution. You will be working with our dedicated engineering team designing, building, and deploying our core platforms and APIs. You will ensure our solutions are scalable, secure, interoperable, and aligned with open standards and our core vision. Build and maintain back-end interfaces using modern frameworks.

  • Design & Implement: Lead the design, implementation and management of Trential’s products.
  • Code Quality & Best Practices: Enforce high standards for code quality, security, and performance through rigorous code reviews, automated testing, and continuous delivery pipelines.
  • Standards Adherence: Ensure all solutions comply with relevant open standards like W3C Verifiable Credentials (VCs), Decentralized Identifiers (DIDs) & Privacy Laws, maintaining global interoperability.
  • Continuous Improvement: Lead the charge to continuously evaluate and improve the products & processes. Instill a culture of metrics-driven process improvement to boost team efficiency and product quality.
  • Cross-Functional Collaboration: Work closely with the Co-Founders & Product Team to translate business requirements and market needs into clear, actionable technical specifications and stories. Represent Trential in interactions with external stakeholders for integrations.


What we're looking for:

  • 3+ years of experience in backend development.
  • Deep proficiency in JavaScript and Node.js, with experience in building and operating distributed, fault-tolerant systems.
  • Hands-on experience with cloud platforms (AWS & GCP) and modern DevOps practices (e.g., CI/CD, Infrastructure as Code, Docker).
  • Strong knowledge of SQL/NoSQL databases and data modeling for high-throughput, secure applications.

Preferred Qualifications (Nice to Have)

  • Knowledge of decentralized identity principles, Verifiable Credentials (W3C VCs), DIDs, and relevant protocols (e.g., OpenID4VC, DIDComm)
  • Familiarity with data privacy and security standards (GDPR, SOC 2, ISO 27001) and designing systems that comply with them.
  • Experience integrating AI/ML models into verification or data extraction workflows.
Read more
iMerit
Bengaluru (Bangalore)
6 - 9 yrs
₹10L - ₹15L / yr
DevOps
Terraform
Apache Kafka
Python
Go Programming (Golang)
+4 more

Exp: 7–10 Years

CTC: up to 35 LPA


Skills:

  • 6–10 years DevOps / SRE / Cloud Infrastructure experience
  • Expert-level Kubernetes (networking, security, scaling, controllers)
  • Terraform Infrastructure-as-Code mastery
  • Hands-on Kafka production experience
  • AWS cloud architecture and networking expertise
  • Strong scripting in Python, Go, or Bash
  • GitOps and CI/CD tooling experience


Key Responsibilities:

  • Design highly available, secure cloud infrastructure supporting distributed microservices at scale
  • Lead multi-cluster Kubernetes strategy optimized for GPU and multi-tenant workloads
  • Implement Infrastructure-as-Code using Terraform across full infrastructure lifecycle
  • Optimize Kafka-based data pipelines for throughput, fault tolerance, and low latency
  • Deliver zero-downtime CI/CD pipelines using GitOps-driven deployment models
  • Establish SRE practices with SLOs, p95 and p99 monitoring, and FinOps discipline
  • Ensure production-ready disaster recovery and business continuity testing
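The p95/p99 monitoring called out above reduces to percentile computation over latency samples; a minimal nearest-rank sketch with illustrative data:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest sample value such that at least
    p% of the data is less than or equal to it."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p * len(ordered) / 100))  # 1-based nearest rank
    return ordered[rank - 1]

# 100 request latencies of 1..100 ms: nearest-rank p95 is 95 ms, p99 is 99 ms
latencies = list(range(1, 101))
assert percentile(latencies, 95) == 95
assert percentile(latencies, 99) == 99
```

In practice Prometheus histograms or a metrics library do this over sliding windows, but the definition being alerted on is the same.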



If interested, kindly share your updated resume at 82008 31681

Read more
Financial Services Industry

Financial Services Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
4 - 5 yrs
₹10L - ₹20L / yr
Python
CI/CD
SQL
Kubernetes
Stakeholder management
+14 more

Required Skills: CI/CD Pipeline, Kubernetes, SQL Database, Excellent Communication & Stakeholder Management, Python

 

Criteria:

Looking for candidates with a notice period of 15 to a maximum of 30 days.

Looking for candidates from the Hyderabad location only.

Looking for candidates from EPAM only.

1. 4+ years of software development experience

2. Strong experience with Kubernetes, Docker, and CI/CD pipelines in cloud-native environments.

3. Hands-on with NATS for event-driven architecture and streaming.

4. Skilled in microservices, RESTful APIs, and containerized app performance optimization.

5. Strong in problem-solving, team collaboration, clean code practices, and continuous learning.

6.  Proficient in Python (Flask) for building scalable applications and APIs.

7. Focus: Java, Python, Kubernetes, Cloud-native development

8. SQL database 

 

Description

Position Overview

We are seeking a skilled Developer to join our engineering team. The ideal candidate will have strong expertise in Java and Python ecosystems, with hands-on experience in modern web technologies, messaging systems, and cloud-native development using Kubernetes.


Key Responsibilities

  • Design, develop, and maintain scalable applications using Java and Spring Boot framework
  • Build robust web services and APIs using Python and Flask framework
  • Implement event-driven architectures using NATS messaging server
  • Deploy, manage, and optimize applications in Kubernetes environments
  • Develop microservices following best practices and design patterns
  • Collaborate with cross-functional teams to deliver high-quality software solutions
  • Write clean, maintainable code with comprehensive documentation
  • Participate in code reviews and contribute to technical architecture decisions
  • Troubleshoot and optimize application performance in containerized environments
  • Implement CI/CD pipelines and follow DevOps best practices

Required Qualifications

  • Bachelor's degree in Computer Science, Information Technology, or related field
  • 4+ years of experience in software development
  • Strong proficiency in Java with deep understanding of web technology stack
  • Hands-on experience developing applications with Spring Boot framework
  • Solid understanding of Python programming language with practical Flask framework experience
  • Working knowledge of NATS server for messaging and streaming data
  • Experience deploying and managing applications in Kubernetes
  • Understanding of microservices architecture and RESTful API design
  • Familiarity with containerization technologies (Docker)
  • Experience with version control systems (Git)


Skills & Competencies

  • Skills: Java (Spring Boot, Spring Cloud, Spring Security)
  • Python (Flask, SQL Alchemy, REST APIs)
  • NATS messaging patterns (pub/sub, request/reply, queue groups)
  • Kubernetes (deployments, services, ingress, ConfigMaps, Secrets)
  • Web technologies (HTTP, REST, WebSocket, gRPC)
  • Container orchestration and management
  • Soft Skills: Problem-solving and analytical thinking
  • Strong communication and collaboration
  • Self-motivated with ability to work independently
  • Attention to detail and code quality
  • Continuous learning mindset
  • Team player with mentoring capabilities


Read more
Tradelab Technologies
Aakanksha Yadav
Posted by Aakanksha Yadav
Mumbai
8 - 10 yrs
₹12L - ₹20L / yr
CI/CD
Amazon Web Services (AWS)
Jenkins
GitHub
ArgoCD
+1 more

Senior DevOps Engineer (8–10 years)

Location: Mumbai


Role Summary

As a Senior DevOps Engineer, you will own end-to-end platform reliability and delivery automation for mission-critical lending systems. You’ll architect cloud infrastructure, standardize CI/CD, enforce DevSecOps controls, and drive observability at scale—ensuring high availability, performance, and compliance consistent with BFSI standards.


Key Responsibilities


Platform & Cloud Infrastructure

  • Design, implement, and scale multi-account, multi-VPC cloud architectures on AWS and/or Azure (compute, networking, storage, IAM, RDS, EKS/AKS, Load Balancers, CDN). 
  • Champion Infrastructure as Code (IaC) using Terraform (and optionally Pulumi/Crossplane) with GitOps workflows for repeatable, auditable deployments.
  • Lead capacity planning, cost optimization, and performance tuning across environments.

CI/CD & Release Engineering

  • Build and standardize CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps, ArgoCD) for microservices, data services, and frontends; enable blue‑green/canary releases and feature flags.
  • Drive artifact management, environment promotion, and release governance with compliance-friendly controls.

Containers, Kubernetes & Runtime

  • Operate production-grade Kubernetes (EKS/AKS), including cluster lifecycle, autoscaling, ingress, service mesh, and workload security; manage Docker/containerd images and registries. 

Reliability, Observability & Incident Management

  • Implement end-to-end monitoring, logging, and tracing (Prometheus, Grafana, ELK/EFK, CloudWatch/Log Analytics, Datadog/New Relic) with SLO/SLI error budgets. 
  • Establish on-call rotations, run postmortems, and continuously improve MTTR and change failure rate.
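The SLO error budgets mentioned above can be made concrete with a small calculation; the 99.9% target and request counts below are illustrative:

```python
def error_budget_remaining(slo_target, total_requests, failed_requests):
    """Fraction of the availability error budget still unspent.

    A 99.9% SLO over N requests tolerates 0.1% of N failures; a negative
    result means the SLO is already breached for the window."""
    budget = (1 - slo_target) * total_requests  # failures the SLO allows
    return (budget - failed_requests) / budget

# 99.9% SLO over 1,000,000 requests allows ~1,000 failures;
# 250 failures leave ~75% of the budget for the rest of the window
remaining = error_budget_remaining(0.999, 1_000_000, 250)
assert abs(remaining - 0.75) < 1e-9
```

Burn-rate alerts are then just this quantity tracked over short and long windows.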

Security & Compliance (DevSecOps)

  • Enforce cloud and container hardening, secrets management (AWS Secrets Manager / HashiCorp Vault), vulnerability scanning (Snyk/SonarQube), and policy-as-code (OPA/Conftest).
  • Partner with infosec/risk to meet BFSI regulatory expectations for DR/BCP, audits, and data protection.

Data, Networking & Edge

  • Optimize networking (DNS, TCP/IP, routing, OSI layers) and edge delivery (CloudFront/Fastly), including WAF rules and caching strategies. 
  • Support persistence layers (MySQL, Elasticsearch, DynamoDB) for performance and reliability.

Ways of Working & Leadership

  • Lead cross-functional squads (Product, Engineering, Data, Risk) and mentor junior DevOps/SREs.
  • Document runbooks, architecture diagrams, and operating procedures; drive automation-first culture.


Must‑Have Qualifications

  • 8–10 years of total experience with 5+ years hands-on in DevOps/SRE roles.
  • Strong expertise in AWS and/or Azure, Linux administration, Kubernetes, Docker, and Terraform.
  • Proven track record building CI/CD with Jenkins/GitHub Actions/Azure DevOps/ArgoCD. 
  • Solid grasp of networking fundamentals (DNS, TLS, TCP/IP, routing, load balancing).
  • Experience implementing observability stacks and responding to production incidents. 
  • Scripting in Bash/Python; ability to automate ops workflows and platform tasks. 

Good‑to‑Have / Preferred

  • Exposure to BFSI/fintech systems and compliance standards; DR/BCP planning.
  • Secrets management (Vault), policy-as-code (OPA), and security scanning (Snyk/SonarQube).
  • Experience with GitOps patterns, service tiering, and SLO/SLI design.
  • Knowledge of CDNs (CloudFront/Fastly) and edge caching/WAF rule authoring.

Education

  • Bachelor’s/Master’s in Computer Science, Information Technology, or related field (or equivalent experience).


Read more
Matchmaking platform

Matchmaking platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai
2 - 5 yrs
₹15L - ₹28L / yr
Data Science
Python
Natural Language Processing (NLP)
MySQL
Machine Learning (ML)
+15 more

Review Criteria

  • Strong Data Scientist / Machine Learning / AI Engineer profile
  • 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
  • Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
  • Hands-on experience in a minimum of 2+ use cases out of recommendation systems, image data, fraud/risk detection, price modelling, propensity models
  • Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
  • Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
  • Preferred (Company) – Must be from product companies

 

Job Specific Criteria

  • CV Attachment is mandatory
  • What's your current company?
  • Which use cases you have hands on experience?
  • Are you ok for Mumbai location (if candidate is from outside Mumbai)?
  • Reason for change (if candidate has been in current company for less than 1 year)?
  • Reason for hike (if greater than 25%)?

 

Role & Responsibilities

  • Partner with Product to spot high-leverage ML opportunities tied to business metrics.
  • Wrangle large structured and unstructured datasets; build reliable features and data contracts.
  • Build and ship models to:
  • Enhance customer experiences and personalization
  • Boost revenue via pricing/discount optimization
  • Power user-to-user discovery and ranking (matchmaking at scale)
  • Detect and block fraud/risk in real time
  • Score conversion/churn/acceptance propensity for targeted actions
  • Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
  • Design and run A/B tests with guardrails.
  • Build monitoring for model/data drift and business KPIs
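The A/B testing responsibility above typically rests on a two-proportion z-test over conversion counts; a self-contained sketch with illustrative numbers:

```python
import math

def ab_z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-score comparing conversion rates of arms A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 10% vs 12% conversion with 10,000 users per arm:
# |z| > 1.96 means significant at the 5% level (two-sided)
z = ab_z_score(1000, 10000, 1200, 10000)
assert z > 1.96
```

Guardrail metrics would be checked with the same test in the opposite direction before shipping the winner.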


Ideal Candidate

  • 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
  • Proven, hands-on success in at least two (preferably 3–4) of the following:
  • Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
  • Fraud/risk detection (severe class imbalance, PR-AUC)
  • Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
  • Propensity models (payment/churn)
  • Programming: strong Python and SQL; solid git, Docker, CI/CD.
  • Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
  • ML breadth: recommender systems, NLP or user profiling, anomaly detection.
  • Communication: clear storytelling with data; can align stakeholders and drive decisions.
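For the pricing work listed above, the arc (midpoint) elasticity formula is a common starting point; the price and demand figures below are illustrative:

```python
def arc_elasticity(p1, q1, p2, q2):
    """Arc (midpoint) price elasticity of demand between two observations:
    percentage change in quantity divided by percentage change in price,
    each measured against the midpoint."""
    dq = (q2 - q1) / ((q1 + q2) / 2)
    dp = (p2 - p1) / ((p1 + p2) / 2)
    return dq / dp

# Price rises 100 -> 120 and demand falls 1000 -> 800:
# |E| > 1, so demand is elastic at this price point
e = arc_elasticity(100, 1000, 120, 800)
assert abs(e) > 1
```

A pricing model fits such curves per segment and simulates margin versus win-rate before changing live discounts.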



Read more
Global digital transformation solutions provider.

Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Chennai, Kochi (Cochin), Pune, Trivandrum, Thiruvananthapuram
5 - 7 yrs
₹10L - ₹25L / yr
Google Cloud Platform (GCP)
Jenkins
CI/CD
Docker
Kubernetes
+15 more

Job Description

We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.


Key Responsibilities:

  • Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
  • Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
  • Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
  • Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
  • Ensure compliance with security best practices and organizational policies across GCP environments.
  • Document processes, configurations, and architectural decisions to maintain operational transparency.
  • Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.
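A minimal Declarative Pipeline of the kind the responsibilities above describe might look like the following; the stage names and commands are illustrative assumptions, not taken from this posting:

```groovy
// Hypothetical Jenkins Declarative Pipeline: build and test on every
// branch, deploy to GKE only from main
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make build' }
        }
        stage('Test') {
            steps { sh 'make test' }
        }
        stage('Deploy to GKE') {
            when { branch 'main' }   // gate deployment to the main branch
            steps { sh 'kubectl apply -f k8s/' }
        }
    }
}
```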


Mandatory Skills:

  • Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
  • Jenkins – Expertise in Declarative Pipeline creation and optimization.
  • CI/CD – Strong understanding of automated build, test, and deployment workflows.
  • Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
  • Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
  • Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks.


Preferred Skills:

  • Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
  • Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
  • Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
  • GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.


Skills

GCP, Jenkins, CI/CD, AWS


 

******

Notice period - 0 to 15 days only

Location – Pune, Trivandrum, Kochi, Chennai

Read more
Indore
2 - 6 yrs
₹4L - ₹8L / yr
MongoDB
React.js
NodeJS (Node.js)
Express
DevOps
+2 more

Key Responsibilities & Skills


Strong hands-on experience in React.js, Node.js, Express.js, MongoDB

Ability to lead and mentor a development team

Project ownership – sprint planning, code reviews, task allocation

Excellent communication skills for client interactions

Strong decision-making & problem-solving abilities


Nice-to-Have (Bonus Skills)


Experience in system architecture design

Deployment knowledge – AWS / DigitalOcean / Cloud

Understanding of CI/CD pipelines & best coding practices


Why Join InfoSparkles?


Lead from Day One

Work on modern & challenging tech projects

Excellent career growth in a leadership position

Read more
Arcitech
Navi Mumbai
5 - 7 yrs
₹12L - ₹14L / yr
Cyber Security
VAPT
Cloud Computing
CI/CD
Jenkins
+4 more

Senior DevSecOps Engineer (Cybersecurity & VAPT) - Arcitech AI



Arcitech AI, located in Mumbai's bustling Lower Parel, is a trailblazer in software and IT, specializing in software development, AI, mobile apps, and integrative solutions. Committed to excellence and innovation, Arcitech AI offers incredible growth opportunities for team members. Enjoy unique perks like weekends off and a provident fund. Our vibrant culture is friendly and cooperative, fostering a dynamic work environment that inspires creativity and forward-thinking. Join us to shape the future of technology.

Full-time

Navi Mumbai, Maharashtra, India

5+ Years Experience

1200000 - 1400000

Job Title: Senior DevSecOps Engineer (Cybersecurity & VAPT)

Location: Vashi, Navi Mumbai (On-site)

Shift: 10:00 AM - 7:00 PM

Experience: 5+ years

Salary: INR 12,00,000 - 14,00,000


Job Summary

Hiring a Senior DevSecOps Engineer with strong cloud, CI/CD, and automation skills, plus hands-on experience in Cybersecurity & VAPT, to manage deployments, secure infrastructure, and support DevSecOps initiatives.


Key Responsibilities

Cloud & Infrastructure

  • Manage deployments on AWS/Azure
  • Maintain Linux servers & cloud environments
  • Ensure uptime, performance, and scalability


CI/CD & Automation

  • Build and optimize pipelines (Jenkins, GitHub Actions, GitLab CI/CD)
  • Automate tasks using Bash/Python
  • Implement IaC (Terraform/CloudFormation)


Containerization

  • Build and run Docker containers
  • Work with basic Kubernetes concepts


Cybersecurity & VAPT

  • Perform Vulnerability Assessment & Penetration Testing
  • Identify, track, and mitigate security vulnerabilities
  • Implement hardening and support DevSecOps practices
  • Assist with firewall/security policy management


Monitoring & Troubleshooting

  • Use ELK, Prometheus, Grafana, CloudWatch
  • Resolve cloud, deployment, and infra issues


Cross-Team Collaboration

  • Work with Dev, QA, and Security for secure releases
  • Maintain documentation and best practices


Required Skills

  • AWS/Azure, Linux, Docker
  • CI/CD tools: Jenkins, GitHub Actions, GitLab
  • Terraform / IaC
  • VAPT experience + understanding of OWASP, cloud security
  • Bash/Python scripting
  • Monitoring tools (ELK, Prometheus, Grafana)
  • Strong troubleshooting & communication
Read more
Media and Entertainment Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
5 - 7 yrs
₹15L - ₹25L / yr
DevOps
Amazon Web Services (AWS)
CI/CD
Infrastructure
Scripting
+28 more

Required Skills: Advanced AWS Infrastructure Expertise, CI/CD Pipeline Automation, Monitoring, Observability & Incident Management, Security, Networking & Risk Management, Infrastructure as Code & Scripting


Criteria:

  • 5+ years of DevOps/SRE experience in cloud-native, product-based companies (B2C scale preferred)
  • Strong hands-on AWS expertise across core and advanced services (EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, VPC, IAM, ELB/ALB, Route53)
  • Proven experience designing high-availability, fault-tolerant cloud architectures for large-scale traffic
  • Strong experience building & maintaining CI/CD pipelines (Jenkins mandatory; GitHub Actions/GitLab CI a plus)
  • Prior experience running production-grade microservices deployments and automated rollout strategies (Blue/Green, Canary)
  • Hands-on experience with monitoring & observability tools (Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.)
  • Solid hands-on experience with MongoDB in production, including performance tuning, indexing & replication
  • Strong scripting skills (Bash, Shell, Python) for automation
  • Hands-on experience with IaC (Terraform, CloudFormation, or Ansible)
  • Deep understanding of networking fundamentals (VPC, subnets, routing, NAT, security groups)
  • Strong experience in incident management, root cause analysis & production firefighting

 

Description

Role Overview

Company is seeking an experienced Senior DevOps Engineer to design, build, and optimize cloud infrastructure on AWS, automate CI/CD pipelines, implement monitoring and security frameworks, and proactively identify scalability challenges. This role requires someone who has hands-on experience running infrastructure at B2C product scale, ideally in media/OTT or high-traffic applications.

 

Key Responsibilities

1. Cloud Infrastructure — AWS (Primary Focus)

  • Architect, deploy, and manage scalable infrastructure using AWS services such as EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, ELB/ALB, VPC, IAM, Route53, etc.
  • Optimize cloud cost, resource utilization, and performance across environments.
  • Design high-availability, fault-tolerant systems for streaming workloads.

 

2. CI/CD Automation

  • Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
  • Automate deployments for microservices, mobile apps, and backend APIs.
  • Implement blue/green and canary deployments for seamless production rollouts.
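One common way to implement the canary half of such rollouts at the application or gateway layer is deterministic hashing of a request attribute, so each user consistently sees the same version. A hedged sketch (the function name and 100-bucket scheme are illustrative, not a specific tool's API):

```python
import hashlib

def route_to_canary(user_id: str, canary_percent: int) -> bool:
    """Deterministically route a stable slice of users to the canary release.
    Hashing the user ID keeps each user on the same version across requests."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return bucket < canary_percent
```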

 

3. Observability & Monitoring

  • Implement logging, metrics, and alerting using tools like Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.
  • Perform proactive performance analysis to minimize downtime and bottlenecks.
  • Set up dashboards for real-time visibility into system health and user traffic spikes.
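Alerting of this kind often follows an error-budget burn-rate rule rather than a fixed threshold. A deliberately simplified sketch (the parameter defaults are illustrative; production setups typically combine several lookback windows):

```python
def error_rate_alert(total: int, errors: int,
                     slo: float = 0.999, factor: float = 10.0) -> bool:
    """Fire when the observed error rate burns the SLO error budget
    'factor' times faster than allowed."""
    if total == 0:
        return False
    budget = 1.0 - slo  # allowed error fraction, e.g. 0.1% for a 99.9% SLO
    return errors / total > budget * factor
```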

 

4. Security, Compliance & Risk Highlighting

  • Conduct frequent risk assessments and identify vulnerabilities in:
      o Cloud architecture
      o Access policies (IAM)
      o Secrets & key management
      o Data flows & network exposure
  • Implement security best practices including VPC isolation, WAF rules, firewall policies, and SSL/TLS management.

 

5. Scalability & Reliability Engineering

  • Analyze traffic patterns for OTT-specific load variations (weekends, new releases, peak hours).
  • Identify scalability gaps and propose solutions across:
      o Microservices
      o Caching layers
      o CDN distribution (CloudFront)
      o Database workloads
  • Perform capacity planning and load testing to ensure readiness for 10x traffic growth.
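The 10x readiness target usually starts from back-of-the-envelope arithmetic before any load test. A rough, illustrative sketch (the growth factor and headroom figures are assumptions a real plan would replace with measured data):

```python
import math

def instances_needed(peak_rps: float, per_instance_rps: float,
                     growth_factor: float = 10.0, headroom: float = 0.3) -> int:
    """Capacity estimate: scale current peak traffic by the planned growth
    factor, keep spare headroom, and round up to whole instances."""
    target = peak_rps * growth_factor * (1 + headroom)
    return math.ceil(target / per_instance_rps)
```

Load testing then validates whether a single instance really sustains the assumed `per_instance_rps` under realistic traffic mixes.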

 

6. Database & Storage Support

  • Administer and optimize MongoDB for high-read/low-latency use cases.
  • Design backup, recovery, and data replication strategies.
  • Work closely with backend teams to tune query performance and indexing.

 

7. Automation & Infrastructure as Code

  • Implement IaC using Terraform, CloudFormation, or Ansible.
  • Automate repetitive infrastructure tasks to ensure consistency across environments.

 

Required Skills & Experience

Technical Must-Haves

  • 5+ years of DevOps/SRE experience in cloud-native, product-based companies.
  • Strong hands-on experience with AWS (core and advanced services).
  • Expertise in Jenkins CI/CD pipelines.
  • Solid background working with MongoDB in production environments.
  • Good understanding of networking: VPCs, subnets, security groups, NAT, routing.
  • Strong scripting experience (Bash, Python, Shell).
  • Experience handling risk identification, root cause analysis, and incident management.
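The networking fundamentals listed here (VPCs, subnets, routing) can be exercised directly with Python's standard `ipaddress` module; the CIDR values below are arbitrary examples, not a prescribed layout:

```python
import ipaddress

# Carve a VPC CIDR into equally sized subnets -- e.g. one per availability
# zone -- and check which subnet a given private IP belongs to.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=20))  # sixteen /20s, 4096 addresses each

def subnet_for(ip: str):
    """Return the /20 subnet containing the address, or None if outside the VPC."""
    addr = ipaddress.ip_address(ip)
    for net in subnets:
        if addr in net:
            return net
    return None
```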

 

Nice to Have

  • Experience with OTT, video streaming, media, or any content-heavy product environments.
  • Familiarity with containers (Docker), orchestration (Kubernetes/EKS), and service mesh.
  • Understanding of CDN, caching, and streaming pipelines.

 

Personality & Mindset

  • Strong sense of ownership and urgency—DevOps is mission critical at OTT scale.
  • Proactive problem solver with ability to think about long-term scalability.
  • Comfortable working with cross-functional engineering teams.

 

Why join the company?

• Build and operate infrastructure powering millions of monthly users.

• Opportunity to shape DevOps culture and cloud architecture from the ground up.

• High-impact role in a fast-scaling Indian OTT product.

Read more
Global digital transformation solutions provider.


Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad, Chennai, Kochi (Cochin), Bengaluru (Bangalore), Trivandrum, Thiruvananthapuram
12 - 15 yrs
₹20L - ₹40L / yr
Java
DevOps
CI/CD
ReAct (Reason + Act)
React.js
+6 more

Role Proficiency:

Leverage expertise in a technology area (e.g. Java Microsoft technologies or Mainframe/legacy) to design system architecture.


Knowledge Examples:

  • Domain/Industry Knowledge: Basic knowledge of standard business processes within the relevant industry vertical and customer business domain
  1. Technology Knowledge: Demonstrates working knowledge of more than one technology area related to own area of work (e.g. Java/JEE 5+, Microsoft technologies, or Mainframe/legacy), the customer technology landscape, and multiple frameworks (Struts, JSF, Hibernate, etc.) within one technology area and their applicability. Considers low-level details such as data structures, algorithms, APIs, and libraries; best practices for one technology stack; and configuration parameters for successful deployment and for high performance within one technology stack
  2. Technology Trends: Demonstrates working knowledge of technology trends related to one technology stack and awareness of technology trends related to at least two technologies
  3. Architecture Concepts and Principles: Demonstrates working knowledge of standard architectural principles, models, patterns (e.g. SOA, N-Tier, EDA) and perspectives (e.g. TOGAF, Zachman); integration architecture, including input and output components; existing integration methodologies and topologies; source and external system non-functional requirements; data architecture; deployment architecture; and architecture governance
  4. Design Patterns, Tools and Principles: Applies specialized knowledge of design patterns, design principles and practices, and design tools. Knowledge of design documentation using tools like EA
  5. Software Development Process, Tools & Techniques: Demonstrates thorough knowledge of the end-to-end SDLC process (Agile and Traditional), SDLC methodology, programming principles, tools, and best practices (refactoring, code packaging, etc.)
  6. Project Management Tools and Techniques: Demonstrates working knowledge of project management processes (such as project scoping, requirements management, change management, risk management, quality assurance, disaster management, etc.) and tools (MS Excel, MPP, client-specific time sheets, capacity planning tools, etc.)
  7. Project Management: Demonstrates working knowledge of the project governance framework and RACI matrix, and basic knowledge of project metrics such as utilization, onsite-to-offshore ratio, span of control, fresher ratio, SLAs, and quality metrics
  8. Estimation and Resource Planning: Working knowledge of estimation and resource planning techniques (e.g. TCP estimation model) and company-specific estimation templates
  9. Working knowledge of industry knowledge management tools (such as portals and wikis) and company and customer knowledge management tools and techniques (such as workshops, classroom training, self-study, application walkthroughs, and reverse KT)
  10. Technical Standards, Documentation & Templates: Demonstrates working knowledge of various document templates and standards (such as business blueprints, design documents, and test specifications)
  11. Requirement Gathering and Analysis: Demonstrates working knowledge of requirements gathering for (non-functional) requirements; analysis for functional and non-functional requirements; analysis tools (such as functional flow diagrams, activity diagrams, blueprints, storyboards) and techniques (business analysis, process mapping, etc.); requirements management tools (e.g. MS Excel); and basic knowledge of functional requirements gathering. Specifically, identify architectural concerns and document them as part of IT requirements, including NFRs
  12. Solution Structuring: Demonstrates working knowledge of service offerings and products


Additional Comments:

Looking for a Senior Java Architect with 12+ years of experience. Key responsibilities include:

• Excellent technical background and end-to-end architecture experience to design and implement scalable, maintainable, and high-performing systems integrating front-end technologies with back-end services.

• Collaborate with front-end teams to architect React-based user interfaces that are robust, responsive, and aligned with the overall technical architecture.

• Expertise in cloud-based applications on Azure, leveraging key Azure services.

• Lead the adoption of DevOps practices, including CI/CD pipelines, automation, monitoring and logging to ensure reliable and efficient deployment cycles.

• Provide technical leadership to development teams, guiding them in building solutions that adhere to best practices, industry standards and customer requirements.

• Conduct code reviews to maintain high-quality code, and collaborate with the team to ensure code is optimized for performance, scalability, and security.

• Collaborate with stakeholders to define requirements and deliver technical solutions aligned with business goals.

• Excellent communication skills

• Mentor team members providing guidance on technical challenges and helping them grow their skill set.

• Good to have experience in GCP and retail domain.

 

Skills: DevOps, Azure, Java


Must-Haves

Java (12+ years), React, Azure, DevOps, Cloud Architecture

Strong Java architecture and design experience.

Expertise in Azure cloud services.

Hands-on experience with React and front-end integration.

Proven track record in DevOps practices (CI/CD, automation).

Notice period - 0 to 15 days only

Location: Hyderabad, Chennai, Kochi, Bangalore, Trivandrum

Excellent communication and leadership skills.

Read more
Global digital transformation solutions provider.


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
7 - 9 yrs
₹15L - ₹28L / yr
Databricks
Python
SQL
PySpark
Amazon Web Services (AWS)
+9 more

Role Proficiency:

This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.


Skill Examples:

  1. Proficiency in SQL, Python, or other programming languages used for data manipulation.
  2. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
  3. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery).
  4. Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
  5. Experience in performance tuning.
  6. Experience in data warehouse design and cost improvements.
  7. Apply and optimize data models for efficient storage retrieval and processing of large datasets.
  8. Communicate and explain design/development aspects to customers.
  9. Estimate time and resource requirements for developing/debugging features/components.
  10. Participate in RFP responses and solutioning.
  11. Mentor team members and guide them in relevant upskilling and certification.

 

Knowledge Examples:

  1. Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF, and ADLF.
  2. Proficient in SQL for analytics and windowing functions.
  3. Understanding of data schemas and models.
  4. Familiarity with domain-related data.
  5. Knowledge of data warehouse optimization techniques.
  6. Understanding of data security concepts.
  7. Awareness of patterns, frameworks, and automation practices.


 

Additional Comments:

# of Resources: 22 | Role(s): Technical Role | Location(s): India | Planned Start Date: 1/1/2026 | Planned End Date: 6/30/2026

Project Overview:

Role Scope / Deliverables: We are seeking a highly skilled Data Engineer with strong experience in Databricks, PySpark, Python, SQL, and AWS to join our data engineering team on or before the first week of December 2025.

The candidate will be responsible for designing, developing, and optimizing large-scale data pipelines and analytics solutions that drive business insights and operational efficiency.

Design, build, and maintain scalable data pipelines using Databricks and PySpark.

Develop and optimize complex SQL queries for data extraction, transformation, and analysis.

Implement data integration solutions across multiple AWS services (S3, Glue, Lambda, Redshift, EMR, etc.).

Collaborate with analytics, data science, and business teams to deliver clean, reliable, and timely datasets.

Ensure data quality, performance, and reliability across data workflows.

Participate in code reviews, data architecture discussions, and performance optimization initiatives.

Support migration and modernization efforts for legacy data systems to modern cloud-based solutions.


Key Skills:

Hands-on experience with Databricks, PySpark & Python for building ETL/ELT pipelines.

Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).

Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).

Experience with data modeling, schema design, and performance optimization.

Familiarity with CI/CD pipelines, version control (Git), and workflow orchestration (Airflow preferred).

Excellent problem-solving, communication, and collaboration skills.
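The window-function proficiency asked for above can be sketched with SQLite, which supports the same syntax as the warehouse engines named here (the table and data below are invented purely for illustration). `ROW_NUMBER()` picking the latest event per user is a common dedup pattern:

```python
import sqlite3

# In-memory database; SQLite 3.25+ supports standard window functions.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id TEXT, ts INTEGER, action TEXT)")
con.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    ("a", 1, "login"), ("a", 3, "purchase"),
    ("b", 2, "login"), ("b", 5, "logout"),
])

# Keep only the most recent event per user.
latest = con.execute("""
    SELECT user_id, action FROM (
        SELECT user_id, action,
               ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY ts DESC) AS rn
        FROM events
    ) WHERE rn = 1
    ORDER BY user_id
""").fetchall()
# latest -> [('a', 'purchase'), ('b', 'logout')]
```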

 

Skills: Databricks, PySpark & Python, SQL, AWS Services

 

Must-Haves

Python/PySpark (5+ years), SQL (5+ years), Databricks (3+ years), AWS Services (3+ years), ETL tools (Informatica, Glue, DataProc) (3+ years)

Hands-on experience with Databricks, PySpark & Python for ETL/ELT pipelines.

Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).

Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).

Experience with data modeling, schema design, and performance optimization.

Familiarity with CI/CD pipelines, Git, and workflow orchestration (Airflow preferred).


******

Notice period - Immediate to 15 days

Location: Bangalore

Read more
Lovoj

at Lovoj

LOVOJ CONTACT
Posted by LOVOJ CONTACT
Delhi
3 - 10 yrs
₹8L - ₹14L / yr
Amazon Web Services (AWS)
AWS Lambda
CI/CD
DevOps

Key Responsibilities

  • Design, implement, and maintain CI/CD pipelines for backend, frontend, and mobile applications.
  • Manage cloud infrastructure using AWS (EC2, Lambda, S3, VPC, RDS, CloudWatch, ECS/EKS).
  • Configure and maintain Docker containers and/or Kubernetes clusters.
  • Implement and maintain Infrastructure as Code (IaC) using Terraform / CloudFormation.
  • Automate build, deployment, and monitoring processes.
  • Manage code repositories using Git/GitHub/GitLab, enforce branching strategies.
  • Implement monitoring and alerting using tools like Prometheus, Grafana, CloudWatch, ELK, Splunk.
  • Ensure system scalability, reliability, and security.
  • Troubleshoot production issues and perform root-cause analysis.
  • Collaborate with engineering teams to improve deployment and development workflows.
  • Optimize infrastructure costs and improve performance.

Required Skills & Qualifications

  • 3+ years of experience in DevOps, SRE, or Cloud Engineering.
  • Strong hands-on knowledge of AWS cloud services.
  • Experience with Docker, containers, and orchestrators (ECS, EKS, Kubernetes).
  • Strong understanding of CI/CD tools: GitHub Actions, Jenkins, GitLab CI, or AWS CodePipeline.
  • Experience with Linux administration and shell scripting.
  • Strong understanding of Networking, VPC, DNS, Load Balancers, Security Groups.
  • Experience with monitoring/logging tools: CloudWatch, ELK, Prometheus, Grafana.
  • Experience with Terraform or CloudFormation (IaC).
  • Good understanding of Node.js or similar application deployments.
  • Knowledge of NGINX/Apache and load balancing concepts.
  • Strong problem-solving and communication skills.

Preferred/Good to Have

  • Experience with Kubernetes (EKS).
  • Experience with Serverless architectures (Lambda).
  • Experience with Redis, MongoDB, RDS.
  • Certification in AWS Solutions Architect / DevOps Engineer.
  • Experience with security best practices, IAM policies, and DevSecOps.
  • Understanding of cost optimization and cloud cost management.


Read more
Capace Software Private Limited
Bengaluru (Bangalore), Bhopal
5 - 10 yrs
₹4L - ₹10L / yr
Django
CI/CD
Software deployment
RESTful APIs
Flask
+8 more

Senior Python Django Developer 

Experience: Back-end development: 6 years (Required)


Location: Bangalore / Bhopal

Job Description:

We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the start-up environment.

This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.

Responsibilities:

  • Design and develop scalable, secure, and high-performance applications using Python (Django framework).
  • Architect system components, define database schemas, and optimize backend services for speed and efficiency.
  • Lead and implement design patterns and software architecture best practices.
  • Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
  • Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
  • Drive performance improvements, monitor system health, and troubleshoot production issues.
  • Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
  • Contribute to technical decision-making and mentor junior developers.
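Reconciliation, mentioned above, reduces at its core to matching internal ledger entries against gateway settlement records. A deliberately simplified sketch (real systems also handle partial captures, refunds, and settlement timing windows; amounts here are in minor units to avoid floating-point money):

```python
def reconcile(ledger: dict, gateway: dict) -> dict:
    """Match ledger entries against settlement records by transaction ID.
    Both inputs map txn_id -> amount in minor units (e.g. paise)."""
    matched, mismatched, missing = [], [], []
    for txn_id, amount in ledger.items():
        if txn_id not in gateway:
            missing.append(txn_id)        # we recorded it; gateway did not settle it
        elif gateway[txn_id] != amount:
            mismatched.append(txn_id)     # settled, but for a different amount
        else:
            matched.append(txn_id)
    unexpected = [t for t in gateway if t not in ledger]  # settled but unrecorded
    return {"matched": matched, "mismatched": mismatched,
            "missing": missing, "unexpected": unexpected}
```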

Requirements:

  • 6 to 10 years of professional backend development experience with Python and Django.
  • Strong background in payments/financial systems or FinTech applications.
  • Proven experience in designing software architecture in a microservices or modular monolith environment.
  • Experience working in fast-paced startup environments with agile practices.
  • Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
  • Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
  • Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy.
  • Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).

Preferred Skills:

  • Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
  • Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
  • Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
  • Contributions to open-source or personal finance-related projects.

Job Types: Full-time, Permanent


Schedule:

  • Day shift

Supplemental Pay:

  • Performance bonus
  • Yearly bonus

Ability to commute/relocate:

  • JP Nagar, 5th Phase, Bangalore, Karnataka or Indrapuri, Bhopal, Madhya Pradesh: Reliably commute or willing to relocate with an employer-provided relocation package (Preferred)


Read more
Technology, Information and Internet Company


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹20L - ₹65L / yr
Data Structures
CI/CD
Microservices
Architecture
Cloud Computing
+19 more

Required Skills: CI/CD Pipeline, Data Structures, Microservices, Determining overall architectural principles, frameworks and standards, Cloud expertise (AWS, GCP, or Azure), Distributed Systems


Criteria:

  • Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
  • Must be strong in one core backend language: Node.js, Go, Java, or Python.
  • Deep understanding of distributed systems, caching, high availability, and microservices architecture.
  • Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
  • Strong command over system design, data structures, performance tuning, and scalable architecture
  • Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.


Description

What This Role Is All About

We’re looking for a Backend Tech Lead who’s equally obsessed with architecture decisions and clean code, someone who can zoom out to design systems and zoom in to fix that one weird memory leak. You’ll lead a small but sharp team, drive the backend roadmap, and make sure our systems stay fast, lean, and battle-tested.

 

What You’ll Own

● Architect backend systems that handle India-scale traffic without breaking a sweat.

● Build and evolve microservices, APIs, and internal platforms that our entire app depends on.

● Guide, mentor, and uplevel a team of backend engineers—be the go-to technical brain.

● Partner with Product, Data, and Infra to ship features that are reliable and delightful.

● Set high engineering standards—clean architecture, performance, automation, and testing.

● Lead discussions on system design, performance tuning, and infra choices.

● Keep an eye on production like a hawk: metrics, monitoring, logs, uptime.

● Identify gaps proactively and push for improvements instead of waiting for fires.

 

What Makes You a Great Fit

● 6+ years of backend experience; 1–2 years leading engineers or owning major systems.

● Strong in one core language (Node.js / Go / Java / Python) — pick your sword.

● Deep understanding of distributed systems, caching, high-availability, and microservices.

● Hands-on with AWS/GCP, Docker, Kubernetes, CI/CD pipelines.

● You think data structures and system design are not interviews — they’re daily tools.

● You write code that future-you won’t hate.

● Strong communication and a let’s figure this out attitude.
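Caching, one of the fundamentals listed above, is easy to sketch in miniature. An illustrative in-process TTL cache (the injectable `now` parameter exists only to make expiry behaviour deterministic in tests; this is the pattern that sits in front of a datastore before reaching for Redis or Memcached):

```python
import time

class TTLCache:
    """Tiny in-process cache with per-entry expiry."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value, now=None):
        now = time.monotonic() if now is None else now
        self._store[key] = (value, now + self.ttl)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        entry = self._store.get(key)
        if entry is None or now >= entry[1]:
            self._store.pop(key, None)  # evict expired entries lazily
            return None
        return entry[0]
```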

 

Bonus Points If You Have

● Built or scaled consumer apps with millions of DAUs.

● Experimented with event-driven architecture, streaming systems, or real-time pipelines.

● Love startups and don’t mind wearing multiple hats.

● Experience on logging/monitoring tools like Grafana, Prometheus, ELK, OpenTelemetry.

 

Why the Company Might Be Your Best Move

● Work on products used by real people every single day.

● Ownership from day one—your decisions will shape our core architecture.

● No unnecessary hierarchy; direct access to founders and senior leadership.

● A team that cares about quality, speed, and impact in equal measure.

● Build for Bharat — complex constraints, huge scale, real impact.


Read more
bootcoding
Shruti Choubey
Posted by Shruti Choubey
Remote, Nagpur
3 - 8 yrs
₹6L - ₹15L / yr
OutSystems
RESTful APIs
Git
CI/CD
Data modeling
+2 more

Key Responsibilities

  • Design, develop, and maintain scalable applications using the OutSystems platform.
  • Build modern Reactive Web and Mobile applications aligned with business and technical requirements.
  • Implement integrations with REST APIs, databases, and external systems.
  • Collaborate with architects, tech leads, and cross-functional teams for smooth deployments.
  • Create reusable, maintainable components following OutSystems best practices.
  • Participate in code reviews, unit testing, debugging, and performance optimization.
  • Ensure adherence to scalability, security, and deployment automation guidelines.
  • Stay updated on new OutSystems capabilities and contribute to continuous improvement.


Read more
Phi Commerce

at Phi Commerce

Ariba Khan
Posted by Ariba Khan
Pune
3 - 9 yrs
Upto ₹22L / yr (varies)
Java
CI/CD
Jenkins
Linux/Unix
Selenium
+1 more

About Phi Commerce

Founded in 2015, Phi Commerce has created PayPhi, a ground-breaking omni-channel payment processing platform which processes digital payments at doorstep, online & in-store across a variety of form factors such as cards, net-banking, UPI, Aadhaar, BharatQR, wallets, NEFT, RTGS, and NACH. The company was established with the objective of digitizing white spaces in payments and going beyond routine payment processing.


Phi Commerce's PayPhi Digital Enablement suite has been developed with the mission of empowering very large untapped blue-ocean sectors dominated by offline payment modes such as cash & cheque to accept digital payments.


The core team comprises industry veterans with complementary skill sets and nearly 100 years of global experience with noteworthy players such as Mastercard, Euronet, ICICI Bank, Opus Software and Electra Card Services.


Awards & Recognitions:

The company's innovative work has been recognized at prestigious forums in the short span of its existence:


  • Certification of Recognition as StartUp by Department of Industrial Policy and Promotion.
  • Winner of the "Best Payment Gateway" of the year award at Payments & Cards Awards 2018
  • Winner at Payments & Cards Awards 2017 in 3 categories: Best Startup Of The Year, Best Online Payment Solution Of The Year - Consumer, and Best Online Payment Solution Of The Year - Merchant
  • Winner of NPCI IDEATHON on Blockchain in Payments
  • Shortlisted by Govt. of Maharashtra as top 100 start-ups pan-India across 8 sectors


About the role:

As an SDET, you will work closely with the development, product, and QA teams to ensure the delivery of high-quality, reliable, and scalable software. You will be responsible for creating and maintaining automated test suites, designing testing frameworks, and identifying and resolving software defects. The role will also involve continuous improvement of the test process and promoting best practices in software development and testing.


Key Responsibilities:


  • Develop, implement, and maintain automated test scripts for validating software functionality and performance.
  • Design and develop testing frameworks and tools to improve the efficiency and effectiveness of automated testing.
  • Collaborate with developers, product managers, and QA engineers to identify test requirements and create effective test plans.
  • Write and execute unit, integration, regression, and performance tests to ensure high-quality code.
  • Troubleshoot and debug issues identified during testing, working with developers to resolve them in a timely manner.
  • Conduct code reviews to ensure code quality, maintainability, and testability.
  • Work with CI/CD pipelines to integrate automated testing into the development process.
  • Continuously evaluate and improve testing strategies, identifying areas for automation and optimization.
  • Monitor the quality of releases by tracking test coverage, defect trends, and other quality metrics.
  • Ensure that all tests are documented, maintainable, and reusable for future software releases.
  • Stay up-to-date with the latest trends, tools, and technologies in the testing and automation space.


Skills and Qualifications:

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 3+ years of experience as an SDET, software engineer, or quality engineer with a focus on test automation.
  • Strong experience in automated testing frameworks and tools (e.g., Selenium, Appium, JUnit, TestNG, Cucumber).
  • Proficiency in programming languages, particularly Java.
  • Experience in designing and implementing test automation for web applications, APIs, and mobile applications.
  • Strong understanding of software testing methodologies and processes (e.g., Agile, Scrum).
  • Excellent problem-solving skills and attention to detail.
  • Good communication and collaboration skills, with the ability to work effectively in a team.
  • Knowledge of performance testing and load testing tools is a plus (e.g., JMeter, LoadRunner).
  • Experience with test management tools (e.g., TestRail, Jira).
  • Knowledge of databases and ability to write SQL queries to validate test data.
  • Experience in API testing and knowledge of RESTful web services.
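API testing of RESTful services, as listed above, typically layers schema and value checks on top of status-code assertions. A hedged sketch (the field names and status values are hypothetical, not any specific gateway's contract):

```python
def validate_payment_response(payload: dict) -> list:
    """Check the shape of a (hypothetical) REST payment response,
    returning a list of validation errors; empty means valid."""
    errors = []
    for field, typ in (("txn_id", str), ("status", str), ("amount", int)):
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], typ):
            errors.append(f"wrong type for {field}")
    if payload.get("status") not in {"SUCCESS", "PENDING", "FAILED", None}:
        errors.append("unknown status value")
    return errors
```

In a real suite these assertions would run against the JSON body returned by the API client, alongside SQL checks that the same transaction landed in the database.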
Read more
Phi Commerce

at Phi Commerce

Nikita Sinha
Posted by Nikita Sinha
Pune
11 - 15 yrs
Upto ₹32L / yr (varies)
Linux/Unix
SQL
Shell Scripting
Amazon Web Services (AWS)
CI/CD
+2 more

The Production Infrastructure Manager is responsible for overseeing and maintaining the infrastructure that powers our payment gateway systems in a high-availability production environment. This role requires deep technical expertise in cloud platforms, networking, and security, along with strong leadership capability to guide a team of infrastructure engineers. You will ensure the system’s reliability, performance, and compliance with regulatory standards while driving continuous improvement.


Key Responsibilities:

Infrastructure Management

  • Manage and optimize infrastructure for payment gateway systems to ensure high availability, reliability, and scalability.
  • Oversee daily operations of production environments, including AWS cloud services, load balancers, databases, and monitoring systems.
  • Implement and maintain infrastructure automation, provisioning, configuration management, and disaster recovery strategies.
  • Develop and maintain capacity planning, monitoring, and backup mechanisms to support peak transaction periods.
  • Oversee regular patching, updates, and version control to minimize vulnerabilities.
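The patching and capacity duties above lend themselves to simple automation. Below is a minimal, hedged Python sketch of a patch-window compliance check; the instance inventory, field names, and the 30-day window are illustrative assumptions, not details from this posting (a real version would pull inventory from a tool such as AWS Systems Manager):

```python
from datetime import date, timedelta

# Hypothetical inventory; in practice this would come from an inventory API
# such as AWS Systems Manager patch-compliance reports.
INSTANCES = [
    {"id": "i-0a1", "last_patched": date(2024, 1, 5)},
    {"id": "i-0b2", "last_patched": date(2024, 3, 1)},
    {"id": "i-0c3", "last_patched": None},  # never patched
]

def overdue_instances(instances, today, max_age_days=30):
    """Return IDs of instances not patched within the allowed window."""
    cutoff = today - timedelta(days=max_age_days)
    return [
        inst["id"]
        for inst in instances
        if inst["last_patched"] is None or inst["last_patched"] < cutoff
    ]

print(overdue_instances(INSTANCES, today=date(2024, 3, 10)))
# → ['i-0a1', 'i-0c3']
```

A report like this would typically feed a ticketing or alerting workflow rather than be read by hand.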

Team Leadership

  • Lead and mentor a team of infrastructure engineers and administrators.
  • Provide technical direction to ensure efficient and effective implementation of infrastructure solutions.

Cross-Functional Collaboration

  • Work closely with development, security, and product teams to ensure infrastructure aligns with business needs and regulatory requirements (PCI-DSS, GDPR).
  • Ensure infrastructure practices meet industry standards and security requirements (PCI-DSS, ISO 27001).

Monitoring & Incident Management

  • Monitor infrastructure performance using tools like Prometheus, Grafana, Datadog, etc.
  • Conduct incident response, root cause analysis, and post-mortems to prevent recurring issues.
  • Manage and execute on-call duties, ensuring timely resolution of infrastructure-related issues.
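The monitoring duties above usually reduce to concrete alerting math. As one hedged example, here is an SLO error-budget burn-rate check; the 99.9% SLO and the 14.4x paging threshold are illustrative conventions (the threshold follows common SRE practice for a one-hour window), not requirements from this posting:

```python
def burn_rate(errors, requests, slo=0.999):
    """Ratio of the observed error rate to the error budget allowed by the SLO.

    A burn rate of 1.0 means the error budget is being consumed exactly at
    the rate that would exhaust it by the end of the SLO period.
    """
    if requests == 0:
        return 0.0
    error_budget = 1.0 - slo            # e.g. 0.1% for a 99.9% SLO
    return (errors / requests) / error_budget

# Page only when a short window burns budget far faster than sustainable;
# 14.4x over one hour is a commonly cited paging threshold.
rate = burn_rate(errors=12, requests=10_000)  # 0.12% observed error rate
print(round(rate, 2), rate > 14.4)
# → 1.2 False
```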

Documentation

  • Maintain comprehensive documentation, including architecture diagrams, processes, and disaster recovery plans.

Skills and Qualifications

Required

  • Bachelor’s degree in Computer Science, IT, or equivalent experience.
  • 8+ years of experience managing production infrastructure in high-availability, mission-critical environments (fintech or payment gateways preferred).
  • Expertise in AWS cloud environments.
  • Strong experience with Infrastructure as Code (IaC) tools such as Terraform or CloudFormation.
  • Deep understanding of:
      • Networking (load balancers, firewalls, VPNs, distributed systems)
      • Database systems (SQL/NoSQL), HA & DR strategies
      • Automation tools (Ansible, Chef, Puppet) and containerization/orchestration (Docker, Kubernetes)
      • Security best practices, encryption, vulnerability management, PCI-DSS compliance
  • Experience with monitoring tools (Prometheus, Grafana, Datadog).
  • Strong analytical and problem-solving skills.
  • Excellent communication and leadership capabilities.

Preferred

  • Experience in fintech/payment industry with regulatory exposure.
  • Ability to operate effectively under pressure and ensure service continuity.


Global digital transformation solutions provider.


Agency job
via Peak Hire Solutions by Dhara Thakkar
Thiruvananthapuram, Chennai, Pune
4 - 7 yrs
₹10L - ₹20L / yr
C#
Test Automation (QA)
Manual testing
Play Framework
SQL

Role Proficiency:

Performs tests in strict compliance, independently guides other testers, and assists test leads.


Additional Comments:

Position Title: Automation + Manual Tester

Primary Skills: Playwright, xUnit, Allure Report, Page Object Model, .NET, C#, Database Queries

Secondary Skills: Git, JIRA, Manual Testing

Experience: 4 to 5 years


ESSENTIAL FUNCTIONS AND BASIC DUTIES

1. Leadership in Automation Strategy:
  • Assess the feasibility and scope of automation efforts to ensure they align with project timelines and requirements.
  • Identify opportunities for process improvements and automation within the software development life cycle (SDLC).

2. Automation Test Framework Development:
  • Design, develop, and implement reusable test automation frameworks for various testing phases (unit, integration, functional, performance, etc.).
  • Ensure the automation frameworks integrate well with CI/CD pipelines and other development tools.
  • Maintain and optimize test automation scripts and frameworks for continuous improvement.

3. Team Management:
  • Lead and mentor a team of automation engineers, ensuring they follow best practices, write efficient test scripts, and develop scalable automation solutions.
  • Conduct regular performance evaluations and provide constructive feedback.
  • Facilitate knowledge-sharing sessions within the team.

4. Collaboration with Cross-functional Teams:
  • Work closely with development, QA, and operations teams to ensure proper implementation of automated testing and automation practices.
  • Collaborate with business analysts, product owners, and project managers to understand business requirements and translate them into automated test cases.

5. Continuous Integration & Delivery (CI/CD):
  • Ensure that automated tests are integrated into the CI/CD pipelines to facilitate continuous testing.
  • Identify and resolve issues related to the automation processes within the CI/CD pipeline.

6. Test Planning and Estimation:
  • Contribute to the test planning phase by identifying key automation opportunities.
  • Estimate the effort and time required for automating test cases and other automation tasks.

7. Test Reporting and Metrics:
  • Monitor automation test results and generate detailed reports on test coverage, defects, and progress.
  • Analyze test results to identify trends, bottlenecks, or issues in the automation process and make necessary improvements.

8. Automation Tools Management:
  • Evaluate, select, and manage automation tools and technologies that best meet the needs of the project.
  • Ensure that the automation tools used align with the overall project requirements and help achieve optimal efficiency.

9. Test Environment and Data Management:
  • Set up and maintain the test environments needed for automation.
  • Ensure automation scripts work across multiple environments, including staging, testing, and production.

10. Risk Management & Issue Resolution:
  • Proactively identify risks associated with the automation efforts and provide solutions or mitigation strategies.
  • Troubleshoot issues in the automation scripts, framework, and infrastructure to ensure minimal downtime and quick issue resolution.

11. Develop and Maintain Automated Tests: Write and maintain automated scripts for different testing levels, including regression, functional, and integration tests.

12. Bug Identification and Tracking: Report, track, and manage defects identified through automation testing to ensure quick resolution.

13. Improve Test Coverage: Identify gaps in test coverage and develop additional test scripts to improve test comprehensiveness.

14. Automation Documentation: Create and maintain detailed documentation for test automation processes, scripts, and frameworks.

15. Quality Assurance: Ensure that all automated testing activities meet quality standards, contributing to the delivery of a high-quality software product.

16. Stakeholder Communication: Regularly update project stakeholders about automation progress, risks, and areas for improvement.


REQUIRED KNOWLEDGE

1. Automation Tools Expertise: Proficiency in tools like Playwright and Allure reports, and their integration with CI/CD pipelines.

2. Programming Languages: Strong knowledge of languages such as .NET and test frameworks like xUnit.

3. Version Control: Experience using Git for script management and collaboration.

4. Test Automation Frameworks: Ability to design scalable, reusable frameworks for different types of tests (functional, integration, etc.).

5. Leadership and Mentoring: Lead and mentor automation teams, ensuring adherence to best practices and continuous improvement.

6. Problem-Solving: Strong troubleshooting and analytical skills to identify and resolve automation issues quickly.

7. Collaboration and Communication: Excellent communication skills for working with cross-functional teams and presenting test results.

8. Time Management: Ability to estimate, prioritize, and manage automation tasks to meet project deadlines.

9. Quality Focus: Strong commitment to improving software quality, test coverage, and automation efficiency.


Skills: xUnit, Allure report, Playwright, C#

Forbes Advisor

Nikita Sinha
Posted by Nikita Sinha
Chennai
11 - 16 yrs
Upto ₹50L / yr (Varies)
Google Webmaster Tools
CI/CD
Cloud Computing
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

A DevSecOps Staff Engineer integrates security into DevOps practices, designing secure CI/CD pipelines, building and automating secure cloud infrastructure, and ensuring compliance across development, operations, and security teams.


Responsibilities

• Design, build, and maintain secure CI/CD pipelines utilizing DevSecOps principles and practices to increase automation and reduce human involvement in the process.

• Integrate SAST, DAST, SCA, and similar tools within pipelines to enable automated application building, testing, securing, and deployment.

• Implement security controls for cloud platforms (AWS, GCP), including IAM, container security (EKS/ECS), and data encryption for services like S3 or BigQuery.

• Automate vulnerability scanning, monitoring, and compliance processes by collaborating with DevOps and development teams to minimize risks in deployment pipelines.

• Suggest architecture improvements and recommend process improvements.

• Review cloud deployment architectures and implement required security controls.

• Mentor other engineers on security practices and processes.
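The SAST/DAST/SCA integration described above amounts to automated gates in the pipeline. As a toy illustration only, here is a minimal static sweep for hard-coded AWS-style credentials; the two patterns are illustrative assumptions and nothing like a full scanner ruleset:

```python
import re

# AWS access key IDs are 20 characters starting with a known prefix.
# These patterns are illustrative, not a complete secrets ruleset.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\b(AKIA|ASIA)[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
}

def scan(text):
    """Return (rule_name, match) pairs for every suspected secret in text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        hits.extend((name, m.group(0)) for m in pattern.finditer(text))
    return hits

sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\n'
print(scan(sample))
# A CI gate would fail the build whenever scan() returns a non-empty list.
```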


Requirements

• Bachelor's degree, preferably in CS or a related field, or equivalent experience

• 10+ years of overall industry experience; AWS Certified Security - Specialty

• Must have implementation experience using security tools and processes related to SAST, DAST, and pen testing

• AWS-specific: 5+ years' experience using a broad range of AWS technologies (e.g. EC2, RDS, ELB, S3, VPC, CloudWatch) to develop and maintain an AWS-based cloud solution, with an emphasis on best-practice cloud security

• Experienced with the CI/CD tool chain (GitHub Actions, Packages, Jenkins, etc.)

• Passionate about solving security challenges and staying informed of available and emerging security threats and security technologies

• Must be familiar with the OWASP Top 10 Security Risks and Controls

• Good skills in at least one scripting language: Python, Bash

• Good knowledge of Kubernetes, Docker Swarm, or other cluster management software

• Willing to work in shifts as required


Good to Have

• AWS Certified DevOps Engineer

• Observability: experience with system monitoring tools (e.g. CloudWatch, New Relic)

• Experience with Terraform/Ansible/Chef/Puppet

• Operating systems: Windows and Linux system administration


Perks:

● Day off on the 3rd Friday of every month (one long weekend each month)

● Monthly Wellness Reimbursement Program to promote health and well-being

● Monthly Office Commutation Reimbursement Program

● Paid paternity and maternity leaves

AdTech Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹50L - ₹75L / yr
Ansible
Terraform
skill iconAmazon Web Services (AWS)
Platform as a Service (PaaS)
CI/CD

ROLE & RESPONSIBILITIES:

We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.


KEY RESPONSIBILITIES:

1.     Cloud Security (AWS)-

  • Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
  • Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
  • Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
  • Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
  • Ensure encryption of data at rest/in transit across all cloud services.
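A small policy-as-code check in the spirit of the least-privilege item above might flag wildcard grants in IAM policy documents. This is a hedged sketch only; real tooling such as AWS IAM Access Analyzer or Checkov is far more thorough, and the sample policy is a made-up illustration:

```python
def wildcard_findings(policy):
    """Flag Allow statements that use '*' in Action or Resource."""
    findings = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        if stmt.get("Effect") != "Allow":
            continue
        for field in ("Action", "Resource"):
            values = stmt.get(field, [])
            if isinstance(values, str):
                values = [values]
            # Flag a bare '*' or a service-wide 'service:*' grant.
            if any(v == "*" or v.endswith(":*") for v in values):
                findings.append((i, field))
    return findings

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::logs/*"},
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
    ],
}
print(wildcard_findings(policy))
# → [(1, 'Action'), (1, 'Resource')]
```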

 

2.     DevOps Security (IaC, CI/CD, Kubernetes, Linux)-

Infrastructure as Code & Automation Security:

  • Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
  • Enforce misconfiguration scanning and automated remediation.

CI/CD Security:

  • Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
  • Implement secure build, artifact signing, and deployment workflows.
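The artifact-signing workflow above rests on content digests. Below is a minimal sketch of digest computation and verification; real pipelines would use a signing tool such as Sigstore cosign or GPG rather than a bare hash, and the artifact bytes here are a placeholder:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 hex digest, as used to pin artifacts and container layers."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected: str) -> bool:
    """Check a downloaded artifact against the digest recorded at build time."""
    return digest(data) == expected

artifact = b"app-build-1234"
recorded = digest(artifact)  # stored alongside the artifact at build time

print(verify(artifact, recorded))          # → True  (unmodified artifact)
print(verify(artifact + b"x", recorded))   # → False (tampered artifact)
```

The same comparison underlies image pinning by digest in Kubernetes manifests, where a tag can move but a digest cannot.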

Containers & Kubernetes:

  • Harden Docker images, private registries, runtime policies.
  • Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
  • Apply CIS Benchmarks for Kubernetes and Linux.

Monitoring & Reliability:

  • Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
  • Ensure audit logging across cloud/platform layers.


3.     MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-

Pipeline & Workflow Security:

  • Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
  • Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.

ML Platform Security:

  • Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
  • Control model access, artifact protection, model registry security, and ML metadata integrity.

Data Security:

  • Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
  • Enforce data versioning security, lineage tracking, PII protection, and access governance.

ML Observability:

  • Implement drift detection (data drift/model drift), feature monitoring, audit logging.
  • Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
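The drift-detection item above can be illustrated with a simple mean-shift z-test between a training baseline and live feature values. This is a sketch under stated assumptions only; production systems typically use tests such as PSI or Kolmogorov-Smirnov, and the sample numbers and the |z| > 3 threshold are illustrative:

```python
import statistics

def mean_shift_z(baseline, live):
    """Z-score of the live-window mean against the baseline distribution."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    n = len(live)
    return (statistics.mean(live) - mu) / (sigma / n ** 0.5)

baseline = [10.0, 11.0, 9.0, 10.5, 9.5, 10.2, 9.8, 10.1]
live_ok = [10.1, 9.9, 10.3, 9.7]
live_drifted = [13.0, 12.6, 13.4, 12.8]

# |z| above ~3 is a common, if crude, alerting threshold.
print(abs(mean_shift_z(baseline, live_ok)) > 3.0)       # → False
print(abs(mean_shift_z(baseline, live_drifted)) > 3.0)  # → True
```

An alert like this would typically be emitted as a metric and routed through Grafana/Prometheus/CloudWatch, matching the integration bullet above.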


4.     Network & Endpoint Security-

  • Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
  • Conduct vulnerability assessments, penetration test coordination, and network segmentation.
  • Secure remote workforce connectivity and internal office networks.


5.     Threat Detection, Incident Response & Compliance-

  • Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
  • Build security alerts, automated threat detection, and incident workflows.
  • Lead incident containment, forensics, RCA, and remediation.
  • Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
  • Maintain security policies, procedures, RRPs (Runbooks), and audits.


IDEAL CANDIDATE:

  • 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
  • Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
  • Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
  • Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
  • Strong Linux security (CIS hardening, auditing, intrusion detection).
  • Proficiency in Python, Bash, and automation/scripting.
  • Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
  • Understanding of microservices, API security, serverless security.
  • Strong understanding of vulnerability management, penetration testing practices, and remediation plans.


EDUCATION:

  • Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
  • Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
Tradelab Technologies
Aakanksha Yadav
Posted by Aakanksha Yadav
Mumbai
2 - 5 yrs
₹7L - ₹18L / yr
Docker
Kubernetes
CI/CD
Jenkins

Job Title: DevOps Engineer

Location: Mumbai

Experience: 2–4 Years

Department: Technology

About InCred

InCred is a new-age financial services group leveraging technology and data science to make lending quick, simple, and hassle-free. Our mission is to empower individuals and businesses by providing easy access to financial services while upholding integrity, innovation, and customer-centricity. We operate across personal loans, education loans, SME financing, and wealth management, driving financial inclusion and socio-economic progress.

Role Overview

As a DevOps Engineer, you will play a key role in automating, scaling, and maintaining our cloud infrastructure and CI/CD pipelines. You will collaborate with development, QA, and operations teams to ensure high availability, security, and performance of our systems that power millions of transactions.

Key Responsibilities

  • Cloud Infrastructure Management: Deploy, monitor, and optimize infrastructure on AWS (EC2, EKS, S3, VPC, IAM, RDS, Route53) or similar platforms.
  • CI/CD Automation: Build and maintain pipelines using tools like Jenkins, GitLab CI, or similar.
  • Containerization & Orchestration: Manage Docker and Kubernetes clusters for scalable deployments.
  • Infrastructure as Code: Implement and maintain IaC using Terraform or equivalent tools.
  • Monitoring & Logging: Set up and manage tools like Prometheus, Grafana, ELK stack for proactive monitoring.
  • Security & Compliance: Ensure systems adhere to security best practices and regulatory requirements.
  • Performance Optimization: Troubleshoot and optimize system performance, network configurations, and application deployments.
  • Collaboration: Work closely with developers and QA teams to streamline release cycles and improve deployment efficiency.

Required Skills

  • 2–4 years of hands-on experience in DevOps roles.
  • Strong knowledge of Linux administration and shell scripting (Bash/Python).
  • Experience with AWS services and cloud architecture.
  • Proficiency in CI/CD tools (Jenkins, GitLab CI) and version control systems (Git).
  • Familiarity with Docker, Kubernetes, and container orchestration.
  • Knowledge of Terraform or similar IaC tools.
  • Understanding of networking, security, and performance tuning.
  • Exposure to monitoring tools (Prometheus, Grafana) and log management.

Preferred Qualifications

  • Experience in financial services or fintech environments.
  • Knowledge of microservices architecture and enterprise-grade SaaS setups.
  • Familiarity with compliance standards in BFSI (Banking & Financial Services Industry).

Why Join InCred?

  • Culture: High-performance, ownership-driven, and innovation-focused environment.
  • Growth: Opportunities to work on cutting-edge tech and scale systems for millions of users.
  • Rewards: Competitive compensation, ESOPs, and performance-based incentives.
  • Impact: Be part of a mission-driven organization transforming India’s credit landscape.


Tradelab Software Private Limited
Mumbai
3 - 5 yrs
₹10L - ₹15L / yr
Docker
Kubernetes
CI/CD

About Us:

Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.


What We Expect:

• You should already be exceptional at Golang. If you need hand-holding, this isn’t the place for you.

• You thrive on challenges, not on perks or financial rewards.

• You measure success by your own growth, not external validation.

• Taking calculated risks excites you—you’re here to build, break, and learn.

• You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.

• You understand the stakes—milliseconds can make or break trades, and precision is everything.


What You Will Do:

• Develop and optimize high-performance backend systems in Golang for trading platforms and financial services.

• Architect low-latency, high-throughput microservices that push the boundaries of speed and efficiency.

• Build event-driven, fault-tolerant systems that can handle massive real-time data streams.

• Own your work—no babysitting, no micromanagement.

• Work alongside equally driven engineers who expect nothing less than brilliance.

• Learn faster than you ever thought possible.


Must-Have Skills:

• Proven expertise in Golang (if you need to prove yourself, this isn’t the role for you).

• Deep understanding of concurrency, memory management, and system design.

• Experience with trading, market data processing, or low-latency systems.

• Strong knowledge of distributed systems, message queues (Kafka, RabbitMQ), and real-time processing.

• Hands-on with Docker, Kubernetes, and CI/CD pipelines.

• A portfolio of work that speaks louder than a resume.


Nice-to-Have Skills:

• Past experience in fintech.

• Contributions to open-source Golang projects.

• A history of building something impactful from scratch.

• Understanding of FIX protocol, WebSockets, and streaming APIs.

Virtana

Krutika Devadiga
Posted by Krutika Devadiga
Pune
4 - 10 yrs
Best in industry
Java
Kubernetes
Go Programming (Golang)
Python
Apache Kafka

Senior Software Engineer 

Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.  

We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products. 


Work Location: Pune/ Chennai


Job Type: Hybrid

 

Role Responsibilities: 

  • The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform 
  • Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform. 
  • Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.  
  • Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation 
  • Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution 
  • Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery 

 

Required Qualifications:    

  • 7+ years of progressive experience with back-end development in a client-server application development environment focused on Systems Management, Systems Monitoring, and Performance Management software.
  • Deep experience in public cloud environments (Google Cloud and/or AWS) using Kubernetes and other distributed managed services such as Kafka.
  • Experience with CI/CD and cloud-based software development and delivery.
  • Deep experience with integrated infrastructure platforms and with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
  • Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with one of these languages is required.
  • Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent 
  • Highly effective verbal and written communication skills and ability to lead and participate in multiple projects 
  • Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities 
  • Must be results-focused, team-oriented and with a strong work ethic 

 

Desired Qualifications: 

  • Prior experience with other virtualization platforms like OpenShift is a plus 
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus 
  • Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills 
  • Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus 

  

About Virtana: Virtana delivers the industry’s broadest and deepest Observability Platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more. 

  

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade. 

  

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success. 

AdTech Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹0.1L - ₹0.1L / yr
Python
MLOps
Apache Airflow
Apache Spark
AWS CloudFormation

Review Criteria

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python for pipeline & automation development
  • 4+ years of experience in AWS cloud, with recent companies
  • Company: product companies preferred; exceptions for service-company candidates with strong MLOps + AWS depth

 

Preferred

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.

 

Job Specific Criteria

  • CV Attachment is mandatory
  • Please provide CTC Breakup (Fixed + Variable)?
  • Are you okay for F2F round?
  • Has the candidate filled the Google form?

 

Role & Responsibilities

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
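The Airflow/MWAA pipeline work above is, at its core, DAG orchestration. As a dependency-free sketch of that idea (no Airflow import here, just the standard library's graphlib; the step names are hypothetical), a toy pipeline ordering:

```python
from graphlib import TopologicalSorter

# Hypothetical ML pipeline steps and their upstream dependencies,
# mirroring how an Airflow DAG wires tasks together.
PIPELINE = {
    "extract": set(),
    "validate": {"extract"},
    "train": {"validate"},
    "evaluate": {"train"},
    "deploy": {"evaluate", "validate"},
}

# static_order() yields the tasks in an execution-safe order; a scheduler
# like Airflow additionally runs independent tasks in parallel.
order = list(TopologicalSorter(PIPELINE).static_order())
print(order)
```

In Airflow the same wiring would be expressed with operators and `>>` dependencies, but the scheduling guarantee is the same topological one.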

 

Ideal Candidate

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
      • Compute/Orchestration: EKS, ECS, EC2, Lambda
      • Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
      • Workflow: MWAA/Airflow, Step Functions
      • Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.

 

Education:

  • Master’s degree in computer science, Machine Learning, Data Engineering, or related field.

 

Media and Entertainment Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
4 - 8 yrs
₹20L - ₹45L / yr
TypeScript
MongoDB
Microservices
MVC Framework
Google Cloud Platform (GCP)

Required Skills: TypeScript, MVC, Cloud experience (Azure, AWS, etc.), MongoDB, Express.js, Nest.js

 

Criteria:

Need candidates from Growing startups or Product based companies only

1. 4–8 years’ experience in backend engineering

2. Minimum 2+ years hands-on experience with:

  • TypeScript
  • Express.js / Nest.js

3. Strong experience with MongoDB (or MySQL / PostgreSQL / DynamoDB)

4. Strong understanding of system design & scalable architecture

5. Hands-on experience in:

  • Event-driven architecture / Domain-driven design
  • MVC / Microservices

6. Strong in automated testing (especially integration tests)

7. Experience with CI/CD pipelines (GitHub Actions or similar)

8. Experience managing production systems

9. Solid understanding of performance, reliability, observability

10. Cloud experience (AWS preferred; GCP/Azure acceptable)

11. Strong coding standards — Clean Code, code reviews, refactoring

 

Description 

About the opportunity

We are looking for an exceptional Senior Software Engineer to join our Backend team. This is a unique opportunity to join a fast-growing company where you will get to solve real customer and business problems, shape the future of a product built for Bharat and build the engineering culture of the team. You will have immense responsibility and autonomy to push the boundaries of engineering to deliver scalable and resilient systems.

As a Senior Software Engineer, you will be responsible for shipping innovative features at breakneck speed, designing the architecture, mentoring other engineers on the team and pushing for a high bar of engineering standards like code quality, automated testing, performance, CI/CD, etc. If you are someone who loves solving problems for customers, technology, the craft of software engineering, and the thrill of building startups, we would like to talk to you.

 

What you will be doing

  • Build and ship features in our Node.js codebase (now migrating to TypeScript) that directly impact user experience and help move the top and bottom line of the business.
  • Collaborate closely with our product, design and data teams to build innovative features and deliver a world-class product to our customers. At company, product managers don’t “tell” engineering what to build; we all collaborate on how to solve a problem for our customers and the business, and engineering plays a big part in that.
  • Design scalable platforms that empower our product and marketing teams to rapidly experiment.
  • Own the quality of our products by writing automated tests, reviewing code, making systems observable and resilient to failures.
  • Drive code quality and pay down architectural debt by continuous analysis of our codebases and systems, and continuous refactoring.
  • Architect our systems for faster iterations, releasability, scalability and high availability using practices like Domain Driven Design, Event Driven Architecture, Cloud Native Architecture and Observability.
  • Set the engineering culture with the rest of the team by defining how we should work as a team, set standards for quality, and improve the speed of engineering execution.
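
The Event Driven Architecture mentioned above can be sketched in a few lines of TypeScript using Node's built-in `EventEmitter`; the domain event and handler behavior here are illustrative only, and a production system would typically use a broker such as Kafka or SQS rather than an in-process emitter:

```typescript
import { EventEmitter } from "node:events";

// Illustrative domain event; names are hypothetical, not from the posting.
interface OrderPlaced {
  orderId: string;
  amount: number;
}

const bus = new EventEmitter();
const handled: string[] = [];

// Each subscriber reacts independently; the publisher knows nothing about them.
bus.on("order.placed", (e: OrderPlaced) => {
  handled.push(`invoice created for ${e.orderId}`);
});
bus.on("order.placed", (e: OrderPlaced) => {
  handled.push(`notification queued for order of ${e.amount}`);
});

bus.emit("order.placed", { orderId: "o-1", amount: 499 });
console.log(handled); // both handlers ran, in subscription order
```

Decoupling publishers from subscribers this way is what lets new behavior (invoicing, notifications, analytics) be added without touching the code that emits the event.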

 

The role could be ideal for you if you

  • Have 4–8 years of experience in backend engineering, with at least 2 years of production experience in TypeScript, Express.js (or another popular framework such as Nest.js) and MongoDB (or another popular database such as MySQL, PostgreSQL or DynamoDB).
  • Are well versed with one or more architectures and design patterns such as MVC, Domain Driven Design, CQRS, Event Driven Architecture, and Cloud Native Architecture.
  • Are experienced in writing automated tests (especially integration tests) and in Continuous Integration. At company, engineers own quality, so writing automated tests is crucial to the role.
  • Have experience managing production infrastructure on public cloud providers (AWS, GCP, Azure, etc.). Bonus: experience with Kubernetes.
  • Have experience with observability techniques such as code instrumentation for metrics, tracing and logging.
  • Care deeply about code quality, code reviews, software architecture (think Object Oriented Programming, Clean Code, etc.), scalability and reliability. Bonus: experience applying these in past roles.
  • Understand the importance of shipping fast in a startup environment and constantly look for ingenious ways to do so.
  • Collaborate well with everyone on the team. We communicate a lot and don’t hesitate to get quick feedback from other team members sooner rather than later.
  • Can take ownership of goals and deliver them with high accountability.

 

Don’t hesitate to try out new technologies. At company, nobody is limited to a role. Every engineer on our team is an expert in at least one technology but often ventures into adjacent technologies like React.js, Flutter, Data Platforms, AWS and Kubernetes. If this doesn’t excite you, you will not enjoy working at company. Bonus: if you have experience in adjacent technologies like AWS (or any public cloud provider), GitHub Actions (or CircleCI), Kubernetes, and Infrastructure as Code (Terraform, Pulumi, etc.).

 

 

Global Digital Transformation Solutions Provider


Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 12 yrs
₹15L - ₹30L / yr
Machine Learning (ML)
Amazon Web Services (AWS)
Kubernetes
ECS
Amazon Redshift
+14 more

Core Responsibilities:

  • The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency.
  • Model Development: Build models using algorithms and architectures spanning traditional statistical methods to deep learning, including the use of LLMs in modern frameworks.
  • Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
  • Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
  • System Integration: Integrate models into existing systems and workflows.
  • Model Deployment: Deploy models to production environments and monitor performance.
  • Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
  • Continuous Improvement: Identify areas for improvement in model performance and systems.

 

Skills:

  • Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
  • Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka and ChaosSearch logs for troubleshooting; other tech touch points include ScyllaDB (similar to Bigtable), OpenSearch, and the Neo4j graph database.
  • Model Deployment and Monitoring: MLOps experience deploying ML models to production environments, plus knowledge of model monitoring and performance evaluation.

 

Required experience:

  • Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline, with the ability to analyze gaps and recommend/implement improvements
  • AWS Cloud Infrastructure: Familiarity with S3, EC2, and Lambda, and with using these services in ML workflows
  • AWS Data: Redshift, Glue
  • Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)

 

Skills: AWS, AWS Cloud, Amazon Redshift, EKS

 

Must-Haves

Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker

Notice period: 0 to 15 days only

Hybrid work mode: 3 days in office, 2 days at home
