
50+ DevOps Jobs in India

Apply to 50+ DevOps Jobs on CutShort.io. Find your next job, effortlessly. Browse DevOps Jobs and apply today!

Zolvit (formerly Vakilsearch)

Posted by Lakshmi J
Chennai
2 - 4 yrs
₹10L - ₹16L / yr
DevOps
Linux administration
Unix administration
Shell Scripting
CI/CD
+5 more

We are looking for a passionate DevOps Engineer who can support deployments and monitor the performance of our Production, QE, and Staging environments. Applicants should have a strong understanding of UNIX internals and should be able to clearly articulate how they work. Knowledge of shell scripting and security aspects is a must. Any experience with infrastructure as code is a big plus. The key responsibility of the role is to manage deployments, security, and support of business solutions. Experience with applications built on Postgres, ELK, NodeJS, NextJS, and Ruby on Rails is a huge plus. At VakilSearch, experience doesn’t matter; the passion to produce change does.



Responsibilities and Accountabilities:

  • As part of the DevOps team, you will be responsible for configuration, optimization, documentation, and support of the infra components of VakilSearch’s product, which are hosted in cloud services and an on-prem facility
  • Design and build tools and frameworks that support deploying and managing our platform; explore new tools, technologies, and processes to improve speed, efficiency, and scalability
  • Support and troubleshoot scalability, high availability, performance, monitoring, backup, and restore across different environments
  • Manage resources in a cost-effective, innovative manner, including assisting subordinates in effective use of resources and tools
  • Resolve incidents as escalated from monitoring tools and the Business Development team
  • Implement and follow security guidelines, both policy and technology, to protect our data
  • Identify the root cause of issues, develop long-term solutions to fix recurring issues, and document them
  • Willingness to perform production operations activities, including at night when required
  • Ability to automate recurring tasks with scripts to increase velocity and quality (see the sketch after this list)
  • Ability to manage and deliver multiple project phases at the same time
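
As a flavor of the script automation mentioned above, here is a minimal, hypothetical sketch of a recurring cleanup task in Python; the log directory and retention window are placeholders, not VakilSearch specifics:

    # Hypothetical sketch: prune application logs older than a retention window.
    import time
    from pathlib import Path

    LOG_DIR = Path("/var/log/myapp")  # placeholder path
    MAX_AGE_DAYS = 14                 # placeholder retention window

    cutoff = time.time() - MAX_AGE_DAYS * 86400
    for log_file in LOG_DIR.glob("*.log"):
        if log_file.stat().st_mtime < cutoff:
            log_file.unlink()
            print(f"removed {log_file}")

Scheduled via cron or a systemd timer, a small script like this is exactly the kind of recurring-task automation the role describes.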

I Qualification(s): 

  • Experience in working with Linux Server, DevOps tools, and Orchestration tools 
  • Linux, AWS, GCP, Azure, CompTIA, and other certifications are a value-add 

II Experience Required in DevOps Aspects:

  • Length of Experience: 1-4 years of experience
  • Nature of Experience: 
  • Experience in cloud deployments, Linux administration [kernel tuning is a value-add], Linux clustering, AWS, virtualization, and networking concepts [Azure, GCP are a value-add]
  • Experience with CI/CD deployment solutions like Jenkins and GitHub Actions [release management is a value-add]
  • Hands-on experience with configuration management / IaC tools like Chef, Terraform, and CloudFormation [Ansible & Puppet are a value-add]
  • Administering, configuring, and utilizing monitoring and alerting tools like Prometheus, Grafana, Loki, ELK, Zabbix, Datadog, etc.
  • Experience with containerization and orchestration tools like Docker and Kubernetes [Docker Swarm is a value-add]
  • Good scripting skills in at least one interpreted language - Shell/Bash, Ruby, Python, or Perl
  • Experience in database applications like PostgreSQL, MongoDB & MySQL [DataOps]
  • Good with version control and source code management systems like Git and GitHub
  • Experience in serverless [Lambda / GCP Cloud Functions / Azure Functions]
  • Experience with web servers like Nginx and Apache
  • Knowledge of Redis, RabbitMQ, ELK, and REST APIs [MLOps tools are a value-add]
  • Knowledge of Puma, Unicorn, Gunicorn & Yarn
  • Hands-on VMware ESXi/XenCenter deployment experience is a value-add
  • Experience in implementing and troubleshooting TCP/IP networks, VPNs, load balancing & web application firewalls
  • Deploying, configuring, and maintaining Linux server systems on-premises and off-premises
  • Code-quality tools like SonarQube are a value-add
  • Test-automation tools like Selenium, JMeter, and JUnit are a value-add
  • Experience in Heroku and OpenStack is a value-add 
  • Experience in identifying inbound and outbound threats and resolving them
  • Knowledge of CVEs and applying patches for the OS, Ruby gems, Node, and Python packages
  • Documenting security fixes for future use
  • Establish cross-team collaboration with security built into the software development lifecycle 
  • Forensics and root cause analysis skills are mandatory 
  • Weekly sanity checks of the on-prem and off-prem environments 

 

III Skill Set & Personality Traits required:

  • An understanding of programming languages such as Ruby, NodeJS, ReactJS, Perl, Java, Python, and PHP
  • Good written and verbal communication skills to facilitate efficient and effective interaction with peers, partners, vendors, and customers


IV Age Group: 21 – 36 Years


V Cost to the Company: As per industry standards


Bengaluru (Bangalore)
5 - 10 yrs
₹25L - ₹50L / yr
NodeJS (Node.js)
React.js
Python
Java
Data engineering
+10 more

Job Title : Senior Software Engineer (Full Stack — AI/ML & Data Applications)

Experience : 5 to 10 Years

Location : Bengaluru, India

Employment Type : Full-Time | Onsite


Role Overview :

We are seeking a Senior Full Stack Software Engineer with strong technical leadership and hands-on expertise in AI/ML, data-centric applications, and scalable full-stack architectures.

In this role, you will design and implement complex applications integrating ML/AI models, lead full-cycle development, and mentor engineering teams.


Mandatory Skills :

Full Stack Development (React/Angular/Vue + Node.js/Python/Java), Data Engineering (Spark/Kafka/ETL), ML/AI Model Integration (TensorFlow/PyTorch/scikit-learn), Cloud & DevOps (AWS/GCP/Azure, Docker, Kubernetes, CI/CD), SQL/NoSQL Databases (PostgreSQL/MongoDB).


Key Responsibilities :

  • Architect, design, and develop scalable full-stack applications for data and AI-driven products.
  • Build and optimize data ingestion, processing, and pipeline frameworks for large datasets.
  • Deploy, integrate, and scale ML/AI models in production environments (see the sketch after this list).
  • Drive system design, architecture discussions, and API/interface standards.
  • Ensure engineering best practices across code quality, testing, performance, and security.
  • Mentor and guide junior developers through reviews and technical decision-making.
  • Collaborate cross-functionally with product, design, and data teams to align solutions with business needs.
  • Monitor, diagnose, and optimize performance issues across the application stack.
  • Maintain comprehensive technical documentation for scalability and knowledge-sharing.
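
To make the model-deployment bullet above concrete, here is a minimal, hypothetical sketch of serving a pre-trained scikit-learn model behind a FastAPI endpoint; the model file, feature shape, and route are assumptions, not details from the posting:

    # Hypothetical sketch: serve a pre-trained scikit-learn model with FastAPI.
    import joblib
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()
    model = joblib.load("model.joblib")  # assumed: a model trained and saved elsewhere

    class Features(BaseModel):
        values: list[float]  # assumed flat numeric feature vector

    @app.post("/predict")
    def predict(features: Features):
        # scikit-learn expects a 2-D array: one row per sample
        prediction = model.predict([features.values])
        return {"prediction": prediction.tolist()}

Run locally with "uvicorn main:app"; in a stack like this one, the same service would then be containerized and deployed through the Docker/Kubernetes and CI/CD tooling named in this posting.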

Required Skills & Experience :

  • Education : B.E./B.Tech/M.E./M.Tech in Computer Science, Data Science, or equivalent fields.
  • Experience : 5+ years in software development with at least 2+ years in a senior or lead role.
  • Full Stack Proficiency :
  • Front-end : React / Angular / Vue.js
  • Back-end : Node.js / Python / Java
  • Data Engineering : Experience with data frameworks such as Apache Spark, Kafka, and ETL pipeline development.
  • AI/ML Expertise : Practical exposure to TensorFlow, PyTorch, or scikit-learn and deploying ML models at scale.
  • Databases : Strong knowledge of SQL & NoSQL systems (PostgreSQL, MongoDB) and warehousing tools (Snowflake, BigQuery).
  • Cloud & DevOps : Working knowledge of AWS, GCP, or Azure; containerization & orchestration (Docker, Kubernetes); CI/CD; MLflow/SageMaker is a plus.
  • Visualization : Familiarity with modern data visualization tools (D3.js, Tableau, Power BI).

Soft Skills :

  • Excellent communication and cross-functional collaboration skills.
  • Strong analytical mindset with structured problem-solving ability.
  • Self-driven with ownership mentality and adaptability in fast-paced environments.

Preferred Qualifications (Bonus) :

  • Experience deploying distributed, large-scale ML or data-driven platforms.
  • Understanding of data governance, privacy, and security compliance.
  • Exposure to domain-driven data/AI use cases in fintech, healthcare, retail, or e-commerce.
  • Experience working in Agile environments (Scrum/Kanban).
  • Active open-source contributions or a strong GitHub technical portfolio.
Heaven Designs

Posted by Reshika Mendiratta
Remote only
2yrs+
Up to ₹12L / yr (varies)
Python
Django
RESTful APIs
DevOps
CI/CD
+8 more

Backend Engineer (Python / Django + DevOps)


Company: SurgePV (A product by Heaven Designs Pvt. Ltd.)


About SurgePV

SurgePV is an AI-first solar design software built from more than a decade of hands-on experience designing and engineering thousands of solar installations at Heaven Designs. After working with nearly every solar design tool in the market, we identified major gaps in speed, usability, and intelligence—particularly for rooftop solar EPCs.

Our vision is to build the most powerful and intuitive solar design platform for rooftop installers, covering fast PV layouts, code-compliant engineering, pricing, proposals, and financing in a single workflow. SurgePV enables small and mid-sized solar EPCs to design more systems, close more deals, and accelerate the clean energy transition globally.

As SurgePV scales, we are building a robust backend platform to support complex geometry, pricing logic, compliance rules, and workflow automation at scale.


Role Overview

We are seeking a Backend Engineer (Python / Django + DevOps) to own and scale SurgePV’s core backend systems. You will be responsible for designing, building, and maintaining reliable, secure, and high-performance services that power our solar design platform.

This role requires strong ownership—you will work closely with the founders, frontend engineers, and product team to make architectural decisions and ensure the platform remains fast, observable, and scalable as global usage grows.


Key Responsibilities

  • Design, develop, and maintain backend services and REST APIs that power PV design, pricing, and core product workflows (see the sketch after this list).
  • Collaborate with the founding team on system architecture, including authentication, authorization, billing, permissions, integrations, and multi-tenant design.
  • Build secure, scalable, and observable systems with structured logging, metrics, alerts, and rate limiting.
  • Own DevOps responsibilities for backend services, including Docker-based containerization, CI/CD pipelines, and production deployments.
  • Optimize PostgreSQL schemas, migrations, indexes, and queries for computation-heavy and geospatial workloads.
  • Implement caching strategies and performance optimizations where required.
  • Integrate with third-party APIs such as CRMs, financing providers, mapping platforms, and satellite or irradiance data services.
  • Write clean, maintainable, well-tested code and actively participate in code reviews to uphold engineering quality.
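
To give a flavor of the Django REST Framework work referenced above, here is a minimal, hypothetical read-only endpoint; the PricingRule model and its fields are invented for illustration and are not SurgePV’s actual schema:

    # Hypothetical DRF sketch: expose a simple pricing-rule resource.
    from rest_framework import serializers, viewsets

    from .models import PricingRule  # assumed Django model with the fields below

    class PricingRuleSerializer(serializers.ModelSerializer):
        class Meta:
            model = PricingRule
            fields = ["id", "region", "price_per_watt", "updated_at"]

    class PricingRuleViewSet(viewsets.ReadOnlyModelViewSet):
        # At scale, indexes and select_related would matter; kept minimal here.
        queryset = PricingRule.objects.order_by("-updated_at")
        serializer_class = PricingRuleSerializer

Wired into a DRF router, this yields list and detail endpoints; the real work in this role would layer auth, pagination, and caching on top.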

Required Skills & Qualifications (Must-Have)

  • 2–5 years of experience as a Backend Engineer.
  • Strong proficiency in Python and Django / Django REST Framework.
  • Solid computer science fundamentals, including data structures, algorithms, and basic distributed systems concepts.
  • Proven experience designing and maintaining REST APIs in production environments.
  • Hands-on DevOps experience, including:
  • Docker and containerized services
  • CI/CD pipelines (GitHub Actions, GitLab CI, or similar)
  • Deployments on cloud platforms such as AWS, GCP, Azure, or DigitalOcean
  • Strong working knowledge of PostgreSQL, including schema design, migrations, indexing, and query optimization.
  • Strong debugging skills and a habit of instrumenting systems using logs, metrics, and alerts.
  • Ownership mindset with the ability to take systems from spec → implementation → production → iteration.

Good-to-Have Skills

  • Experience working in early-stage startups or building 0→1 products.
  • Familiarity with Kubernetes or other container orchestration tools.
  • Experience with Infrastructure as Code (Terraform, Pulumi).
  • Exposure to monitoring and observability stacks such as Prometheus, Grafana, ELK, or similar tools.
  • Prior exposure to solar, CAD/geometry, geospatial data, or financial/pricing workflows.

What We Offer

  • Real-world impact: every feature you ship helps accelerate solar adoption on real rooftops.
  • Opportunity to work across backend engineering, DevOps, integrations, and performance optimization.
  • A mission-driven, fast-growing product focused on sustainability and clean energy.
Bengaluru (Bangalore)
8 - 9 yrs
₹20L - ₹25L / yr
DevOps

Job Title: Senior DevOps Engineer

Experience: 8+ Years

Joining: Immediate Joiner

Location: Bangalore (Onsite/Hybrid – as applicable)

Job Description:

We are looking for a highly experienced Senior DevOps Engineer with 8+ years of hands-on experience to join our team immediately. The ideal candidate will be responsible for designing, implementing, and managing scalable, secure, and highly available infrastructure.

Key Responsibilities:

  • Design, build, and maintain CI/CD pipelines for application deployment
  • Manage cloud infrastructure (AWS/Azure/GCP) and optimize cost and performance
  • Automate infrastructure using Infrastructure as Code (Terraform/CloudFormation); see the sketch after this list
  • Manage containerized applications using Docker and Kubernetes
  • Monitor system performance, availability, and security
  • Collaborate closely with development, QA, and security teams
  • Troubleshoot production issues and perform root cause analysis
  • Ensure high availability, disaster recovery, and backup strategies
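
For the IaC bullet flagged above, one common automation shape is driving CloudFormation from a script. Here is a minimal, hypothetical sketch using boto3; the stack name, region, and template file are placeholders:

    # Hypothetical sketch: create a CloudFormation stack and wait for completion.
    import boto3

    cf = boto3.client("cloudformation", region_name="ap-south-1")  # placeholder region

    with open("network.yaml") as f:  # placeholder template file
        template_body = f.read()

    cf.create_stack(
        StackName="demo-network",
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],  # required if the template creates IAM resources
    )
    cf.get_waiter("stack_create_complete").wait(StackName="demo-network")
    print("stack created")

In practice the same pattern runs inside a CI/CD job, so stack changes are reviewed and applied like any other code change.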

Required Skills:

  • 8+ years of experience in DevOps / Site Reliability Engineering
  • Strong expertise in Linux/Unix administration
  • Hands-on experience with AWS / Azure / GCP
  • CI/CD tools: Jenkins, GitLab CI, GitHub Actions, Azure DevOps
  • Containers & orchestration: Docker, Kubernetes
  • Infrastructure as Code: Terraform, CloudFormation, Ansible
  • Monitoring tools: Prometheus, Grafana, ELK, CloudWatch
  • Strong scripting skills (Bash, Python)
  • Experience with security best practices and compliance

Good to Have:

  • Experience with microservices architecture
  • Knowledge of DevSecOps practices
  • Cloud certifications (AWS/Azure/GCP)
  • Experience in high-traffic production environments

Why Join Us:

  • Opportunity to work on scalable, enterprise-grade systems
  • Collaborative and growth-oriented work environment
  • Competitive compensation and benefits
  • Immediate joiners preferred.
Bengaluru (Bangalore)
3 - 6 yrs
₹10L - ₹15L / yr
Google Cloud Platform (GCP)
DevOps

Job Description

We are seeking a DevOps Engineer with 3+ years of experience and strong expertise in Google Cloud Platform (GCP) to design, automate, and manage scalable cloud infrastructure. The role involves building CI/CD pipelines, implementing Infrastructure as Code, and ensuring high availability, security, and performance of cloud-native applications.

Key Responsibilities

  • Design, deploy, and manage GCP infrastructure using best practices
  • Implement Infrastructure as Code (IaC) using Terraform
  • Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI
  • Manage containerized workloads using Docker and Kubernetes (GKE); see the sketch after this list
  • Configure and manage GCP networking (VPCs, Subnets, VPN, Cloud Interconnect, Load Balancers, Firewall rules)
  • Implement monitoring and logging using Cloud Monitoring and Cloud Logging
  • Ensure high availability, scalability, and security of applications
  • Troubleshoot production issues and perform root cause analysis
  • Collaborate with development and product teams to improve deployment and reliability
  • Optimize cloud cost, performance, and security
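
To make the GKE bullet flagged above concrete, here is a minimal, hypothetical sketch using the official Kubernetes Python client; the deployment name and namespace are placeholders, and credentials are assumed to come from the local kubeconfig:

    # Hypothetical sketch: scale a GKE deployment via the Kubernetes Python client.
    from kubernetes import client, config

    config.load_kube_config()  # assumes kubeconfig already points at the GKE cluster
    apps = client.AppsV1Api()

    # Scale a (hypothetical) deployment to 3 replicas.
    apps.patch_namespaced_deployment_scale(
        name="web-frontend",
        namespace="default",
        body={"spec": {"replicas": 3}},
    )

    for d in apps.list_namespaced_deployment(namespace="default").items:
        print(d.metadata.name, d.status.ready_replicas)

Day to day this kind of change usually lands through Terraform or a CI/CD pipeline rather than an ad-hoc script, but the API surface is the same.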

Required Skills & Qualifications

  • 3+ years of experience as a DevOps / Cloud Engineer
  • Strong hands-on experience with Google Cloud Platform (GCP)
  • Experience with Terraform for GCP resource provisioning
  • CI/CD experience with Jenkins / GitHub Actions
  • Hands-on experience with Docker and Kubernetes (GKE)
  • Good understanding of Linux and shell scripting
  • Knowledge of cloud networking concepts (TCP/IP, DNS, Load Balancers)
  • Experience with monitoring, logging, and alerting

Good to Have

  • Experience with Hybrid or Multi-cloud architectures
  • Knowledge of DevSecOps practices
  • Experience with SRE concepts
  • GCP certifications (Associate Cloud Engineer / Professional DevOps Engineer)

Why Join Us

  • Work on modern GCP-based cloud infrastructure
  • Opportunity to design and own end-to-end DevOps pipelines
  • Learning and growth opportunities in cloud and automation


PGAGI
Posted by Javeriya Shaik
Remote only
0 - 1 yrs
₹1 - ₹2 / mo
Docker
Kubernetes
Prometheus
Grafana
DevOps
+1 more

About PGAGI:

At PGAGI, we believe in a future where AI and human intelligence coexist in harmony, creating a world that is smarter, faster, and better. We are not just building AI; we are shaping a future where AI is a fundamental and positive force for businesses, societies, and the planet.


Position Overview:

PGAGI Consultancy Pvt. Ltd. is seeking a proactive and motivated DevOps Intern with around 3-6 months of hands-on experience to support our AI model deployment and infrastructure initiatives. This role is ideal for someone looking to deepen their expertise in DevOps practices tailored to AI/ML environments, including CI/CD automation, cloud infrastructure, containerization, and monitoring.


Key Responsibilities:

AI Model Deployment & Integration

  • Assist in containerizing and deploying AI/ML models into production using Docker.
  • Support integration of models into existing systems and APIs.

Infrastructure Management

  • Help manage cloud and on-premise environments to ensure scalability and consistency.
  • Work with Kubernetes for orchestration and environment scaling.

CI/CD Pipeline Automation

  • Collaborate on building and maintaining automated CI/CD pipelines (e.g., GitHub Actions, Jenkins).
  • Implement basic automated testing and rollback mechanisms.

Hosting & Web Environment Management

  • Assist in managing hosting platforms, web servers, and CDN configurations.
  • Support DNS, load balancer setups, and ensure high availability of web services.

Monitoring, Logging & Optimization

  • Set up and maintain monitoring/logging tools like Prometheus and Grafana.
  • Participate in troubleshooting and resolving performance bottlenecks.
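
For the Prometheus/Grafana bullet just above, instrumenting a service usually starts with exposing metrics. A minimal, hypothetical sketch using the prometheus_client library (the metric names and the handle() stub are invented):

    # Hypothetical sketch: expose request metrics for Prometheus to scrape.
    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    REQUESTS = Counter("inference_requests_total", "Total inference requests")
    LATENCY = Histogram("inference_latency_seconds", "Inference latency in seconds")

    @LATENCY.time()
    def handle():
        REQUESTS.inc()
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real model inference

    if __name__ == "__main__":
        start_http_server(8000)  # metrics served at http://localhost:8000/metrics
        while True:  # loop forever so the demo keeps producing samples
            handle()

Prometheus then scrapes the /metrics endpoint, and Grafana dashboards are built on top of the resulting series.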

Security & Compliance

  • Apply basic DevSecOps practices including security scans and access control implementations.
  • Follow security and compliance checklists under supervision.

Cost & Resource Management

  • Monitor resource usage and suggest cost optimization strategies in cloud environments.

Documentation

  • Maintain accurate documentation for deployment processes and incident responses.

Continuous Learning & Innovation

  • Suggest improvements to workflows and tools.
  • Stay updated with the latest DevOps and AI infrastructure trends.


Requirements:

  • Around 6 months of experience in a DevOps or related technical role (internship or professional).
  • Basic understanding of Docker, Kubernetes, and CI/CD tools like GitHub Actions or Jenkins.
  • Familiarity with cloud platforms (AWS, GCP, or Azure) and monitoring tools (e.g., Prometheus, Grafana).
  • Exposure to scripting languages (e.g., Bash, Python) is a plus.
  • Strong problem-solving skills and eagerness to learn.
  • Good communication and documentation abilities.

Compensation

  • Joining Bonus: INR 2,500 one-time bonus upon joining.
  • Monthly Stipend: Base stipend of INR 8,000 per month, with the potential to increase up to INR 20,000 based on performance evaluations.
  • Performance-Based Pay Scale: Eligibility for monthly performance-based bonuses, rewarding exceptional project contributions and teamwork.
  • Additional Benefits: Access to professional development opportunities, including workshops, tech talks, and mentoring sessions.


Ready to kick-start your DevOps journey in a dynamic AI-driven environment? Apply now

#Devops #Docker #Kubernetes #DevOpsIntern

AI Powered Software Development (Product Company)

Agency job
via Recruiting Bond by Pavan Kumar
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Hyderabad, Pune, Mumbai, India
3 - 8 yrs
₹15L - ₹30L / yr
DevOps
Reliability engineering
CloudOps
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+20 more

🚀 RECRUITING BOND HIRING


Role: CLOUD OPERATIONS & MONITORING ENGINEER - (THE GUARDIAN OF UPTIME)


⚡ THIS IS NOT A MONITORING ROLE


THIS IS A COMMAND ROLE

You don’t watch dashboards.

You control outcomes.


You don’t react to incidents.

You eliminate them before they escalate.


This role powers an AI-driven SaaS + IoT platform where:

---> Uptime is non-negotiable

---> Latency is hunted

---> Failures are never allowed to repeat


Incidents don’t grow.

Problems don’t hide.

Uptime is enforced.


🧠 WHAT YOU’LL OWN

(Real Work. Real Impact.)


🔍 Total Observability

---> Real-time visibility across cloud, application, database & infrastructure

---> High-signal dashboards (Grafana + cloud-native tools)

---> Performance trends tracked before growth breaks systems

🚨 Smart Alerting (No Noise)

---> Alerts that fire only when action is required

---> Zero false positives. Zero alert fatigue

Right signal → right person → right time


⚙ Automation as a Weapon

---> End-to-end automation of operational tasks

---> Standardized logging, metrics & alerting

---> Systems that scale without human friction


🧯 Incident Command & Reliability

---> First responder for critical incidents (on-call rotation)

---> Root cause analysis across network, app, DB & storage

Fix fast — then harden so it never breaks the same way again

📘 Operational Excellence

---> Battle-tested runbooks

---> Documentation that actually works under pressure

Every incident → a stronger platform


🛠️ TECHNOLOGIES YOU’LL MASTER

☁ Cloud: AWS | Azure | Google Cloud

📊 Monitoring: Grafana | Metrics | Traces | Logs

📡 Alerting: Production-grade alerting systems

🌐 Networking: DNS | Routing | Load Balancers | Security

🗄 Databases: Production systems under real pressure

⚙ DevOps: Automation | Reliability Engineering


🎯 WHO WE’RE LOOKING FOR

Engineers who take uptime personally.


You bring:

---> 3+ years in Cloud Ops / DevOps / SRE

---> Live production SaaS experience

---> Deep AWS / Azure / GCP expertise

---> Strong monitoring & alerting experience

---> Solid networking fundamentals

---> Calm, methodical incident response

---> Bonus (Highly Preferred):

---> B2B SaaS + IoT / hybrid platforms

---> Strong automation mindset

---> Engineers who think in systems, not tickets


💼 JOB DETAILS

📍 Bengaluru

🏢 Hybrid (WFH)

💰 (Final CTC depends on experience & interviews)


🌟 WHY THIS ROLE?

Most cloud teams manage uptime. We weaponize it.

Your work won’t just keep systems running — it will keep customers confident, operations flawless, and competitors wondering how it all works so smoothly.


📩 APPLY / REFER : 🔗 Know someone who lives for reliability, observability & cloud excellence?

AI Powered - Software Development (Product Company)

Agency job
via Recruiting Bond by Pavan Kumar
India, Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Hyderabad, Mumbai, Pune
3 - 7 yrs
₹15L - ₹30L / yr
DevOps
Reliability engineering
CloudOps
Cloud Operations
Monitoring
+25 more

🚀 RECRUITING BOND HIRING


Role: CLOUD OPERATIONS & MONITORING ENGINEER - (THE GUARDIAN OF UPTIME)


⚡ THIS IS NOT A MONITORING ROLE


THIS IS A COMMAND ROLE

You don’t watch dashboards.

You control outcomes.


You don’t react to incidents.

You eliminate them before they escalate.


This role powers an AI-driven SaaS + IoT platform where:

---> Uptime is non-negotiable

---> Latency is hunted

---> Failures are never allowed to repeat


Incidents don’t grow.

Problems don’t hide.

Uptime is enforced.


🧠 WHAT YOU’LL OWN

(Real Work. Real Impact.)


🔍 Total Observability

---> Real-time visibility across cloud, application, database & infrastructure

---> High-signal dashboards (Grafana + cloud-native tools)

---> Performance trends tracked before growth breaks systems

🚨 Smart Alerting (No Noise)

---> Alerts that fire only when action is required

---> Zero false positives. Zero alert fatigue

Right signal → right person → right time


⚙ Automation as a Weapon

---> End-to-end automation of operational tasks

---> Standardized logging, metrics & alerting

---> Systems that scale without human friction


🧯 Incident Command & Reliability

---> First responder for critical incidents (on-call rotation)

---> Root cause analysis across network, app, DB & storage

Fix fast — then harden so it never breaks the same way again

📘 Operational Excellence

---> Battle-tested runbooks

---> Documentation that actually works under pressure

Every incident → a stronger platform


🛠️ TECHNOLOGIES YOU’LL MASTER

☁ Cloud: AWS | Azure | Google Cloud

📊 Monitoring: Grafana | Metrics | Traces | Logs

📡 Alerting: Production-grade alerting systems

🌐 Networking: DNS | Routing | Load Balancers | Security

🗄 Databases: Production systems under real pressure

⚙ DevOps: Automation | Reliability Engineering


🎯 WHO WE’RE LOOKING FOR

Engineers who take uptime personally.


You bring:

---> 3+ years in Cloud Ops / DevOps / SRE

---> Live production SaaS experience

---> Deep AWS / Azure / GCP expertise

---> Strong monitoring & alerting experience

---> Solid networking fundamentals

---> Calm, methodical incident response

---> Bonus (Highly Preferred):

---> B2B SaaS + IoT / hybrid platforms

---> Strong automation mindset

---> Engineers who think in systems, not tickets


💼 JOB DETAILS

📍 Bengaluru

🏢 Hybrid (WFH)

💰 (Final CTC depends on experience & interviews)


🌟 WHY THIS ROLE?

Most cloud teams manage uptime. We weaponize it.

Your work won’t just keep systems running — it will keep customers confident, operations flawless, and competitors wondering how it all works so smoothly.


📩 APPLY / REFER : 🔗 Know someone who lives for reliability, observability & cloud excellence?

NeoGenCode Technologies Pvt Ltd
Posted by Ritika Verma
Gurugram
5 - 10 yrs
₹15L - ₹20L / yr
DevOps
Bare metal
Physical server
On-premises
AWS
+2 more

Job Title: Senior DevOps Engineer

Location: Gurgaon – Sector 39

Work Mode: 5 Days Onsite

Experience: 5+ Years

About the Role

We are looking for an experienced Senior DevOps Engineer to build, manage, and maintain highly reliable, scalable, and secure infrastructure. The role involves deploying product updates, handling production issues, implementing customer integrations, and leading DevOps best practices across teams.

Key Responsibilities

  • Manage and maintain production-grade infrastructure ensuring high availability and performance.
  • Deploy application updates, patches, and bug fixes across environments.
  • Handle Level-2 support and resolve escalated production issues.
  • Perform root cause analysis and implement preventive solutions.
  • Build automation tools and scripts to improve system reliability and efficiency.
  • Develop monitoring, logging, alerting, and reporting systems.
  • Ensure secure deployments following data encryption and cybersecurity best practices.
  • Collaborate with development, product, and QA teams for smooth releases.
  • Lead and mentor a small DevOps team (3–4 engineers).

Core Focus Areas

Server Setup & Management (60%)

  • Hands-on management of bare-metal servers.
  • Server provisioning, configuration, and lifecycle management.
  • Network configuration including redundancy, bonding, and performance tuning.

Queue Systems – Kafka / RabbitMQ (15%)

  • Implementation and management of message queues for distributed systems.
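
As a taste of the queue work above, here is a minimal, hypothetical producer using the kafka-python library; the broker address, topic, and payload are placeholders:

    # Hypothetical sketch: publish JSON events to a Kafka topic.
    import json

    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",  # placeholder broker address
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    producer.send("sms-events", {"msisdn": "9198XXXXXXX", "status": "delivered"})
    producer.flush()  # block until outstanding messages are sent

A RabbitMQ equivalent with pika follows the same shape: connect, declare the queue/exchange, publish, and confirm delivery.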

Storage Systems – SAN / NAS (15%)

  • Setup and management of enterprise storage systems.
  • Ensure backup, recovery, and data availability.

Database Knowledge (5%)

  • Working experience with Redis, MySQL/PostgreSQL, MongoDB, Elasticsearch.
  • Basic database administration and performance tuning.

Telecom Exposure (Good to Have – 5%)

  • Experience with SMS, voice systems, or real-time data processing environments.

Technical Skills Required

  • Linux administration & Shell scripting
  • CI/CD tools – Jenkins
  • Git (GitHub / SVN) and branching strategies
  • Docker & Kubernetes
  • AWS cloud services
  • Ansible for configuration management
  • Databases: MySQL, MariaDB, MongoDB
  • Web servers: Apache, Tomcat
  • Load balancing & HA: HAProxy, Keepalived
  • Monitoring tools: Nagios and related observability stacks


Ekloud INC
Posted by Ashwini Rathod
India
6 - 15 yrs
₹15L - ₹25L / yr
DevOps
API
Meta-data management
CI/CD
Version control
+12 more

Salesforce DevOps Engineer


Responsibilities

  • Support the design and implementation of the DevOps strategy. This includes, but is not limited to, the CI/CD workflow (version control and automated deployments), sandbox management, documenting DevOps releases, and overseeing the developer workflow to ensure code reviews take place.
  • Work closely with QA, Tech Leads, Senior Devs and Architects to ensure the smooth delivery of build artefacts into Salesforce environments.
  • Implement scripts utilising the Salesforce Metadata API and SFDX (see the sketch after this list)
  • Refine technical user stories as required, articulate clearly the technical solution required to meet a specific DevOps requirement.
  • Support the Tech Lead in ensuring best practices are adhered to, providing feedback as required.
  • Maintain the development workflow; guide development teams and communicate the workflow to them effectively
  • Design, implement, and maintain CI/CD pipelines.
  • Automate infrastructure provisioning and configuration management.
  • Monitor system performance and troubleshoot issues.
  • Ensure security and compliance across all environments.


Required Skills & Experience

  • Proficiency in CI/CD tools such as GitHub Actions.
  • 5+ years in Salesforce Development
  • Strong experience with CI/CD technologies, Git (version control), the Salesforce Metadata API, and SFDX
  • Expertise in large-scale integration using SOAP, REST, Streaming (including Lightning Events), and Metadata APIs, facilitating the seamless connection of Salesforce with other systems.
  • Excellent technical documentation skills
  • Excellent communication skills


Desired Skills

  • Comfortable and effective in leading developers, ensuring project success and team cohesion
  • Financial Services industry experience

  • Experience working in both agile and waterfall methodologies.



AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹60L - ₹80L / yr
DevOps
Apache Spark
Apache Airflow
Machine Learning (ML)
Pipeline management
+13 more

Review Criteria:

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python for pipeline & automation development
  • 4+ years of experience in AWS cloud, including in recent roles
  • Company: product companies preferred; exceptions for service-company candidates with strong MLOps + AWS depth

 

Preferred:

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.

 

Job Specific Criteria:

  • CV attachment is mandatory
  • Please provide the CTC breakup (fixed + variable)
  • Are you open to a face-to-face (F2F) round?
  • Has the candidate filled out the Google Form?

 

Role & Responsibilities:

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue); see the sketch after this list.
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
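
For the Airflow bullet flagged above, here is a minimal, hypothetical DAG shape; the dag_id, schedule, and task bodies are placeholders, not the team's actual pipeline:

    # Hypothetical sketch: a two-step training pipeline as an Airflow 2.x DAG.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract(**_):
        print("pulling features from the warehouse")  # stub

    def train(**_):
        print("fitting and registering the model")  # stub

    with DAG(
        dag_id="ml_training_pipeline",  # placeholder name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        train_task = PythonOperator(task_id="train", python_callable=train)
        extract_task >> train_task  # run extract before train

In a production MWAA setup, the stubs become Spark submissions, SageMaker jobs, or container tasks, but the orchestration skeleton stays the same.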

 

Ideal Candidate:

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
  • Compute/Orchestration: EKS, ECS, EC2, Lambda
  • Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
  • Workflow: MWAA/Airflow, Step Functions
  • Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.

 

Education:

  • Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Deqode

Posted by Samiksha Agrawal
Mumbai
3 - 6 yrs
₹5L - ₹15L / yr
DevOps
Google Cloud Platform (GCP)
Terraform
Jenkins
CI/CD
+2 more

Role: Senior Platform Engineer (GCP Cloud)

Experience Level: 3 to 6 Years

Work location: Mumbai

Mode : Hybrid


Role & Responsibilities:

  • Build automation software for cloud platforms and applications
  • Drive Infrastructure as Code (IaC) adoption
  • Design self-service, self-healing monitoring and alerting tools
  • Automate CI/CD pipelines (Git, Jenkins, SonarQube, Docker)
  • Build Kubernetes container platforms
  • Introduce new cloud technologies for business innovation

Requirements:

  • Hands-on experience with GCP Cloud
  • Knowledge of cloud services (compute, storage, network, messaging)
  • IaC tools experience (Terraform/CloudFormation)
  • SQL & NoSQL databases (Postgres, Cassandra)
  • Automation tools (Puppet/Chef/Ansible)
  • Strong Linux administration skills
  • Programming: Bash/Python/Java/Scala
  • CI/CD pipeline expertise (Jenkins, Git, Maven)
  • Multi-region deployment experience
  • Agile/Scrum/DevOps methodology


Navi Mumbai
6 - 10 yrs
₹12L - ₹18L / yr
DevOps
Microsoft SQL Server
Windows Azure

About Us:

Teknobuilt is an innovative construction technology company accelerating a Digital and AI platform that helps all aspects of program management and execution, covering workflow automation, collaborative manual tasks, and siloed systems. Our platform has received innovation awards and grants in Canada, UK and S. Korea and we are at the frontiers of solving key challenges in the built environment and digital health, safety and quality.

Teknobuilt's vision is helping the world build better: safely, smartly and sustainably. We are on a mission to modernize construction by bringing the Digitally Integrated Project Execution System (PACE) and expert services to midsize to large construction and infrastructure projects. PACE is an end-to-end digital solution that helps in Real Time Project Execution, Health and Safety, Quality and Field management for greater visibility and cost savings. PACE enables digital workflows, remote working, and AI-based analytics to bring speed, flow and surety in project delivery. Our platform has received recognition globally for innovation and we are experiencing a period of significant growth for our solutions.


Job description:

IT Infrastructure & System Administration:

Manage Windows/Linux servers, desktop systems, Active Directory, DNS, DHCP, and virtual environments (VMware/Hyper-V). Monitor system performance and implement improvements for efficiency and availability.

Oversee patch management, backups, disaster recovery, and security configurations. Ensure IT compliance, conduct audits, and maintain detailed documentation

DevOps & Cloud Operations:

Design, implement, and manage CI/CD pipelines using Jenkins, GitHub Actions, or similar tools. Manage container orchestration using Kubernetes and deploy infrastructure using Terraform. Administer and optimize AWS cloud infrastructure. Automate deployment, monitoring, and alerting solutions for production environments.

Security, Maintenance & Support:

Define and enforce IT and DevOps security policies and procedures. Perform root cause analysis (RCA) for system failures and outages. Provide Tier 2/3 support and resolve complex system and production issues.

Collaboration & Communication:

Coordinate IT projects (e.g., upgrades, migrations, cloud implementations). Collaborate with engineering and product teams on release cycles and production deployments.

Maintain clear communication with internal stakeholders and provide regular reporting.


Qualification:

8+ years of experience in IT systems administration and/or DevOps roles

Minimum of 8-10 years of experience as a Windows Administrator or in a similar role.

Strong knowledge of Windows Server (2016/2019/2022) and Windows operating systems.

Experience with Active Directory, Group Policy, DNS, DHCP, and other Windows-based services.

Familiarity with virtualization technologies (e.g., VMware, Hyper-V). Proficiency in scripting languages (e.g., PowerShell).

Strong understanding of networking principles and protocols.

Relevant certifications (e.g., MCSA, MCSE) are a plus.


Salary Range: Competitive

Employment Type: Full Time

Location: Mumbai / Navi Mumbai

Qualification: Any graduate or master’s degree in science, engineering or technology

Poshmark

Posted by Eman Khan
Chennai
8 - 15 yrs
₹20L - ₹40L / yr
Kubernetes
Amazon Web Services (AWS)
Terraform
Reliability engineering
DevOps

We’re looking for an experienced Site Reliability Engineer to fill the mission-critical role of ensuring that our complex, web-scale systems are healthy, monitored, automated, and designed to scale. You will use your background as an operations generalist to work closely with our development teams from the early stages of design all the way through identifying and resolving production issues. The ideal candidate will be passionate about an operations role that involves deep knowledge of both the application and the product, and will also believe that automation is a key component to operating large-scale systems.


6-Month Accomplishments

  • Familiarize yourself with the Poshmark tech stack and functional requirements.
  • Get comfortable with the automation tools/frameworks used within the CloudOps organization and the deployment processes associated with them.
  • Gain in-depth knowledge of the relevant product functionality and the infrastructure required for it.
  • Start contributing by working on small- to medium-scale projects.
  • Understand and follow the on-call rotation as a secondary to get familiar with the on-call process.


12+ Month Accomplishments

  • Execute projects related to comms functionality independently, with little guidance from the lead.
  • Create meaningful alerts and dashboards for the various sub-systems involved in the targeted infrastructure.
  • Identify gaps in infrastructure and suggest improvements or work on them.
  • Get involved in the on-call rotation.


Responsibilities

  • Serve as a primary point responsible for the overall health, performance, and capacity of one or more of our Internet-facing services.
  • Gain deep knowledge of our complex applications.
  • Assist in the roll-out and deployment of new product features and installations to facilitate our rapid iteration and constant growth.
  • Develop tools to improve our ability to rapidly deploy and effectively monitor custom applications in a large-scale UNIX environment.
  • Work closely with development teams to ensure that platforms are designed with "operability" in mind.
  • Function well in a fast-paced, rapidly-changing environment.
  • Participate in a 24x7 on-call rotation.


Desired Skills

  • 5+ years of experience in Systems Engineering/Site Reliability Operations role is required, ideally in a startup or fast-growing company.
  • 5+ years in a UNIX-based large-scale web operations role.
  • 5+ years of experience in doing 24/7 support for large scale production environments.
  • Battle-proven, real-life experience in running a large scale production operation.
  • Experience working on cloud-based infrastructure, e.g., AWS, GCP, Azure.
  • Hands-on experience with continuous integration tools such as Jenkins, configuration management with Ansible, systems monitoring and alerting with tools such as Nagios, New Relic, Graphite.
  • Experience scripting/coding
  • Ability to use a wide variety of open source technologies and tools.


Technologies we use:

  • Ruby, JavaScript, NodeJs, Tomcat, Nginx, HaProxy
  • MongoDB, RabbitMQ, Redis, ElasticSearch.
  • Amazon Web Services (EC2, RDS, CloudFront, S3, etc.)
  • Terraform, Packer, Jenkins, Datadog, Kubernetes, Docker, Ansible and other DevOps tools.
Poshmark

Posted by Eman Khan
Chennai
4 - 8 yrs
₹15L - ₹30L / yr
Kubernetes
Amazon Web Services (AWS)
Terraform
Reliability engineering
DevOps

About Poshmark

Poshmark is a leading fashion resale marketplace powered by a vibrant, highly engaged community of buyers and sellers and real-time social experiences. Designed to make online selling fun, more social and easier than ever, Poshmark empowers its sellers to turn their closet into a thriving business and share their style with the world. Since its founding in 2011, Poshmark has grown its community to over 130 million users and generated over $10 billion in GMV, helping sellers realize billions in earnings, delighting buyers with deals and one-of-a-kind items, and building a more sustainable future for fashion. For more information, please visit www.poshmark.com, and for company news, visit newsroom.poshmark.com.


We’re looking for an experienced Site Reliability Engineer to fill the mission-critical role of ensuring that our complex, web-scale systems are healthy, monitored, automated, and designed to scale. You will use your background as an operations generalist to work closely with our development teams from the early stages of design all the way through identifying and resolving production issues. The ideal candidate will be passionate about an operations role that involves deep knowledge of both the application and the product, and will also believe that automation is a key component to operating large-scale systems.


6-Month Accomplishments

  • Familiarize yourself with the Poshmark tech stack and functional requirements.
  • Get comfortable with the automation tools/frameworks used within the CloudOps organization and the deployment processes associated with them.
  • Gain in-depth knowledge of the relevant product functionality and the infrastructure required for it.
  • Start contributing by working on small- to medium-scale projects.
  • Understand and follow the on-call rotation as a secondary to get familiar with the on-call process.


12+ Month Accomplishments

  • Execute projects related to comms functionality independently, with little guidance from the lead.
  • Create meaningful alerts and dashboards for the various sub-systems involved in the targeted infrastructure.
  • Identify gaps in infrastructure and suggest improvements or work on them.
  • Get involved in the on-call rotation.


Responsibilities

  • Serve as a primary point responsible for the overall health, performance, and capacity of one or more of our Internet-facing services.
  • Gain deep knowledge of our complex applications.
  • Assist in the roll-out and deployment of new product features and installations to facilitate our rapid iteration and constant growth.
  • Develop tools to improve our ability to rapidly deploy and effectively monitor custom applications in a large-scale UNIX environment.
  • Work closely with development teams to ensure that platforms are designed with "operability" in mind.
  • Function well in a fast-paced, rapidly-changing environment.
  • Participate in a 24x7 on-call rotation


Desired Skills

  • 4+ years of experience in Systems Engineering/Site Reliability Operations role is required, ideally in a startup or fast-growing company.
  • 4+ years in a UNIX-based large-scale web operations role.
  • 4+ years of experience in doing 24/7 support for large scale production environments.
  • Battle-proven, real-life experience in running a large scale production operation.
  • Experience working on cloud-based infrastructure, e.g., AWS, GCP, Azure.
  • Hands-on experience with continuous integration tools such as Jenkins, configuration management with Ansible, systems monitoring and alerting with tools such as Nagios, New Relic, Graphite.
  • Experience scripting/coding
  • Ability to use a wide variety of open source technologies and tools.


Technologies we use:

  • Ruby, JavaScript, NodeJs, Tomcat, Nginx, HaProxy
  • MongoDB, RabbitMQ, Redis, ElasticSearch.
  • Amazon Web Services (EC2, RDS, CloudFront, S3, etc.)
  • Terraform, Packer, Jenkins, Datadog, Kubernetes, Docker, Ansible and other DevOps tools.
Tarento Group

Posted by Reshika Mendiratta
STOCKHOLM (Sweden), Bengaluru (Bangalore)
8yrs+
Best in industry
DevOps
Microsoft Windows Server
Microsoft IIS administration
Windows Azure
Powershell
+2 more

About Tarento:

Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions.

 

We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you’ll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.


Scope of Work:

  • Support the migration of applications from Windows Server 2008 to Windows Server 2019 or 2022 in an IaaS environment.
  • Migrate IIS websites, Windows Services, and related application components.
  • Assist with migration considerations for SQL Server connections, instances, and basic data-related dependencies.
  • Evaluate and migrate message queues (MSMQ or equivalent technologies).
  • Document the existing environment, migration steps, and post-migration state.
  • Work closely with DevOps, development, and infrastructure teams throughout the project.


Required Skills & Experience:

  • Strong hands-on experience with IIS administration, configuration, and application migration.
  • Proven experience migrating workloads between Windows Server versions, ideally legacy to modern.
  • Knowledge of Windows Services setup, configuration, and troubleshooting.
  • Practical understanding of SQL Server (connection strings, service accounts, permissions).
  • Experience with queues (IBM MQ/MSMQ or similar) and their migration considerations.
  • Ability to identify migration risks, compatibility constraints, and remediation options.
  • Strong troubleshooting and analytical skills.
  • Familiarity with Microsoft technologies (.NET, etc.)
  • Networking- and Active Directory-related knowledge

Desirable / Nice-to-Have

  • Exposure to CI/CD tools, especially TeamCity and Octopus Deploy.
  • Familiarity with Azure services and related tools (Terraform, etc)
  • PowerShell scripting for automation or configuration tasks.
  • Understanding enterprise change management and documentation practices.
  • Security

Soft Skills

  • Clear written and verbal communication.
  • Ability to work independently while collaborating with cross-functional teams.
  • Strong attention to detail and a structured approach to execution.
  • Troubleshooting
  • Willingness to learn.


Location & Engagement Details

We are looking for a Senior DevOps Consultant for an onsite role in Stockholm (Sundbyberg office). This opportunity is open to candidates currently based in Bengaluru who are willing to relocate to Sweden for the assignment.

The role will start with an initial 6-month onsite engagement, with the possibility of extension based on project requirements and performance.

Remote only
1 - 4 yrs
₹5L - ₹10L / yr
DevOps
Azure
Cloud
Docker
Kubernetes

About the role

 

We’re looking for a hands-on Junior System/Cloud Engineer to help keep our cloud infrastructure and internal IT humming. You’ll work closely with a senior IT/Cloud Engineer across Azure, AWS, and Google Cloud—provisioning VMs/EC2/Compute Engine instances, basic database setup, and day-to-day maintenance. You’ll also install and maintain team tools (e.g., Elasticsearch, Tableau, MSSQL) and pitch in with end-user support for laptops and any other system issues when needed.

 

What you’ll do

 

Cloud provisioning & operations (multi-cloud)

·         Create, configure, and maintain virtual machines and related resources in Azure, AWS (EC2), and Google Compute Engine (networks, security groups/firewalls, storage, OS patching and routine maintenance/backups).

·         Assist with database setup (managed services or VM-hosted), backups, and access controls under guidance from senior engineers.

·         Implement tagging, least-privilege IAM, and routine patching for compliance and cost hygiene.
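
As an illustration of the provisioning and tagging bullets above, here is a minimal, hypothetical boto3 sketch for AWS; the AMI ID, region, and tag values are placeholders, and the Azure and GCP SDKs follow a similar pattern:

    # Hypothetical sketch: launch a tagged EC2 instance with boto3.
    import boto3

    ec2 = boto3.resource("ec2", region_name="ap-south-1")  # placeholder region
    instances = ec2.create_instances(
        ImageId="ami-0abcdef1234567890",  # placeholder AMI ID
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "team", "Value": "analytics"},   # placeholder tags for
                     {"Key": "env", "Value": "dev"}],         # cost attribution
        }],
    )
    instances[0].wait_until_running()
    print("launched", instances[0].id)

Tagging at creation time, as here, is what makes the cost-hygiene and compliance reporting mentioned above possible later.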

Tooling installation & maintenance

·         Install, configure, upgrade, and monitor required tools/software such as Elasticsearch and Tableau Server, and manage their service lifecycle (systemd/Windows services), security basics, and health checks.

·         Document installation steps, configurations, and runbooks.

Monitoring & incident support

·         Set up basic monitoring/alerts using cloud-native tools and logs.

·         End-user & endpoint support (as needed)

·         Provide first-line support for laptops/desktops (Windows/macOS), peripherals, conferencing, VPN, and common apps; escalate when appropriate.

·         Assist with device setup, imaging, patching, and inventory; keep tickets and resolutions well documented.

What you’ll bring

·         Experience: 1–3 years in DevOps/System Admin/IT Ops with exposure to at least one major cloud (primarily Azure; AWS or GCP skills are good to have); eagerness to work across all three.

Core skills:

·         Linux and Windows server admin basics (users, services, networking, storage).

·         VM provisioning and troubleshooting in Azure, AWS EC2, or GCE; understanding of security groups/firewalls, SSH/RDP, and snapshots/images.

·         Installation/maintenance of team tools (e.g., Elasticsearch, Tableau Server, etc.).

·         Scripting (Bash and/or PowerShell); Git fundamentals; comfort with ticketing systems.

·         Clear documentation and communication habits.

Nice to have:

·         Terraform or ARM/CloudFormation basics; container fundamentals (Docker).

·         Monitoring/logging familiarity (Elastic/Azure Monitor).

·         Basic networking (DNS, HTTP, TLS, VPN, Nginx)

·         Azure certification (AZ-104, AZ-204)

 

To be considered for the next round, please fill out the Google form below with your updated resume.


https://forms.gle/b8JeStRaWrWv3tzLA

Industrial Automation

Agency job
via Michael Page by Pramod P
Bengaluru (Bangalore), Bommasandra Industrial Area
8 - 13 yrs
₹20L - ₹44L / yr
Python
C++
Rust
GitLab
DevOps
+4 more

Employment Mode: Full-time and Permanent

Working Location: Bommasandra Industrial Area, Hosur Main Road, Bangalore

Working Days: 5 days

Working Model: Hybrid - 3 days WFO and 2 days Home


Position Overview

As the Lead Software Engineer in our Research & Innovation team, you’ll play a strategic role in establishing and driving the technical vision for industrial AI solutions. Working closely with the Lead AI Engineer, you will form a leadership tandem to define the roadmap for the team, cultivate an innovative culture, and ensure that projects are strategically aligned with the organization’s goals. Your leadership will be crucial in developing, mentoring, and empowering the team as we expand, helping create an environment where innovative ideas can translate seamlessly from research to industry-ready products.


Key Responsibilities:

  • Define and drive the technical strategy for embedding AI into industrial automation products, with a focus on scalability, quality, and industry compliance.
  • Lead the development of a collaborative, high-performing engineering team, mentoring junior engineers, automation experts, and researchers.
  • Establish and oversee processes and standards for agile and DevOps practices, ensuring project alignment with strategic goals.
  • Collaborate with stakeholders to align project goals, define priorities, and manage timelines, while driving innovative, research-based solutions.
  • Act as a key decision-maker on technical issues, architecture, and system design, ensuring long-term maintainability and scalability of solutions.
  • Ensure adherence to industry standards, certifications, and compliance, and advocate for industry best practices within the team.
  • Stay updated on software engineering trends and AI applications in embedded systems, incorporating the latest advancements into the team’s strategic planning.


Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
  • Extensive experience in software engineering, with a proven track record of leading technical teams, ideally in manufacturing or embedded systems.
  • Strong expertise in Python and C++/Rust, Gitlab toolchains, and system architecture for embedded applications.
  • Experience in DevOps, CI/CD, and agile methodologies, with an emphasis on setting and maintaining high standards across a team.
  • Exceptional communication and collaboration skills in English.
  • Willingness to travel as needed.


Preferred:

  • Background in driving team culture, agile project management, and experience embedding AI in industrial products.
  • Familiarity with sociocratic or consent-based management practices.
  • Knowledge in embedded programming is an advantage.
Ekloud INC
Remote only
6 - 20 yrs
₹21L - ₹30L / yr
CPQ
Salesforce
Salesforce CPQ
Sales
Agile/Scrum
+3 more

Salesforce Technical Product Manager


Location - Remote

Shift Timing - 5pm to 2 am

Preferred Skills/Experience: Project Management

CPQ Specialist certification is a huge plus

Salesforce Admin Certification (201) and Salesforce Certified Sales Cloud Consultant Preferred


Professional Experience/Qualifications:

Has successfully led development and delivery of multiple complex business technology solutions into production that have achieved or surpassed business goals. Experience developing and supporting mission critical applications optimized to run in the cloud or virtualized environments. Deep knowledge of system architecture, technical design, and system and software development technology.


Required Skills/Experiences:

6+ years of IT experience with a Bachelor’s degree in Computer Science, MIS, Computer Engineering or equivalent technical degree


3+ years of experience as a business analyst/product manager supporting Salesforce CPQ platform.


Deep understanding of the Salesforce.com product suite including CPQ, Sales Cloud, Service Cloud, FSL, Community Cloud, and the AppExchange.


Managing application development at scale, employing SDLC methodologies including Agile, Scrum, and DevOps


Strong subject matter expertise with core SaaS principles: Product setup, Product Options, Product configuration & rules, Pricing, Quoting, Subscription Management, amendments, renewals, Billing and Revenue Recognition


Skilled at setting and communicating priorities effectively; able to manage multiple tasks and priorities


Exceptional ability to lead change using positive and collaborative methods


Strong communication skills in writing, speaking, and presenting


Able to communicate technical or complex subject matter in business terms


Demonstrated acumen and passion for both business and technology, including the latest digital innovations as they apply to the creation of business value


Accepts ownership and welcomes responsibility

AbleCredit

at AbleCredit

2 candid answers
Utkarsh Apoorva
Posted by Utkarsh Apoorva
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹35L / yr
CI/CD
DevOps
Security Information and Event Management (SIEM)
ISO/IEC 27001:2005

Role: Senior Security Engineer

Salary: INR 20-35L per annum

Performance Bonus: Up to 10% of the base salary

Location: Hulimavu, Bangalore, India

Experience: 5-8 years



About AbleCredit:

AbleCredit has built a foundational AI platform to help BFSI enterprises reduce OPEX by up to 70% by powering workflows for onboarding, claims, credit, and collections. Our GenAI model achieves over 95% accuracy in understanding Indian dialects and excels in financial analysis.


The company was founded in June 2023 by Utkarsh Apoorva (IIT Delhi, built Reshamandi, Guitarstreet, Edulabs); Harshad Saykhedkar (IITB, ex-AI Lead at Slack); and Ashwini Prabhu (IIML, co-founder of Mythiksha, ex-Product Head at Reshamandi, HandyTrain).




What Work You’ll Do

  • Be the guardian of trust — every system you secure will protect millions of data interactions.
  • Operate like a builder, not a gatekeeper — automate guardrails that make security invisible but ever-present.
  • You’ll define what ‘secure by default’ means for a next-generation AI SaaS platform.
  • Own the security posture of our cloud-native SaaS platform — design, implement, and enforce security controls across AWS, Linux, and Kubernetes (EKS) environments.
  • Drive security compliance initiatives such as SOC 2 Type II, ISO 27001, and RBI-aligned frameworks — build systems that enforce, not just document, compliance.
  • Architect defense-in-depth systems across EC2, S3, IAM, and VPC layers, ensuring secure configuration, least-privilege access, and continuous compliance.
  • Build and automate security pipelines — integrate AWS Security Hub, GuardDuty, Inspector, WAF, and CloudTrail into continuous detection and response systems.
  • Lead vulnerability management and incident readiness — identify, prioritize, and remediate vulnerabilities across the stack while ensuring traceable audit logs.
  • Implement and maintain zero-trust and least-privilege access controls using AWS IAM, SSO, and modern secrets management tools like AWS SSM or Vault (a minimal sketch follows this list).
  • Serve as a trusted advisor — train developers, review architecture, and proactively identify risks before they surface.
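
For illustration only, a minimal sketch of the secrets-management point above, reading a secret from AWS SSM Parameter Store via boto3; the region and parameter name are hypothetical, not part of this posting:

    import boto3

    ssm = boto3.client("ssm", region_name="ap-south-1")  # region is an assumption

    def get_secret(name: str) -> str:
        # SecureString values are decrypted on read, so access is governed by
        # IAM and KMS policy rather than by files or environment variables.
        resp = ssm.get_parameter(Name=name, WithDecryption=True)
        return resp["Parameter"]["Value"]

    db_password = get_secret("/prod/app/db_password")  # hypothetical parameter name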




The Skills You Have:

  • Deep hands-on experience with AWS security architecture — IAM, VPCs, EKS, EC2, S3, CloudTrail, Security Hub, WAF, GuardDuty, and Inspector.
  • Strong background in Linux hardening, container security, and DevSecOps automation.
  • Proficiency with infrastructure-as-code (Terraform, CloudFormation) and integrating security controls into provisioning.
  • Knowledge of zero-trust frameworks, least-privilege IAM, and secrets management (Vault, SSM, KMS).
  • Experience with SIEM and monitoring tools — configuring alerts, analyzing logs, and responding to incidents.
  • Familiarity with compliance automation and continuous assurance — especially SOC 2, ISO 27001, or RBI frameworks.
  • Understanding of secure software supply chains — dependency scanning, artifact signing, and policy enforcement in CI/CD.
  • Ability to perform risk assessment, threat modeling, and architecture review collaboratively with engineering teams.



What You Should Have Done in the Past

  • Secured cloud-native SaaS systems built entirely on AWS (EC2, EKS, S3, IAM, VPC).
  • Led or contributed to SOC 2 Type II or ISO 27001 certification initiatives, ideally in a regulated industry such as FinTech.
  • Designed secure CI/CD pipelines with integrated code scanning, image validation, and secrets rotation.
  • (Bonus) Built internal security automation frameworks or tooling for continuous monitoring and compliance checks.





Global digital transformation solutions provider.


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Coimbatore, Hosur, Hyderabad
12 - 15 yrs
₹20L - ₹35L / yr
DevOps
Automation
GitHub
Agile management
Agile/Scrum
+3 more

Job Details

- Job Title: DevOps and SRE - Technical Project Manager

- Industry: Global digital transformation solutions provider

- Domain - Information technology (IT)

- Experience Required: 12-15 years

- Employment Type: Full Time

- Job Location: Bangalore, Chennai, Coimbatore, Hosur & Hyderabad

- CTC Range: Best in Industry


Job Description

Company’s DevOps Practice is seeking a highly skilled DevOps and SRE Technical Project Manager to lead large-scale transformation programs for enterprise customers. The ideal candidate will bring deep expertise in DevOps and Site Reliability Engineering (SRE), combined with strong program management, stakeholder leadership, and the ability to drive end-to-end execution of complex initiatives.


Key Responsibilities

  • Lead the planning, execution, and successful delivery of DevOps and SRE transformation programs for enterprise clients, including full oversight of project budgets, financials, and margins.
  • Partner with senior stakeholders to define program objectives, roadmaps, milestones, and success metrics aligned with business and technology goals.
  • Develop and implement actionable strategies to optimize development, deployment, release management, observability, and operational workflows across client environments.
  • Provide technical leadership and strategic guidance to cross-functional engineering teams, ensuring alignment with industry standards, best practices, and company delivery methodologies.
  • Identify risks, dependencies, and blockers across programs, and proactively implement mitigation and contingency plans.
  • Monitor program performance, KPIs, and financial health; drive corrective actions and margin optimization where necessary.
  • Facilitate strong communication, collaboration, and transparency across engineering, product, architecture, and leadership teams.
  • Deliver periodic program updates to internal and client stakeholders, highlighting progress, risks, challenges, and improvement opportunities.
  • Champion a culture of continuous improvement, operational excellence, and innovation by encouraging adoption of emerging DevOps, SRE, automation, and cloud-native practices.
  • Support GitHub migration initiatives, including planning, execution, troubleshooting, and governance setup for repository and workflow migrations.

 

Requirements

  • Bachelor’s degree in Computer Science, Engineering, Business Administration, or a related technical discipline.
  • 15+ years of IT experience, including at least 5 years in a managerial or program leadership role.
  • Proven experience leading large-scale DevOps and SRE transformation programs with measurable business impact.
  • Strong program management expertise, including planning, execution oversight, risk management, and financial governance.
  • Solid understanding of Agile methodologies (Scrum, Kanban) and modern software development practices.
  • Deep hands-on knowledge of DevOps principles, CI/CD pipelines, automation frameworks, Infrastructure as Code (IaC), and cloud-native tooling.
  • Familiarity with SRE practices such as service reliability, observability, SLIs/SLOs, incident management, and performance optimization.
  • Experience with GitHub migration projects—including repository analysis, migration planning, tooling adoption, and workflow modernization.
  • Excellent communication, stakeholder management, and interpersonal skills with the ability to influence and lead cross-functional teams.
  • Strong analytical, organizational, and problem-solving skills with a results-oriented mindset.
  • Preferred certifications: PMP, PgMP, ITIL, Agile/Scrum Master, or relevant technical certifications.

 

Skills: DevOps Tools, Cloud Infrastructure, Team Management


Must-Haves

DevOps principles (5+ years), SRE practices (5+ years), GitHub migration (3+ years), CI/CD pipelines (5+ years), Agile methodologies (5+ years)

Notice period - 0 to 15 days only

AI-First Company


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 17 yrs
₹30L - ₹45L / yr
Data engineering
Data architecture
SQL
Data modeling
GCS
+47 more

ROLES AND RESPONSIBILITIES:

You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.


  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.


IDEAL CANDIDATE:

  • Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.


PREFERRED:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
  • Exposure to Snowflake, Databricks, or BigQuery environments.
  • Experience in high-tech, manufacturing, or enterprise data modernization programs.
Capital Squared
Remote only
5 - 10 yrs
₹25L - ₹55L / yr
MLOps
DevOps
Google Cloud Platform (GCP)
CI/CD
PostgreSQL
+4 more

Role: Full-Time, Long-Term
Required: Docker, GCP, CI/CD
Preferred: Experience with ML pipelines


OVERVIEW

We are seeking a DevOps engineer to join as a core member of our technical team. This is a long-term position for someone who wants to own infrastructure and deployment for a production machine learning system. You will ensure our prediction pipeline runs reliably, deploys smoothly, and scales as needed.


The ideal candidate thinks about failure modes obsessively, automates everything possible, and builds systems that run without constant attention.


CORE TECHNICAL REQUIREMENTS

Docker (Required): Deep experience with containerization. Efficient Dockerfiles, layer caching, multi-stage builds, debugging container issues. Experience with Docker Compose for local development.


Google Cloud Platform (Required): Strong GCP experience: Cloud Run for serverless containers, Compute Engine for VMs, Artifact Registry for images, Cloud Storage, IAM. You can navigate the console but prefer scripting everything.


CI/CD (Required): Build and maintain deployment pipelines. GitHub Actions required. You automate testing, building, pushing, and deploying. You understand the difference between continuous integration and continuous deployment.


Linux Administration (Required): Comfortable on the command line. SSH, diagnose problems, manage services, read logs, fix things. Bash scripting is second nature.


PostgreSQL (Required): Database administration basics—backups, monitoring, connection management, basic performance tuning. Not a DBA, but comfortable keeping a production database healthy.
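
For a concrete flavor of that day-to-day housekeeping, a minimal Python sketch (connection details are placeholders) that samples connection states from pg_stat_activity:

    import psycopg2  # assumes the psycopg2 driver is installed

    # Placeholder credentials; in production these would come from a secrets store.
    conn = psycopg2.connect(host="localhost", dbname="app", user="monitor", password="changeme")
    with conn.cursor() as cur:
        cur.execute("SELECT state, count(*) FROM pg_stat_activity GROUP BY state;")
        for state, count in cur.fetchall():
            print(f"{state or 'background'}: {count} connections")
    conn.close()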


Infrastructure as Code (Preferred): Terraform, Pulumi, or similar. Infrastructure should be versioned, reviewed, and reproducible—not clicked together in a console.
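
Since Pulumi is one of the named options, a minimal Pulumi-in-Python sketch (resource names are illustrative) of infrastructure that is versioned, reviewed, and reproducible:

    import pulumi
    import pulumi_gcp as gcp

    # A GCS bucket declared as code: changes go through review and
    # are applied reproducibly with `pulumi up`.
    artifacts = gcp.storage.Bucket(
        "artifacts",
        location="US",
        uniform_bucket_level_access=True,
    )

    pulumi.export("artifacts_bucket", artifacts.name)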


WHAT YOU WILL OWN

Deployment Pipeline: Maintaining and improving deployment scripts and CI/CD workflows. Code moves from commit to production reliably with appropriate testing gates.
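
One common testing gate is a post-deploy smoke check that fails the pipeline when the service is unhealthy; a minimal sketch, with a hypothetical endpoint URL:

    import sys
    import urllib.request

    URL = "https://app.example.com/healthz"  # hypothetical health endpoint

    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            sys.exit(0 if resp.status == 200 else 1)  # non-zero exit fails the CI job
    except Exception as exc:
        print(f"health check failed: {exc}", file=sys.stderr)
        sys.exit(1)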


Cloud Run Services: Managing deployments for model fitting, data cleansing, and signal discovery services. Monitor health, optimize cold starts, handle scaling.


VM Infrastructure: PostgreSQL and Streamlit on GCP VMs. Instance management, updates, backups, security.


Container Registry: Managing images in GitHub Container Registry and Google Artifact Registry. Cleanup policies, versioning, access control.


Monitoring and Alerting: Building observability. Logging, metrics, health checks, alerting. Know when things break before users tell us.


Environment Management: Configuration across local and production. Secrets management. Environment parity where it matters.


WHAT SUCCESS LOOKS LIKE

Deployments are boring—no drama, no surprises. Systems recover automatically from transient failures. Engineers deploy with confidence. Infrastructure changes are versioned and reproducible. Costs are reasonable and resources scale appropriately.


ENGINEERING STANDARDS

Automation First: If you do something twice, automate it. Manual processes are bugs waiting to happen.


Documentation: Runbooks, architecture diagrams, deployment guides. The next person can understand and operate the system.


Security Mindset: Secrets never in code. Least-privilege access. You think about attack surfaces.


Reliability Focus: Design for failure. Backups are tested. Recovery procedures exist and work.


CURRENT ENVIRONMENT

GCP (Cloud Run, Compute Engine, Artifact Registry, Cloud Storage), Docker, Docker Compose, GitHub Actions, PostgreSQL 16, Bash deployment scripts with Python wrapper.


WHAT WE ARE LOOKING FOR

Ownership Mentality: You see a problem, you fix it. You do not wait for assignment.


Calm Under Pressure: When production breaks, you diagnose methodically.


Communication: You explain infrastructure decisions to non-infrastructure people. You document what you build.


Long-Term Thinking: You build systems maintained for years, not quick fixes creating tech debt.


EDUCATION

University degree in Computer Science, Engineering, or related field preferred. Equivalent demonstrated expertise also considered.


TO APPLY

Include: (1) CV/resume, (2) Brief description of infrastructure you built or maintained, (3) Links to relevant work if available, (4) Availability and timezone.

AsperAI

at AsperAI

4 candid answers
Bisman Gill
Posted by Bisman Gill
BLR
3 - 6 yrs
Upto ₹33L / yr (Varies)
CI/CD
Kubernetes
Docker
Kubeflow
TensorFlow
+7 more

About the Role

We are seeking a highly skilled and experienced AI Ops Engineer to join our team. In this role, you will be responsible for ensuring the reliability, scalability, and efficiency of our AI/ML systems in production. You will work at the intersection of software engineering, machine learning, and DevOps, helping to design, deploy, and manage AI/ML models and pipelines that power mission-critical business applications.

The ideal candidate has hands-on experience in AI/ML operations and orchestrating complex data pipelines, a strong understanding of cloud-native technologies, and a passion for building robust, automated, and scalable systems.


Key Responsibilities

  • AI/ML Systems Operations: Develop and manage systems to run and monitor production AI/ML workloads, ensuring performance, availability, cost-efficiency and convenience.
  • Deployment & Automation: Build and maintain ETL, ML and Agentic pipelines, ensuring reproducibility and smooth deployments across environments.
  • Monitoring & Incident Response: Design observability frameworks for ML systems (alerts and notifications, latency, cost, etc.), and lead incident triage, root cause analysis, and remediation (a minimal metrics sketch follows this list).
  • Collaboration: Partner with data scientists, ML engineers, and software engineers to operationalize models at scale.
  • Optimization: Continuously improve infrastructure, workflows, and automation to reduce latency, increase throughput, and minimize costs.
  • Governance & Compliance: Implement MLOps best practices, including versioning, auditing, security, and compliance for data and models.
  • Leadership: Mentor junior engineers and contribute to the development of AI Ops standards and playbooks.
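
As referenced in the monitoring bullet above, a hedged sketch of instrumenting a model-serving path with prometheus_client; the metric names and model label are hypothetical:

    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    PREDICTIONS = Counter("predictions_total", "Predictions served", ["model"])
    LATENCY = Histogram("prediction_latency_seconds", "Inference latency")

    def predict(features):
        with LATENCY.time():                         # records one latency observation
            time.sleep(random.uniform(0.01, 0.05))   # stand-in for real inference
            PREDICTIONS.labels(model="churn-v2").inc()
            return 0.5

    if __name__ == "__main__":
        start_http_server(9100)  # Prometheus scrapes /metrics on this port
        while True:
            predict({})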


Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
  • 4+ years of experience in AI/MLOps, DevOps, SRE, or Data Engineering, with at least 2+ years in AI/ML-focused operations.
  • Strong expertise with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker).
  • Hands-on experience with ML pipelines and frameworks (MLflow, Kubeflow, Airflow, SageMaker, Vertex AI, etc.).
  • Proficiency in Python and/or other scripting languages for automation.
  • Familiarity with monitoring/observability tools (Prometheus, Grafana, Datadog, ELK, etc.).
  • Deep understanding of CI/CD, GitOps, and Infrastructure as Code (Terraform, Helm, etc.).
  • Knowledge of data governance, model drift detection, and compliance in AI systems.
  • Excellent problem-solving, communication, and collaboration skills.

Nice-to-Have

  • Experience in large-scale distributed systems and real-time data streaming (Kafka, Flink, Spark).
  • Familiarity with data science concepts and frameworks such as scikit-learn, Keras, PyTorch, TensorFlow, etc.
  • Full Stack Development knowledge to collaborate effectively across end-to-end solution delivery
  • Contributions to open-source MLOps/AI Ops tools or platforms.
  • Exposure to Responsible AI practices, model fairness, and explainability frameworks

Why Join Us

  • Opportunity to shape and scale AI/ML operations in a fast-growing, innovation-driven environment.
  • Work alongside leading data scientists and engineers on cutting-edge AI solutions.
  • Competitive compensation, benefits, and career growth opportunities.
Codemonk

at Codemonk

4 candid answers
2 recruiters
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
1yr+
Upto ₹10L / yr (Varies)
DevOps
Amazon Web Services (AWS)
CI/CD
Docker
Kubernetes
+3 more

Role Overview

We are seeking a DevOps Engineer with 2 years of experience to join our innovative team. The ideal candidate will bridge the gap between development and operations, implementing and maintaining our cloud infrastructure while ensuring secure deployment pipelines and robust security practices for our client projects.


Responsibilities:

  • Design, implement, and maintain CI/CD pipelines.
  • Containerize applications using Docker and orchestrate deployments
  • Manage and optimize cloud infrastructure on AWS and Azure platforms
  • Monitor system performance and implement automation for operational tasks to ensure optimal performance, security, and scalability.
  • Troubleshoot and resolve infrastructure and deployment issues
  • Create and maintain documentation for processes and configurations
  • Collaborate with cross-functional teams to gather requirements, prioritise tasks, and contribute to project completion.
  • Stay informed about emerging technologies and best practices within the fields of DevOps and cloud computing.


Requirements:

  • 2+ years of hands-on experience with AWS cloud services
  • Strong proficiency in CI/CD pipeline configuration
  • Expertise in Docker containerisation and container management
  • Proficiency in shell scripting (Bash/PowerShell)
  • Working knowledge of monitoring and logging tools
  • Knowledge of network security and firewall configuration
  • Strong communication and collaboration skills, with the ability to work effectively within a team environment
  • Understanding of networking concepts and protocols in AWS and/or Azure
Ekloud INC
ashwini rathod
Posted by ashwini rathod
Remote only
6 - 20 yrs
₹20L - ₹30L / yr
CPQ
Billing
Sales
DevOps
APL
+3 more

Hiring: Salesforce CPQ Developer


Experience: 6+ years

Shift timings: 7:30 PM to 3:30 AM

Location: India (Remote)


Key skills: SteelBrick CPQ, Billing, Sales Cloud, LWC, Apex integrations, DevOps, APL, ETL tools


Design scalable and efficient Salesforce Sales Cloud solutions that meet best practices and business requirements.


Lead the technical design and implementation of Sales Cloud features, including CPQ, Partner Portal, Lead, Opportunity and Quote Management.


Provide technical leadership and mentorship to development teams. Review and approve technical designs, code, and configurations.

Work with business stakeholders to gather requirements, provide guidance, and ensure that solutions meet their needs. Translate business requirements into technical specifications.


Oversee and guide the development of custom Salesforce applications, including custom objects, workflows, triggers, and LWC/Apex code.

Ensure data quality, integrity, and security within the Salesforce platform. Implement data migration strategies and manage data integrations.


Establish and enforce Salesforce development standards, best practices, and governance processes. Monitor and optimize the performance of Salesforce solutions, including addressing performance issues and ensuring efficient use of resources.


Stay up-to-date with Salesforce updates and new features. Propose and implement innovative solutions to enhance Salesforce capabilities and improve business processes.

Document design and code consistently throughout the design/development process.

Diagnose, resolve, and document system issues to support the project team.


Research questions with respect to both maintenance and development activities. 

Perform post-migration system review and ongoing support.

Prepare and deliver demonstrations/presentations to client audiences, professional seniors/peers


Adhere to best practices constantly around code/data source control, ticket tracking, etc. during the course of an assignment

 

Skills/Experience:


Bachelor’s degree in Computer Science, Information Systems, or related field.

6+ years of experience in architecting and designing full stack solutions on the Salesforce Platform.

Must have 3+ years of Experience in architecting, designing and developing Salesforce CPQ (SteelBrick CPQ) and Billing solutions.

Minimum 3+ years of Lightning Framework development experience (Aura & LWC).

CPQ Specialist and Salesforce Platform Developer II certification is required.


Extensive development experience with Apex Classes, Triggers, Visualforce, Lightning, Batch Apex, Salesforce DX, Apex Enterprise Patterns, Apex Mocks, Force.com API, Visual Flows, Platform Events, SOQL, Salesforce APIs, and other programmatic solutions on the Salesforce platform.


Experience in debugging Apex CPU errors and SOQL query exceptions, refactoring code, and working with complex implementations involving features like asynchronous processing.


Clear insight into Salesforce platform best practices, coding and design guidelines, and governor limits.

Experience with Development Tools and Technologies: Visual Studio Code, GIT, and DevOps Setup to automate deployment/releases.

Knowledge of integration architecture as well as third-party integration tools and ETL (Such as Informatica, Workato, Boomi, Mulesoft etc.) with Salesforce


Experience in Agile development, iterative development, and proof of concepts (POCs).

Excellent written and verbal communication skills with ability to lead technical projects and manage multiple priorities in a fast-paced environment.

iMerit
Bengaluru (Bangalore)
6 - 9 yrs
₹10L - ₹15L / yr
DevOps
Terraform
Apache Kafka
Python
Go Programming (Golang)
+4 more

Exp: 7-10 years

CTC: up to 35 LPA


Skills:

  • 6–10 years DevOps / SRE / Cloud Infrastructure experience
  • Expert-level Kubernetes (networking, security, scaling, controllers)
  • Terraform Infrastructure-as-Code mastery
  • Hands-on Kafka production experience
  • AWS cloud architecture and networking expertise
  • Strong scripting in Python, Go, or Bash
  • GitOps and CI/CD tooling experience


Key Responsibilities:

  • Design highly available, secure cloud infrastructure supporting distributed microservices at scale
  • Lead multi-cluster Kubernetes strategy optimized for GPU and multi-tenant workloads
  • Implement Infrastructure-as-Code using Terraform across full infrastructure lifecycle
  • Optimize Kafka-based data pipelines for throughput, fault tolerance, and low latency (see the producer sketch after this list)
  • Deliver zero-downtime CI/CD pipelines using GitOps-driven deployment models
  • Establish SRE practices with SLOs, p95 and p99 monitoring, and FinOps discipline
  • Ensure production-ready disaster recovery and business continuity testing
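
As referenced in the Kafka bullet above, a minimal producer sketch using the confluent-kafka client (broker address and topic are placeholders), showing durability-oriented settings:

    from confluent_kafka import Producer

    producer = Producer({
        "bootstrap.servers": "broker:9092",  # placeholder broker
        "acks": "all",                       # wait for full ISR acknowledgement
        "enable.idempotence": True,          # avoid duplicates on producer retries
    })

    def on_delivery(err, msg):
        if err is not None:
            print(f"delivery failed: {err}")

    for i in range(1000):
        producer.produce("events", key=str(i), value=f"payload-{i}",
                         on_delivery=on_delivery)
    producer.flush()  # block until all queued messages are delivered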



If interested, kindly share your updated resume at 82008 31681.

Albert Invent

at Albert Invent

4 candid answers
3 recruiters
Bisman Gill
Posted by Bisman Gill
Remote, BLR
12yrs+
Upto ₹65L / yr (Varies)
Amazon Web Services (AWS)
DevOps
Cloud Computing
Security Information and Event Management (SIEM)
Databases
+2 more

Seeking a Senior Staff Cloud Engineer who will lead the design, development, and optimization of scalable cloud architectures, drive automation across the platform, and collaborate with cross-functional stakeholders to deliver secure, high-performance cloud solutions aligned with business goals.

Responsibilities:

  • Cloud Architecture & Strategy
  • Define and evolve the company’s cloud architecture, with AWS as the primary platform.
  • Design secure, scalable, and resilient cloud-native and event-driven architectures to support product growth and enterprise demands.
  • Create and scale up our platform for integrations with our enterprise customers (webhooks, data pipelines, connectors, batch ingestions, etc.); a webhook-verification sketch follows this list
  • Partner with engineering and product to convert custom solutions into productised capabilities.
  • Security & Compliance Enablement
  • Act as a foundational partner in building out the company’s security and compliance functions.
  • Help define cloud security architecture, policies, and controls to meet enterprise and customer requirements.
  • Guide compliance teams on technical approaches to SOC2, ISO 27001, GDPR, and GxP standards.
  • Mentor engineers and security specialists on embedding secure-by-design and compliance-first practices.
  • Customer & Solutions Enablement
  • Work with Solutions Engineering and customers to design and validate complex deployments.
  • Contribute to processes that productise custom implementations into scalable platform features.
  • Leadership & Influence
  • Serve as a technical thought leader across cloud, data, and security domains.
  • Collaborate with cross-functional leadership (Product, Platform, TPM, Security) to align technical strategy with business goals.
  • Act as an advisor to security and compliance teams during their growth, helping establish scalable practices and frameworks.
  • Represent the company in customer and partner discussions as a trusted cloud and security subject matter expert.
  • Data Platforms & Governance
  • Provide guidance to the data engineering team on database architecture, storage design, and integration patterns.
  • Advise on selection and optimisation of a wide variety of databases (relational, NoSQL, time-series, graph, analytical).
  • Collaborate on data governance frameworks covering lifecycle management, retention, classification, and access controls.
  • Partner with data and compliance teams to ensure regulatory alignment and strong data security practices.
  • Developer Experience & DevOps
  • Build and maintain tools, automation, and CI/CD pipelines that accelerate developer velocity.
  • Promote best practices for infrastructure as code, containerisation, observability, and cost optimisation.
  • Embed security, compliance, and reliability standards into the development lifecycle.
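
As referenced in the integrations bullet above, one hedged way to receive webhooks safely is to verify an HMAC signature before trusting the payload; a minimal Flask sketch, where the header name and shared secret are hypothetical:

    import hashlib
    import hmac

    from flask import Flask, abort, request

    app = Flask(__name__)
    SECRET = b"shared-secret"  # placeholder; store it in a secrets manager

    @app.route("/webhooks/events", methods=["POST"])
    def receive_event():
        # Verify the sender's HMAC-SHA256 signature before trusting the payload.
        sig = request.headers.get("X-Signature", "")
        expected = hmac.new(SECRET, request.get_data(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            abort(401)
        # ... enqueue the event for asynchronous processing ...
        return {"status": "accepted"}, 202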

Requirements:

  • 12+ years of experience in cloud engineering or architecture roles.
  • Deep expertise in AWS and strong understanding of modern distributed application design (microservices, containers, event-driven architectures).
  • Hands-on experience with a wide range of databases (SQL, NoSQL, analytical, and specialized systems).
  • Strong foundation in data management and governance, including lifecycle and compliance.
  • Experience supporting or helping build security and compliance functions within a SaaS or enterprise environment.
  • Expertise with IaC (Terraform, CDK, CloudFormation) and CI/CD pipelines.
  • Strong foundation in networking, security, observability, and performance engineering.
  • Excellent communication and influencing skills, with the ability to partner across technical and business functions.

Good to Have:

  • Exposure to Azure, GCP, or other cloud environments.
  • Experience working in SaaS/PaaS at enterprise scale.
  • Background in product engineering, with experience shaping technical direction in collaboration with product teams.
  • Knowledge of regulatory and compliance standards (SOC2, ISO 27001, GDPR, and GxP).

About Albert Invent

Albert Invent is a cutting-edge AI-driven software company headquartered in Oakland, California, on a mission to empower scientists and innovators in chemistry and materials science to invent the future faster. Every day, scientists in 30+ countries use Albert to accelerate R&D with AI trained like a chemist, bringing better products to market, faster.


Why Join Albert Invent

  • Joining Albert Invent means becoming part of a mission-driven, fast-growing global team at the intersection of AI, data, and advanced materials science.
  • You will collaborate with world-class scientists and technologists to redefine how new materials are discovered, developed, and brought to market.
  • The culture is built on curiosity, collaboration, and ownership, with a strong focus on learning and impact.
  • You will enjoy the opportunity to work on cutting-edge AI tools that accelerate real-world R&D and solve global challenges from sustainability to advanced manufacturing, while growing your career in a high-energy environment.


Financial Services Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
4 - 5 yrs
₹10L - ₹20L / yr
Python
CI/CD
SQL
Kubernetes
Stakeholder management
+14 more

Required Skills: CI/CD Pipeline, Kubernetes, SQL Database, Excellent Communication & Stakeholder Management, Python

 

Criteria:

Looking for candidates with a notice period of 15 days, and a maximum of 30 days.

Looking for candidates from Hyderabad only.

Looking for candidates from EPAM only.

1. 4+ years of software development experience

2. Strong experience with Kubernetes, Docker, and CI/CD pipelines in cloud-native environments.

3. Hands-on with NATS for event-driven architecture and streaming.

4. Skilled in microservices, RESTful APIs, and containerized app performance optimization.

5. Strong in problem-solving, team collaboration, clean code practices, and continuous learning.

6.  Proficient in Python (Flask) for building scalable applications and APIs.

7. Focus: Java, Python, Kubernetes, Cloud-native development

8. SQL database 

 

Description

Position Overview

We are seeking a skilled Developer to join our engineering team. The ideal candidate will have strong expertise in Java and Python ecosystems, with hands-on experience in modern web technologies, messaging systems, and cloud-native development using Kubernetes.


Key Responsibilities

  • Design, develop, and maintain scalable applications using Java and Spring Boot framework
  • Build robust web services and APIs using Python and Flask framework
  • Implement event-driven architectures using the NATS messaging server (see the sketch after this list)
  • Deploy, manage, and optimize applications in Kubernetes environments
  • Develop microservices following best practices and design patterns
  • Collaborate with cross-functional teams to deliver high-quality software solutions
  • Write clean, maintainable code with comprehensive documentation
  • Participate in code reviews and contribute to technical architecture decisions
  • Troubleshoot and optimize application performance in containerized environments
  • Implement CI/CD pipelines and follow DevOps best practices
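
As referenced above, a minimal sketch of the three NATS patterns named later in this posting (pub/sub, request/reply, queue groups), using the nats-py client and a placeholder server URL:

    import asyncio

    import nats

    async def main():
        nc = await nats.connect("nats://localhost:4222")  # placeholder URL

        # Queue group: each message on the subject is delivered to only one
        # member of the "workers" group, giving load-balanced consumers.
        async def handle(msg):
            print(f"{msg.subject}: {msg.data.decode()}")
            await msg.respond(b"ok")  # reply half of request/reply

        await nc.subscribe("orders.created", queue="workers", cb=handle)

        await nc.publish("orders.created", b"order-42")  # fire-and-forget pub/sub
        reply = await nc.request("orders.created", b"order-43", timeout=1)
        print("reply:", reply.data.decode())

        await nc.drain()

    asyncio.run(main())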

Required Qualifications

  • Bachelor's degree in Computer Science, Information Technology, or related field
  • 4+ years of experience in software development
  • Strong proficiency in Java with deep understanding of web technology stack
  • Hands-on experience developing applications with Spring Boot framework
  • Solid understanding of Python programming language with practical Flask framework experience
  • Working knowledge of NATS server for messaging and streaming data
  • Experience deploying and managing applications in Kubernetes
  • Understanding of microservices architecture and RESTful API design
  • Familiarity with containerization technologies (Docker)
  • Experience with version control systems (Git)


Skills & Competencies

  • Technical skills: Java (Spring Boot, Spring Cloud, Spring Security)
  • Python (Flask, SQL Alchemy, REST APIs)
  • NATS messaging patterns (pub/sub, request/reply, queue groups)
  • Kubernetes (deployments, services, ingress, ConfigMaps, Secrets)
  • Web technologies (HTTP, REST, WebSocket, gRPC)
  • Container orchestration and management
  • Soft skills: Problem-solving and analytical thinking
  • Strong communication and collaboration
  • Self-motivated with ability to work independently
  • Attention to detail and code quality
  • Continuous learning mindset
  • Team player with mentoring capabilities


Appiness Interactive
Bengaluru (Bangalore)
8 - 14 yrs
₹14L - ₹20L / yr
DevOps
Windows Azure
Powershell
CI/CD
YAML
+1 more

Job Description:


We are looking for an experienced DevOps Engineer with strong expertise in Azure DevOps, CI/CD pipelines, and PowerShell scripting, who has worked extensively with .NET-based applications in a Windows environment.


Mandatory Skills

  • Strong hands-on experience with Azure DevOps
  • GIT version control
  • CI/CD pipelines (Classic & YAML)
  • Excellent experience in PowerShell scripting
  • Experience working with .NET-based applications
  • Understanding of Solutions, Project files, MSBuild
  • Experience using Visual Studio / MSBuild tools
  • Strong experience in Windows environment
  • End-to-end experience in build, release, and deployment pipelines


Good to Have Skills

  • Terraform (optional / good to have)
  • Experience with JFrog Artifactory
  • SonarQube integration knowledge


Hashone Careers
Madhavan I
Posted by Madhavan I
Remote only
5 - 10 yrs
₹20L - ₹40L / yr
DevOps
Amazon Web Services (AWS)
Kubernetes
CI/CD
Python
+1 more

Job Description: DevOps Engineer

Location: Bangalore / Hybrid / Remote

Company: LodgIQ

Industry: Hospitality / SaaS / Machine Learning


About LodgIQ

Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the travel industry. By leveraging machine learning and artificial intelligence, we enable precise forecasting and optimized pricing for hotel revenue management. Backed by Highgate Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a global presence.


Role Summary:

We are seeking a Senior DevOps Engineer with 5+ years of strong hands-on experience in AWS, Kubernetes, CI/CD, infrastructure as code, and cloud-native technologies. This role involves designing and implementing scalable infrastructure, improving system reliability, and driving automation across our cloud ecosystem.


Key Responsibilities:

• Architect, implement, and manage scalable, secure, and resilient cloud infrastructure on AWS
• Lead DevOps initiatives including CI/CD pipelines, infrastructure automation, and monitoring
• Deploy and manage Kubernetes clusters and containerized microservices
• Define and implement infrastructure as code using Terraform/CloudFormation
• Monitor production and staging environments using tools like CloudWatch, Prometheus, and Grafana
• Support MongoDB and MySQL database administration and optimization
• Ensure high availability, performance tuning, and cost optimization
• Guide and mentor junior engineers, and enforce DevOps best practices
• Drive system security, compliance, and audit readiness in cloud environments
• Collaborate with engineering, product, and QA teams to streamline release processes


Required Qualifications:

• 5+ years of DevOps/Infrastructure experience in production-grade environments
• Strong expertise in AWS services: EC2, EKS, IAM, S3, RDS, Lambda, VPC, etc.
• Proven experience with Kubernetes and Docker in production
• Proficient with Terraform, CloudFormation, or similar IaC tools
• Hands-on experience with CI/CD pipelines using Jenkins, GitHub Actions, or similar
• Advanced scripting in Python, Bash, or Go
• Solid understanding of networking, firewalls, DNS, and security protocols
• Exposure to monitoring and logging stacks (e.g., ELK, Prometheus, Grafana)
• Experience with MongoDB and MySQL in cloud environments

Preferred Qualifications:

• AWS Certified DevOps Engineer or Solutions Architect
• Experience with service mesh (Istio, Linkerd), Helm, or ArgoCD
• Familiarity with Zero Downtime Deployments, Canary Releases, and Blue/Green Deployments
• Background in high-availability systems and incident response
• Prior experience in a SaaS, ML, or hospitality-tech environment


Tools and Technologies You’ll Use:

• Cloud: AWS
• Containers: Docker, Kubernetes, Helm
• CI/CD: Jenkins, GitHub Actions
• IaC: Terraform, CloudFormation
• Monitoring: Prometheus, Grafana, CloudWatch
• Databases: MongoDB, MySQL
• Scripting: Bash, Python
• Collaboration: Git, Jira, Confluence, Slack


Why Join Us?

• Competitive salary and performance bonuses.
• Remote-friendly work culture.
• Opportunity to work on cutting-edge tech in AI and ML.
• Collaborative, high-growth startup environment.
• For more information, visit http://www.lodgiq.com

Top MNC


Agency job
Bengaluru (Bangalore)
7 - 12 yrs
₹10L - ₹25L / yr
Windows Azure
DevOps
MLFlow
MLOps


JD:


• Master’s degree in Computer Science, Computational Sciences, Data Science, Machine Learning, Statistics, Mathematics, or any quantitative field

• Expertise with object-oriented programming (Python, C++)

• Strong expertise in Python libraries like NumPy, Pandas, PyTorch, TensorFlow, and Scikit-learn

• Proven experience in designing and deploying ML systems on cloud platforms (AWS, GCP, or Azure).

• Hands-on experience with MLOps frameworks, model deployment pipelines, and model monitoring tools.

• Track record of scaling machine learning solutions from prototype to production. 

• Experience building scalable ML systems in fast-paced, collaborative environments.

• Working knowledge of adversarial machine learning techniques and their mitigation

• Agile and Waterfall methodologies.

• Personally invested in continuous improvement and innovation.

• Motivated, self-directed individual who works well with minimal supervision.

Global digital transformation solutions provider.


Agency job
via Peak Hire Solutions by Dhara Thakkar
Chennai, Kochi (Cochin), Pune, Trivandrum, Thiruvananthapuram
5 - 7 yrs
₹10L - ₹25L / yr
Google Cloud Platform (GCP)
Jenkins
CI/CD
Docker
Kubernetes
+15 more

Job Description

We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.


Key Responsibilities:

  • Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
  • Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
  • Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
  • Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
  • Ensure compliance with security best practices and organizational policies across GCP environments.
  • Document processes, configurations, and architectural decisions to maintain operational transparency.
  • Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.


Mandatory Skills:

  • Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
  • Jenkins – Expertise in Declarative Pipeline creation and optimization.
  • CI/CD – Strong understanding of automated build, test, and deployment workflows.
  • Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
  • Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
  • Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks (a Python automation sketch follows this list).
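
As referenced in the scripting bullet above, a hedged automation sketch using the python-jenkins client; the server URL, credentials, and job name are placeholders, and it assumes a recent python-jenkins where build_job returns a queue item number:

    import time

    import jenkins  # the python-jenkins package

    server = jenkins.Jenkins("https://jenkins.example.com",
                             username="bot", password="api-token")  # placeholders

    queue_id = server.build_job("deploy-service", {"ENV": "staging"})

    # The queue item gains an "executable" entry once the build actually starts.
    while "executable" not in server.get_queue_item(queue_id):
        time.sleep(2)

    number = server.get_queue_item(queue_id)["executable"]["number"]
    print(server.get_build_info("deploy-service", number)["result"])  # None until finished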


Preferred Skills:

  • Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
  • Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
  • Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
  • GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.


Skills

GCP, Jenkins, CI/CD, AWS



 


Notice period - 0 to 15 days only

Location – Pune, Trivandrum, Kochi, Chennai

Indore
2 - 6 yrs
₹4L - ₹8L / yr
MongoDB
React.js
NodeJS (Node.js)
Express
DevOps
+2 more

Key Responsibilities & Skills


Strong hands-on experience in React.js, Node.js, Express.js, MongoDB

Ability to lead and mentor a development team

Project ownership – sprint planning, code reviews, task allocation

Excellent communication skills for client interactions

Strong decision-making & problem-solving abilities


Nice-to-Have (Bonus Skills)


Experience in system architecture design

Deployment knowledge – AWS / DigitalOcean / Cloud

Understanding of CI/CD pipelines & best coding practices


Why Join InfoSparkles?


Lead from Day One

Work on modern & challenging tech projects

Excellent career growth in a leadership position

AI first Software Development and Implementation Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
10 - 15 yrs
₹20L - ₹32L / yr
Project Management
Salesforce
Project implementation
Roadmaps
Agile/Scrum
+9 more

Position name: Technical Project Manager (Salesforce)

Experience: 10 years and above

Job Location: Baner, Pune (*Mandatory work from office)

Notice Period: Looking for immediate/early joiners with up to 30 days’ notice


Job Description:

● What will your role look like

● Lead end-to-end project execution across Salesforce Clouds (Sales, Service, Marketing, etc.).

● Act as the primary delivery contact for customers — running calls, tracking progress, and ensuring transparency.

● Coordinate with Business Analysts and Architects to keep delivery aligned and on track.

● Step in to lead customer calls, demos, or presentations when required.

● Present project updates, roadmaps, and demos effectively using PowerPoint and Salesforce UI.

● Manage project velocity, risks, dependencies, and drive on-time delivery.

● Apply Agile practices to run sprints, retrospectives, and planning sessions.

● Ensure clarity on team priorities and resolve blockers quickly.


Why you will love this role

● You will work at the intersection of delivery and customer success, ensuring Salesforce initiatives land smoothly.

● Opportunity to lead enterprise-level Salesforce projects without being bogged down in day-to-day configuration.

● A role with high visibility, where your communication and leadership skills will have a direct business impact.

● Be a trusted partner for customers and an anchor for internal teams — driving ownership and execution.

● Exposure to a variety of Salesforce Clouds and modern DevOps practices.


We would like you to bring along

● Excellent communication and presentation skills — ability to simplify complex ideas for non-technical stakeholders.

● Hands-on experience managing Salesforce projects, with at least 2 full-cycle implementations.

● Working knowledge of the Salesforce platform — including declarative tools and a basic grasp of Apex/Flows.

● Strong sense of ownership and accountability to act as a single point of contact for delivery.

● Proficiency with project tools like Jira, Confluence, and version control systems (Git/Copado/etc.).

Media and Entertainment Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
5 - 7 yrs
₹15L - ₹25L / yr
DevOps
Amazon Web Services (AWS)
CI/CD
Infrastructure
Scripting
+28 more

Required Skills: Advanced AWS Infrastructure Expertise, CI/CD Pipeline Automation, Monitoring, Observability & Incident Management, Security, Networking & Risk Management, Infrastructure as Code & Scripting


Criteria:

  • 5+ years of DevOps/SRE experience in cloud-native, product-based companies (B2C scale preferred)
  • Strong hands-on AWS expertise across core and advanced services (EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, VPC, IAM, ELB/ALB, Route53)
  • Proven experience designing high-availability, fault-tolerant cloud architectures for large-scale traffic
  • Strong experience building & maintaining CI/CD pipelines (Jenkins mandatory; GitHub Actions/GitLab CI a plus)
  • Prior experience running production-grade microservices deployments and automated rollout strategies (Blue/Green, Canary)
  • Hands-on experience with monitoring & observability tools (Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.)
  • Solid hands-on experience with MongoDB in production, including performance tuning, indexing & replication
  • Strong scripting skills (Bash, Shell, Python) for automation
  • Hands-on experience with IaC (Terraform, CloudFormation, or Ansible)
  • Deep understanding of networking fundamentals (VPC, subnets, routing, NAT, security groups)
  • Strong experience in incident management, root cause analysis & production firefighting

 

Description

Role Overview

Company is seeking an experienced Senior DevOps Engineer to design, build, and optimize cloud infrastructure on AWS, automate CI/CD pipelines, implement monitoring and security frameworks, and proactively identify scalability challenges. This role requires someone who has hands-on experience running infrastructure at B2C product scale, ideally in media/OTT or high-traffic applications.

 

 Key Responsibilities

1. Cloud Infrastructure — AWS (Primary Focus)

  • Architect, deploy, and manage scalable infrastructure using AWS services such as EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, ELB/ALB, VPC, IAM, Route53, etc.
  • Optimize cloud cost, resource utilization, and performance across environments.
  • Design high-availability, fault-tolerant systems for streaming workloads.

 

2. CI/CD Automation

  • Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
  • Automate deployments for microservices, mobile apps, and backend APIs.
  • Implement blue/green and canary deployments for seamless production rollouts (a canary traffic-shift sketch follows below).
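
As referenced in the rollout bullet above, one hedged way to express a canary step on AWS is weighted forwarding between two ALB target groups via boto3; all ARNs here are placeholders:

    import boto3

    elbv2 = boto3.client("elbv2")

    LISTENER_ARN = "arn:aws:elasticloadbalancing:region:acct:listener/app/web/placeholder"
    BLUE_TG = "arn:aws:elasticloadbalancing:region:acct:targetgroup/web-blue/placeholder"
    GREEN_TG = "arn:aws:elasticloadbalancing:region:acct:targetgroup/web-green/placeholder"

    def shift_traffic(green_weight: int) -> None:
        # Weighted forward action: the listener splits traffic between the
        # stable (blue) and candidate (green) target groups.
        elbv2.modify_listener(
            ListenerArn=LISTENER_ARN,
            DefaultActions=[{
                "Type": "forward",
                "ForwardConfig": {
                    "TargetGroups": [
                        {"TargetGroupArn": BLUE_TG, "Weight": 100 - green_weight},
                        {"TargetGroupArn": GREEN_TG, "Weight": green_weight},
                    ]
                },
            }],
        )

    shift_traffic(10)  # canary step; raise gradually while error rates stay flat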

 

3. Observability & Monitoring

  • Implement logging, metrics, and alerting using tools like Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.
  • Perform proactive performance analysis to minimize downtime and bottlenecks.
  • Set up dashboards for real-time visibility into system health and user traffic spikes.

 

4. Security, Compliance & Risk Highlighting

• Conduct frequent risk assessments and identify vulnerabilities in:

  o Cloud architecture

  o Access policies (IAM)

  o Secrets & key management

  o Data flows & network exposure


• Implement security best practices including VPC isolation, WAF rules, firewall policies, and SSL/TLS management.

 

5. Scalability & Reliability Engineering

  • Analyze traffic patterns for OTT-specific load variations (weekends, new releases, peak hours).
  • Identify scalability gaps and propose solutions across:
  o Microservices
  o Caching layers
  o CDN distribution (CloudFront)
  o Database workloads
  • Perform capacity planning and load testing to ensure readiness for 10x traffic growth.

 

6. Database & Storage Support

  • Administer and optimize MongoDB for high-read/low-latency use cases.
  • Design backup, recovery, and data replication strategies.
  • Work closely with backend teams to tune query performance and indexing (a minimal indexing sketch follows below).
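
As referenced above, a minimal indexing sketch with pymongo; the URI, database, and field names are placeholders:

    from pymongo import ASCENDING, DESCENDING, MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder URI
    events = client.app.events

    # Compound index supporting "recent events for a user" reads.
    events.create_index([("user_id", ASCENDING), ("created_at", DESCENDING)])

    # explain() shows whether the query plan uses an IXSCAN rather than a COLLSCAN.
    plan = events.find({"user_id": 42}).sort("created_at", -1).explain()
    print(plan["queryPlanner"]["winningPlan"])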

 

7. Automation & Infrastructure as Code

  • Implement IaC using Terraform, CloudFormation, or Ansible.
  • Automate repetitive infrastructure tasks to ensure consistency across environments.

 

Required Skills & Experience

Technical Must-Haves

  • 5+ years of DevOps/SRE experience in cloud-native, product-based companies.
  • Strong hands-on experience with AWS (core and advanced services).
  • Expertise in Jenkins CI/CD pipelines.
  • Solid background working with MongoDB in production environments.
  • Good understanding of networking: VPCs, subnets, security groups, NAT, routing.
  • Strong scripting experience (Bash, Python, Shell).
  • Experience handling risk identification, root cause analysis, and incident management.

 

Nice to Have

  • Experience with OTT, video streaming, media, or any content-heavy product environments.
  • Familiarity with containers (Docker), orchestration (Kubernetes/EKS), and service mesh.
  • Understanding of CDN, caching, and streaming pipelines.

 

Personality & Mindset

  • Strong sense of ownership and urgency—DevOps is mission critical at OTT scale.
  • Proactive problem solver with ability to think about long-term scalability.
  • Comfortable working with cross-functional engineering teams.

 

Why join the company?

• Build and operate infrastructure powering millions of monthly users.

• Opportunity to shape DevOps culture and cloud architecture from the ground up.

• High-impact role in a fast-scaling Indian OTT product.

Noida
5 - 7 yrs
₹20L - ₹25L / yr
DevOps

5+ years of DevOps/SRE experience in cloud-native, product-based companies (B2C scale preferred)

Strong hands-on AWS expertise across core and advanced services (EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, VPC, IAM, ELB/ALB, Route53)

Proven experience designing high-availability, fault-tolerant cloud architectures for large-scale traffic

Strong experience building & maintaining CI/CD pipelines (Jenkins mandatory; GitHub Actions/GitLab CI a plus)

Prior experience running production-grade microservices deployments and automated rollout strategies (Blue/Green, Canary)

Hands-on experience with monitoring & observability tools (Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.)

Solid hands-on experience with MongoDB in production, including performance tuning, indexing & replication

Strong scripting skills (Bash, Shell, Python) for automation

Hands-on experience with IaC (Terraform, CloudFormation, or Ansible)

Deep understanding of networking fundamentals (VPC, subnets, routing, NAT, security groups)

Strong experience in incident management, root cause analysis & production firefighting

Global digital transformation solutions provider.


Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad, Chennai, Kochi (Cochin), Bengaluru (Bangalore), Trivandrum, Thiruvananthapuram
12 - 15 yrs
₹20L - ₹40L / yr
Java
DevOps
CI/CD
ReAct (Reason + Act)
React.js
+6 more

Role Proficiency:

Leverage expertise in a technology area (e.g. Java Microsoft technologies or Mainframe/legacy) to design system architecture.


Knowledge Examples:

  • Domain/Industry Knowledge: Basic knowledge of standard business processes within the relevant industry vertical and customer business domain.
  1. Technology Knowledge: Demonstrates working knowledge of more than one technology area related to own area of work (e.g. Java/JEE 5+, Microsoft technologies, or Mainframe/legacy), the customer technology landscape, and multiple frameworks (Struts, JSF, Hibernate, etc.) within one technology area and their applicability. Considers low-level details such as data structures, algorithms, APIs, and libraries; best practices for one technology stack; configuration parameters for successful deployment; and configuration parameters for high performance within one technology stack.
  2. Technology Trends: Demonstrates working knowledge of technology trends related to one technology stack and awareness of technology trends related to at least two technologies.
  3. Architecture Concepts and Principles: Demonstrates working knowledge of standard architectural principles, models, patterns (e.g. SOA, N-Tier, EDA) and perspectives (e.g. TOGAF, Zachman); integration architecture, including input and output components; existing integration methodologies and topologies; source and external systems; non-functional requirements; data architecture; deployment architecture; and architecture governance.
  4. Design Patterns, Tools and Principles: Applies specialized knowledge of design patterns, design principles, practices, and design tools. Knowledge of documenting designs using tools like EA.
  5. Software Development Process, Tools & Techniques: Demonstrates thorough knowledge of the end-to-end SDLC process (Agile and Traditional), SDLC methodology, programming principles, tools, and best practices (refactoring, code packaging, etc.).
  6. Project Management Tools and Techniques: Demonstrates working knowledge of project management processes (such as project scoping, requirements management, change management, risk management, quality assurance, and disaster management) and tools (MS Excel, MPP, client-specific timesheets, capacity planning tools, etc.).
  7. Project Management: Demonstrates working knowledge of project governance frameworks and the RACI matrix, and basic knowledge of project metrics such as utilization, onsite-to-offshore ratio, span of control, fresher ratio, SLAs, and quality metrics.
  8. Estimation and Resource Planning: Working knowledge of estimation and resource planning techniques (e.g. the TCP estimation model) and company-specific estimation templates.
  9. Knowledge Management: Working knowledge of industry knowledge management tools (such as portals and wikis) and company and customer knowledge management tools and techniques (such as workshops, classroom training, self-study, application walkthroughs, and reverse KT).
  10. Technical Standards, Documentation & Templates: Demonstrates working knowledge of various document templates and standards (such as business blueprints, design documents, and test specifications).
  11. Requirement Gathering and Analysis: Demonstrates working knowledge of requirements gathering for (non-functional) requirements; analysis for functional and non-functional requirements; analysis tools (such as functional flow diagrams, activity diagrams, blueprints, and storyboards); techniques (business analysis, process mapping, etc.); and requirements management tools (e.g. MS Excel); plus basic knowledge of functional requirements gathering. Specifically, identifies architectural concerns and documents them as part of IT requirements, including NFRs.
  12. Solution Structuring: Demonstrates working knowledge of service offerings and products.


Additional Comments:

Looking for a Senior Java Architect with 12+ years of experience. Key responsibilities include:

• Excellent technical background with end-to-end architecture experience to design and implement scalable, maintainable, and high-performing systems integrating front-end technologies with back-end services.

• Collaborate with front-end teams to architect React-based user interfaces that are robust, responsive, and aligned with the overall technical architecture.

• Expertise in cloud-based applications on Azure, leveraging key Azure services.

• Lead the adoption of DevOps practices, including CI/CD pipelines, automation, monitoring and logging to ensure reliable and efficient deployment cycles.

• Provide technical leadership to development teams, guiding them in building solutions that adhere to best practices, industry standards and customer requirements.

• Conduct code reviews to maintain high quality code and collaborate with team to ensure code is optimized for performance, scalability and security.

• Collaborate with stakeholders to define requirements and deliver technical solutions aligned with business goals.

• Excellent communication skills

• Mentor team members providing guidance on technical challenges and helping them grow their skill set.

• Good to have: experience in GCP and the retail domain.

 

Skills: DevOps, Azure, Java


Must-Haves

Java (12+ years), React, Azure, DevOps, Cloud Architecture

Strong Java architecture and design experience.

Expertise in Azure cloud services.

Hands-on experience with React and front-end integration.

Proven track record in DevOps practices (CI/CD, automation).

Notice period - 0 to 15 days only

Location: Hyderabad, Chennai, Kochi, Bangalore, Trivandrum

Excellent communication and leadership skills.

Read more
Lovoj

at Lovoj

2 candid answers
LOVOJ CONTACT
Posted by LOVOJ CONTACT
Delhi
3 - 10 yrs
₹8L - ₹14L / yr
Amazon Web Services (AWS)
AWS Lambda
CI/CD
DevOps

Key Responsibilities

  • Design, implement, and maintain CI/CD pipelines for backend, frontend, and mobile applications.
  • Manage cloud infrastructure using AWS (EC2, Lambda, S3, VPC, RDS, CloudWatch, ECS/EKS).
  • Configure and maintain Docker containers and/or Kubernetes clusters.
  • Implement and maintain Infrastructure as Code (IaC) using Terraform / CloudFormation.
  • Automate build, deployment, and monitoring processes.
  • Manage code repositories using Git/GitHub/GitLab, enforce branching strategies.
  • Implement monitoring and alerting using tools like Prometheus, Grafana, CloudWatch, ELK, Splunk.
  • Ensure system scalability, reliability, and security.
  • Troubleshoot production issues and perform root-cause analysis.
  • Collaborate with engineering teams to improve deployment and development workflows.
  • Optimize infrastructure costs and improve performance.
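As a flavour of the cost-optimization automation in the last item above, here is a hedged boto3 sketch that flags running EC2 instances missing a cost-attribution tag; the tag key and region are assumptions, not details from this posting:

```python
# Hypothetical example: flag untagged EC2 instances so their cost can be
# attributed to a team. The tag key and region are illustrative assumptions.
import boto3

REQUIRED_TAG = "team"  # assumed tagging convention

def find_untagged_instances(region: str = "ap-south-1") -> list[str]:
    """Return IDs of running instances missing the required cost tag."""
    ec2 = boto3.client("ec2", region_name=region)
    untagged = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    untagged.append(instance["InstanceId"])
    return untagged

if __name__ == "__main__":
    for instance_id in find_untagged_instances():
        print(f"Missing '{REQUIRED_TAG}' tag: {instance_id}")
```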

Required Skills & Qualifications

  • 3+ years of experience in DevOps, SRE, or Cloud Engineering.
  • Strong hands-on knowledge of AWS cloud services.
  • Experience with Docker, containers, and orchestrators (ECS, EKS, Kubernetes).
  • Strong understanding of CI/CD tools: GitHub Actions, Jenkins, GitLab CI, or AWS CodePipeline.
  • Experience with Linux administration and shell scripting.
  • Strong understanding of Networking, VPC, DNS, Load Balancers, Security Groups.
  • Experience with monitoring/logging tools: CloudWatch, ELK, Prometheus, Grafana.
  • Experience with Terraform or CloudFormation (IaC).
  • Good understanding of Node.js or similar application deployments.
  • Knowledge of NGINX/Apache and load balancing concepts.
  • Strong problem-solving and communication skills.

Preferred/Good to Have

  • Experience with Kubernetes (EKS).
  • Experience with Serverless architectures (Lambda).
  • Experience with Redis, MongoDB, RDS.
  • Certification in AWS Solutions Architect / DevOps Engineer.
  • Experience with security best practices, IAM policies, and DevSecOps.
  • Understanding of cost optimization and cloud cost management.


Read more
AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹60L - ₹80L / yr
Apache Airflow
Apache Spark
AWS CloudFormation
MLOps
DevOps
+23 more

Review Criteria:

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python for pipeline & automation development
  • 4+ years of experience in AWS cloud, including at recent companies
  • (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth

 

Preferred:

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.

 

Job Specific Criteria:

  • CV attachment is mandatory.
  • Please provide the CTC breakup (fixed + variable).
  • Are you okay with an F2F round?
  • Has the candidate filled out the Google form?

 

Role & Responsibilities:

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue) (a sketch follows this list).
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
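As referenced in the pipeline item above, here is a minimal, hedged Airflow 2.x DAG in Python that submits a Spark job to EMR Serverless via boto3. The DAG id, schedule, application id, role ARN, and entry point are placeholder assumptions, not values from this posting:

```python
# Minimal sketch of an Airflow DAG that triggers a (hypothetical) Spark job
# on EMR Serverless via boto3. All IDs and paths are placeholder assumptions.
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator

def submit_spark_job():
    """Submit a Spark job run; the ARN and IDs here are placeholders."""
    emr = boto3.client("emr-serverless", region_name="ap-south-1")
    emr.start_job_run(
        applicationId="app-placeholder-id",
        executionRoleArn="arn:aws:iam::123456789012:role/placeholder",
        jobDriver={
            "sparkSubmit": {"entryPoint": "s3://bucket/jobs/feature_build.py"}
        },
    )

with DAG(
    dag_id="ml_feature_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # "schedule" is the Airflow 2.4+ argument name
    catchup=False,
) as dag:
    PythonOperator(task_id="submit_spark_job", python_callable=submit_spark_job)
```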

 

Ideal Candidate:

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
  • Compute/Orchestration: EKS, ECS, EC2, Lambda
  • Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
  • Workflow: MWAA/Airflow, Step Functions
  • Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.

 

Education:

  • Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Read more
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Noida
8 - 12 yrs
₹60L - ₹80L / yr
DevOps
cicd
Amazon Web Services (AWS)
Terraform
Ansible
+1 more

Strong DevSecOps / Cloud Security profile

Mandatory (Experience 1) – Must have 8+ years total experience in DevSecOps / Cloud Security / Platform Security roles securing AWS workloads and CI/CD systems.

Mandatory (Experience 2) – Must have strong hands-on experience securing AWS services including, but not limited to, KMS, WAF, Shield, CloudTrail, AWS Config, Security Hub, Inspector, Macie, and IAM governance

Mandatory (Experience 3) – Must have hands-on expertise in Identity & Access Security including RBAC, IRSA, PSP/PSS, SCPs, and IAM least-privilege enforcement (see the sketch after these requirements)

Mandatory (Experience 4) – Must have hands-on experience with security automation using Terraform and Ansible for configuration hardening and compliance

Mandatory (Experience 5) – Must have strong container & Kubernetes security experience including Docker image scanning, EKS runtime controls, network policies, and registry security

Mandatory (Experience 6) – Must have strong CI/CD pipeline security expertise including SAST, DAST, SCA, Jenkins Security, artifact integrity, secrets protection, and automated remediation

Mandatory (Experience 7) – Must have experience securing data & ML platforms including databases, data centers/on-prem environments, MWAA/Airflow, and sensitive ETL/ML workflows

Mandatory (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth
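As referenced in the Identity & Access Security requirement above, here is a hedged Python (boto3) illustration of one least-privilege check: flagging customer-managed IAM policies that allow wildcard actions. The reporting format is an assumption:

```python
# Hedged sketch: flag customer-managed IAM policies that allow "*" actions,
# a basic least-privilege audit. Output handling is an assumption.
import boto3

def find_wildcard_policies() -> list[str]:
    """Return ARNs of customer-managed policies granting Action: '*'."""
    iam = boto3.client("iam")
    offenders = []
    for page in iam.get_paginator("list_policies").paginate(Scope="Local"):
        for policy in page["Policies"]:
            version = iam.get_policy_version(
                PolicyArn=policy["Arn"], VersionId=policy["DefaultVersionId"]
            )
            statements = version["PolicyVersion"]["Document"]["Statement"]
            if isinstance(statements, dict):  # single-statement policies
                statements = [statements]
            for stmt in statements:
                actions = stmt.get("Action", [])
                actions = [actions] if isinstance(actions, str) else actions
                if stmt.get("Effect") == "Allow" and "*" in actions:
                    offenders.append(policy["Arn"])
                    break
    return offenders

if __name__ == "__main__":
    for arn in find_wildcard_policies():
        print(f"Wildcard Allow found in: {arn}")
```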

Read more
Hashone Careers
Bengaluru (Bangalore), Pune, Hyderabad
5 - 10 yrs
₹12L - ₹25L / yr
DevOps
Python
cicd
Kubernetes
Docker
+1 more

Job Description

Experience: 5 - 9 years

Location: Bangalore/Pune/Hyderabad

Work Mode: Hybrid (3 days WFO)


Senior Cloud Infrastructure Engineer for Data Platform 


The ideal candidate will play a critical role in designing, implementing, and maintaining cloud infrastructure and CI/CD pipelines to support scalable, secure, and efficient data and analytics solutions. This role requires a strong understanding of cloud-native technologies, DevOps best practices, and hands-on experience with Azure and Databricks.


Key Responsibilities:


Cloud Infrastructure Design & Management

Architect, deploy, and manage scalable and secure cloud infrastructure on Microsoft Azure.

Implement best practices for Azure Resource Management, including resource groups, virtual networks, and storage accounts.

Optimize cloud costs and ensure high availability and disaster recovery for critical systems.


Databricks Platform Management

Set up, configure, and maintain Databricks workspaces for data engineering, machine learning, and analytics workloads.

Automate cluster management, job scheduling, and monitoring within Databricks.

Collaborate with data teams to optimize Databricks performance and ensure seamless integration with Azure services.
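By way of illustration of the Databricks automation described above, here is a minimal Python sketch that lists jobs through the Databricks Jobs API 2.1; the workspace host and token are assumed to come from environment variables:

```python
# Hedged sketch: list Databricks jobs via the REST API (Jobs API 2.1).
# Workspace URL and token come from environment variables -- an assumption.
import os

import requests

def list_jobs() -> list[dict]:
    host = os.environ["DATABRICKS_HOST"]    # e.g. the workspace base URL
    token = os.environ["DATABRICKS_TOKEN"]  # personal access token
    resp = requests.get(
        f"{host}/api/2.1/jobs/list",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("jobs", [])

if __name__ == "__main__":
    for job in list_jobs():
        print(job["job_id"], job["settings"]["name"])
```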


CI/CD Pipeline Development

Design and implement CI/CD pipelines for deploying infrastructure, applications, and data workflows using tools like Azure DevOps, GitHub Actions, or similar.

Automate testing, deployment, and monitoring processes to ensure rapid and reliable delivery of updates.


Monitoring & Incident Management

Implement monitoring and alerting solutions using tools like Dynatrace, Azure Monitor, Log Analytics, and Databricks metrics.

Troubleshoot and resolve infrastructure and application issues, ensuring minimal downtime.


Security & Compliance

Enforce security best practices, including identity and access management (IAM), encryption, and network security.

Ensure compliance with organizational and regulatory standards for data protection and cloud operations.


Collaboration & Documentation

Work closely with cross-functional teams, including data engineers, software developers, and business stakeholders, to align infrastructure with business needs.

Maintain comprehensive documentation for infrastructure, processes, and configurations.


Required Qualifications

Education: Bachelor’s degree in Computer Science, Engineering, or a related field.


Must Have Experience:

6+ years of experience in DevOps or Cloud Engineering roles.

Proven expertise in Microsoft Azure services, including Azure Data Lake, Azure Databricks, Azure Data Factory (ADF), Azure Functions, Azure Kubernetes Service (AKS), and Azure Active Directory.

Hands-on experience with Databricks for data engineering and analytics.


Technical Skills:

Proficiency in Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Bicep.

Strong scripting skills in Python or Bash.

Experience with containerization and orchestration tools like Docker and Kubernetes.

Familiarity with version control systems (e.g., Git) and CI/CD tools (e.g., Azure DevOps, GitHub Actions).


Soft Skills:

Strong problem-solving and analytical skills.

Excellent communication and collaboration abilities.

Read more
Tops Infosolutions
Zurin Momin
Posted by Zurin Momin
Ahmedabad
5 - 12 yrs
₹9.5L - ₹14L / yr
Laravel
JavaScript
DevOps
CI/CD
Kubernetes
+2 more

Job Title: DevOps Engineer


Job Description: We are seeking an experienced DevOps Engineer to support our Laravel, JavaScript (Node.js, React, Next.js), and Python development teams. The role involves building and maintaining scalable CI/CD pipelines, automating deployments, and managing cloud infrastructure to ensure seamless delivery across multiple environments.


Responsibilities:

Design, implement, and maintain CI/CD pipelines for Laravel, Node.js, and Python projects.

Automate application deployment and environment provisioning using AWS and containerization tools.

Manage and optimize AWS infrastructure (EC2, ECS, RDS, S3, CloudWatch, IAM, Lambda).

Implement Infrastructure as Code (IaC) using Terraform or AWS CloudFormation. Manage configuration automation using Ansible.

Build and manage containerized environments using Docker (Kubernetes is a plus).

Monitor infrastructure and application performance using CloudWatch, Prometheus, or Grafana.

Ensure system security, data integrity, and high availability across environments.

Collaborate with development teams to streamline builds, testing, and deployments.

Troubleshoot and resolve infrastructure and deployment-related issues.
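To illustrate the deployment responsibilities above, here is a small, hedged Python sketch of a post-deploy smoke check that could gate a pipeline stage; the endpoint URL and retry budget are illustrative assumptions:

```python
# Hedged sketch of a post-deploy smoke check that could gate a pipeline
# stage. The URL and retry budget are illustrative assumptions.
import sys
import time

import requests

def wait_until_healthy(url: str, attempts: int = 10, delay: float = 5.0) -> bool:
    """Poll a health endpoint until it returns 200 or attempts run out."""
    for attempt in range(1, attempts + 1):
        try:
            if requests.get(url, timeout=5).status_code == 200:
                print(f"Healthy after {attempt} attempt(s)")
                return True
        except requests.RequestException as exc:
            print(f"Attempt {attempt}: {exc}")
        time.sleep(delay)
    return False

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "http://localhost:8080/health"
    sys.exit(0 if wait_until_healthy(target) else 1)
```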


Required Skills:

AWS (EC2, ECS, RDS, S3, IAM, Lambda)

CI/CD Tools: Jenkins, GitLab CI/CD, AWS CodePipeline, CodeBuild, CodeDeploy

Infrastructure as Code: Terraform or AWS CloudFormation

Configuration Management: Ansible

Containers: Docker (Kubernetes preferred)

Scripting: Bash, Python

Version Control: Git, GitHub, GitLab

Web Servers: Apache, Nginx (preferred)

Databases: MySQL, MongoDB (preferred)


Qualifications:

3+ years of experience as a DevOps Engineer in a production environment.

Proven experience supporting Laravel, Node.js, and Python-based applications.

Strong understanding of CI/CD, containerization, and automation practices.

Experience with infrastructure monitoring, logging, and performance optimization.

Familiarity with agile and collaborative development processes.

Read more
Product Innovation Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
3 - 5 yrs
₹8L - ₹10L / yr
Project Management
Program Management
Stakeholder management
IT program management
Software project management
+25 more

ROLES AND RESPONSIBILITIES:

Standardization and Governance:

  • Establishing and maintaining project management standards, processes, and methodologies.
  • Ensuring consistent application of project management policies and procedures.
  • Implementing and managing project governance processes.


Resource Management:

  • Facilitating the sharing of resources, tools, and methodologies across projects.
  • Planning and allocating resources effectively.
  • Managing resource capacity and forecasting future needs.


Communication and Reporting:

  • Ensuring effective communication and information flow among project teams and stakeholders.
  • Monitoring project progress and reporting on performance.
  • Communicating strategic work progress, including risks and benefits.


Project Portfolio Management:

  • Supporting strategic decision-making by aligning projects with organizational goals.
  • Selecting and prioritizing projects based on business objectives.
  • Managing project portfolios and ensuring efficient resource allocation across projects.


Process Improvement:

  • Identifying and implementing industry best practices into workflows.
  • Improving project management processes and methodologies.
  • Optimizing project delivery and resource utilization.


Training and Support:

  • Providing training and support to project managers and team members.
  • Offering project management tools, best practices, and reporting templates.


Other Responsibilities:

  • Managing documentation of project history for future reference.
  • Coaching project teams on implementing project management steps.
  • Analysing financial data and managing project costs.
  • Interfacing with functional units (Domain, Delivery, Support, DevOps, HR, etc.).
  • Advising and supporting senior management.


IDEAL CANDIDATE:

  • 3+ years of proven experience in Project Management roles with strong exposure to PMO processes, standards, and governance frameworks.
  • Demonstrated ability to manage project status tracking, risk assessments, budgeting, variance analysis, and defect tracking across multiple projects.
  • Proficient in Project Planning and Scheduling using tools like MS Project and Advanced Excel (e.g., Gantt charts, pivot tables, macros).
  • Experienced in developing project dashboards, reports, and executive summaries for senior management and stakeholders.
  • Active participant in Agile environments, attending and contributing to Scrum calls, sprint planning, and retrospectives.
  • Holds a Bachelor’s degree in a relevant field (e.g., Engineering, Business, IT, etc.).
  • Preferably familiar with Jira, Azure DevOps, and Power BI for tracking and visualization of project data.
  • Exposure to working in product-based companies or fast-paced, innovation-driven environments is a strong advantage.
Read more
Agentic AI Platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Gurugram
3 - 6 yrs
₹20L - ₹30L / yr
SaaS
Release Management
Git
DevOps
CI/CD
+5 more

ROLES AND RESPONSIBILITIES:

  • Plan, schedule, and manage all releases across product and customer projects.
  • Define and maintain the release calendar, identifying dependencies and managing risks proactively.
  • Partner with engineering, QA, DevOps, and product management to ensure release readiness.
  • Create release documentation (notes, guides, videos) for both internal stakeholders and customers (a small automation sketch follows this list).
  • Run a release review process with product leads before publishing.
  • Publish releases and updates to the company website release section.
  • Drive communication of release details to internal teams and customers in a clear, concise way.
  • Manage post-release validation and rollback procedures when required.
  • Continuously improve release management through automation, tooling, and process refinement.
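As mentioned in the documentation item above, here is a minimal Python sketch that drafts release notes from the commits between two Git tags; the tag names and formatting are illustrative assumptions:

```python
# Hedged sketch: draft release notes from commits between two tags using
# the git CLI. Tag names and note formatting are illustrative assumptions.
import subprocess

def draft_release_notes(prev_tag: str, new_tag: str) -> str:
    log = subprocess.run(
        ["git", "log", "--oneline", f"{prev_tag}..{new_tag}"],
        capture_output=True, text=True, check=True,
    ).stdout
    lines = [f"- {line}" for line in log.splitlines()]
    return f"Release {new_tag}\n" + "\n".join(lines)

if __name__ == "__main__":
    print(draft_release_notes("v1.4.0", "v1.5.0"))
```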


IDEAL CANDIDATE:

  • 3+ years of experience in Release Management, DevOps, or related roles.
  • Strong knowledge of CI/CD pipelines, source control (Git), and build/deployment practices.
  • Experience creating release documentation and customer-facing content (videos, notes, FAQs).
  • Excellent communication and stakeholder management skills; able to translate technical changes into business impact.
  • Familiarity with SaaS, iPaaS, or enterprise software environments is a strong plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive salary package.
  • Opportunity to learn from and work with senior leadership & founders.
  • Build solutions for large enterprises that move from concept to real-world impact.
  • Exceptional career growth pathways in a highly innovative and rapidly scaling environment.
Read more
Mckinely and rice
Pune, Noida
5 - 15 yrs
₹5L - ₹25L / yr
MongoDB
NodeJS (Node.js)
Generative AI
Express
DevOps
+2 more

Company Overview 

McKinley Rice is not just a company; it's a dynamic community, the next evolutionary step in professional development. Spiritually, we're a hub where individuals and companies converge to unleash their full potential. Organizationally, we are a conglomerate composed of various entities, each contributing to the larger narrative of global excellence.

Redrob by McKinley Rice: Redefining Prospecting in the Modern Sales Era


Backed by a $40 million Series A funding from leading Korean & US VCs, Redrob is building the next frontier in global outbound sales. We’re not just another database—we’re a platform designed to eliminate the chaos of traditional prospecting. In a world where sales leaders chase meetings and deals through outdated CRMs, fragmented tools, and costly lead-gen platforms, Redrob provides a unified solution that brings everything under one roof.

Inspired by the breakthroughs of Salesforce, LinkedIn, and HubSpot, we’re creating a future where anyone, not just enterprise giants, can access real-time, high-quality data on 700 M+ decision-makers, all in just a few clicks.

At Redrob, we believe the way businesses find and engage prospects is broken. Sales teams deserve better than recycled data, clunky workflows, and opaque credit-based systems. That’s why we’ve built a seamless engine for:

  • Precision prospecting
  • Intent-based targeting
  • Data enrichment from 16+ premium sources
  • AI-driven workflows to book more meetings, faster

We’re not just streamlining outbound—we’re making it smarter, scalable, and accessible. Whether you’re an ambitious startup or a scaled SaaS company, Redrob is your growth copilot for unlocking warm conversations with the right people, globally.



EXPERIENCE



Duties you'll be entrusted with:


  • Develop and execute scalable APIs and applications using the Node.js or Nest.js framework
  • Writing efficient, reusable, testable, and scalable code.
  • Understanding, analyzing, and implementing – Business needs, feature modification requests, and conversion into software components
  • Integration of user-oriented elements into different applications, data storage solutions
  • Developing – Backend components to enhance performance and responsiveness, server-side logic and platform, statistical learning models, and highly responsive web applications
  • Designing and implementing – High availability and low latency applications, data protection and security features
  • Performance tuning and automation of applications and enhancing the functionalities of current software systems.
  • Keeping abreast with the latest technology and trends.


Expectations from you:


Basic Requirements


  • Minimum qualification: Bachelor’s degree or more in Computer Science, Software Engineering, Artificial Intelligence, or a related field.
  • Experience with Cloud platforms (AWS, Azure, GCP).
  • Strong understanding of monitoring, logging, and observability practices.
  • Experience with event-driven architectures (e.g., Kafka, RabbitMQ).
  • Expertise in designing, implementing, and optimizing Elasticsearch.
  • Work with modern tools including Jira, Slack, GitHub, Google Docs, etc.
  • Expertise in Event driven architecture.
  • Experience in Integrating Generative AI APIs.
  • Working experience in high user concurrency.
  • Experience in scaled databases for handling millions of records – indexing, retrieval, etc.


Technical Skills


  • Demonstrable experience in web application development with expertise in Node.js or Nest.js.
  • Knowledge of database technologies and agile development methodologies.
  • Experience working with databases, such as MySQL or MongoDB.
  • Familiarity with web development frameworks, such as Express.js.
  • Understanding of microservices architecture and DevOps principles.
  • Well-versed with AWS and serverless architecture.



Soft Skills


  • A quick and critical thinker with the ability to come up with a number of ideas about a topic and bring fresh and innovative ideas to the table to enhance the visual impact of our content.
  • Potential to apply innovative and exciting ideas, concepts, and technologies.
  • Stay up-to-date with the latest design trends, animation techniques, and software advancements.
  • Multi-tasking and time-management skills, with the ability to prioritize tasks.


THRIVE


Some of the extensive benefits of being part of our team:


  • We offer skill enhancement and educational reimbursement opportunities to help you further develop your expertise.
  • The Member Reward Program provides an opportunity for you to earn up to INR 85,000 as an annual Performance Bonus.
  • The McKinley Cares Program has a wide range of benefits:
  • The wellness program covers sessions for mental wellness, and fitness and offers health insurance.
  • In-house benefits have a referral bonus window and sponsored social functions.
  • An Expanded Leave Basket including paid Maternity and Paternity Leaves and rejuvenation Leaves apart from the regular 20 leaves per annum. 
  • Our Family Support benefits not only include maternity and paternity leaves but also extend to provide childcare benefits.
  • In addition to the retention bonus, our McKinley Retention Benefits program also includes a Leave Travel Allowance program.
  • We also offer an exclusive McKinley Loan Program designed to assist our employees during challenging times and alleviate financial burdens.


Read more
Remote, Pune
4 - 7 yrs
₹5L - ₹20L / yr
NodeJS (Node.js)
TypeScript
PostgreSQL
Prisma
Amazon Web Services (AWS)
+1 more

Role: Senior Backend Engineer (Node.js + TypeScript + Postgres)

Location: Pune

Type: Full-Time



 Who We Are:

After a highly successful launch, Azodha is ready to take its next major step. We are seeking a passionate and experienced Senior Backend Engineer to build and enhance a disruptive healthcare product. This is a unique opportunity to get in on the ground floor of a fast-growing startup and play a pivotal role in shaping both the product and the team.


If you are an experienced backend engineer who thrives in an agile startup environment and has a strong technical background, we want to hear from you!


About The Role:

As a Senior Backend Engineer at Azodha, you’ll play a key role in architecting, solutioning, and driving development of our AI-led interoperable digital enablement platform. You will work closely with the founder/CEO to refine the product vision, drive product innovation and delivery, and grow with a strong technical team.


What You’ll Do:

* Technical Excellence: Design, develop, and scale backend services using Node.js and TypeScript, including REST and GraphQL APIs. Ensure systems are scalable, secure, and high-performing.

* Data Management and Integrity: Work with Prisma or TypeORM and relational databases like PostgreSQL and MySQL.

* Continuous Improvement: Stay updated with the latest trends in backend development, incorporating new technologies where appropriate. Drive innovation and efficiency within the team

* Utilize ORMs such as Prisma or TypeORM to interact with database and ensure data integrity.

* Follow Agile sprint methodology for development.

* Conduct code reviews to maintain code quality and adherence to best practices.

* Optimize API performance for optimal user experiences.

* Participate in the entire development lifecycle, from initial planning and design through maintenance.

* Troubleshoot and debug issues to ensure system stability.

* Collaborate with QA teams to ensure high quality releases.

* Mentor and provide guidance to junior developers, offering technical expertise and constructive feedback.


Requirements

* Bachelor's degree in Computer Science, Software Engineering, or a related field.

* 5+ years of hands-on experience in backend development using Node.js and TypeScript.

* Experience working on Postgres or MySQL.

* Proficiency in TypeScript and its application in Node.js

* Experience with ORM such as Prisma or TypeORM.

* Familiarity with Agile development methodologies.

* Strong analytical and problem solving skills.

* Ability to work independently and in a team oriented, fast-paced environment.

* Excellent written and oral communication skills.

* Self motivated and proactive attitude.


Preferred:

* Experience with other backend technologies and languages.

* Familiarity with continuous integration and deployment process.

* Contributions to open-source projects related to backend development.


Note: please do not apply if your primary database is not PostgreSQL.


Join our team of talented engineers and be part of building cutting-edge backend systems that drive our applications. As a Senior Backend Engineer, you'll have the opportunity to shape the future of our backend infrastructure and contribute to the company's success. If you are passionate about backend development and meet the above requirements, we encourage you to apply and become a valued member of our team at Azodha.

Read more
Neuvamacro Technology Pvt Ltd
Remote only
5 - 10 yrs
₹13L - ₹18L / yr
PowerBI
Office 365
Microsoft Dynamics
Amazon Web Services (AWS)
JavaScript
+10 more

We are seeking a highly skilled Power Platform Developer with deep expertise in designing, developing, and deploying solutions using Microsoft Power Platform. The ideal candidate will have strong knowledge of Power Apps, Power Automate, Power BI, Power Pages, and Dataverse, along with integration capabilities across Microsoft 365, Azure, and third-party systems.


Key Responsibilities

  • Solution Development:
  • Design and build custom applications using Power Apps (Canvas & Model-Driven).
  • Develop automated workflows using Power Automate for business process optimization.
  • Create interactive dashboards and reports using Power BI for data visualization and analytics.
  • Configure and manage Dataverse for secure data storage and modelling.
  • Develop and maintain Power Pages for external-facing portals.
  • Integration & Customization:
  • Integrate Power Platform solutions with Microsoft 365, Dynamics 365, Azure services, and external APIs.
  • Implement custom connectors and leverage Power Platform SDK for advanced scenarios.
  • Utilize Azure Functions, Logic Apps, and REST APIs for extended functionality (see the sketch after this list).
  • Governance & Security:
  • Apply best practices for environment management, ALM (Application Lifecycle Management), and solution deployment.
  • Ensure compliance with security, data governance, and licensing guidelines.
  • Implement role-based access control and manage user permissions.
  • Performance & Optimization:
  • Monitor and optimize app performance, workflow efficiency, and data refresh strategies.
  • Troubleshoot and resolve technical issues promptly.
  • Collaboration & Documentation:
  • Work closely with business stakeholders to gather requirements and translate them into technical solutions.
  • Document architecture, workflows, and processes for maintainability.
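As referenced in the REST API item above, here is a minimal Python sketch that triggers a Power BI dataset refresh via the official REST API; Azure AD token acquisition is omitted and the dataset id is a placeholder assumption:

```python
# Hedged sketch: trigger a Power BI dataset refresh through the REST API.
# Token acquisition is omitted and the dataset id is a placeholder.
import os

import requests

POWER_BI_API = "https://api.powerbi.com/v1.0/myorg"

def refresh_dataset(dataset_id: str) -> None:
    token = os.environ["POWER_BI_TOKEN"]  # assumed to be an AAD access token
    resp = requests.post(
        f"{POWER_BI_API}/datasets/{dataset_id}/refreshes",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()  # 202 Accepted means the refresh was queued

if __name__ == "__main__":
    refresh_dataset("00000000-0000-0000-0000-000000000000")
```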


Required Skills & Qualifications

  • Technical Expertise:
  • Strong proficiency in Power Apps (Canvas & Model-Driven), Power Automate, Power BI, Power Pages, and Dataverse.
  • Experience with Microsoft 365, Dynamics 365, and Azure services.
  • Knowledge of JavaScript, TypeScript, C#, .NET, and Power Fx for custom development.
  • Familiarity with SQL, DAX, and data modeling.
  • Additional Skills:
  • Understanding of ALM practices, solution packaging, and deployment pipelines.
  • Experience with Git, Azure DevOps, or similar tools for version control and CI/CD.
  • Strong problem-solving and analytical skills.
  • Certifications (Preferred):
  • Microsoft Certified: Power Platform Developer Associate.
  • Microsoft Certified: Power Platform Solution Architect Expert.


Soft Skills

  • Excellent communication and collaboration skills.
  • Ability to work in agile environments and manage multiple priorities.
  • Strong documentation and presentation abilities.

 

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Praffull Shinde
Posted by Praffull Shinde
Pune, Bengaluru (Bangalore)
4 - 9 yrs
Best in industry
Python
Linux
Python Scripting
DevOps
Amazon Web Services (AWS)
+3 more

Job Description: Python Engineer


Role Summary

We are looking for a talented Python Engineer to design, develop, and maintain high-quality backend applications and automation solutions. The ideal candidate should have strong programming skills, familiarity with modern development practices, and the ability to work in a fast-paced, collaborative environment.


Key Responsibilities:


Python Development & Automation

  • Design, develop, and maintain Python scripts, tools, and automation frameworks.
  • Build automation for operational tasks such as deployment, monitoring, system checks, and maintenance.
  • Write clean, modular, and well-documented Python code following best practices.
  • Develop APIs, CLI tools, or microservices when required.

Linux Systems Engineering

  • Manage, configure, and troubleshoot Linux environments (RHEL, CentOS, Ubuntu).
  • Perform system performance tuning, log analysis, and root-cause diagnostics.
  • Work with system services, processes, networking, file systems, and security controls.
  • Implement shell scripting (bash) alongside Python for system-level automation.
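A small, hedged example of the Python-for-systems automation described above: it checks root-filesystem headroom and whether key services are active. The service names and the 85% threshold are illustrative assumptions:

```python
# Hedged sketch of routine Linux automation: check disk headroom and
# whether key services are active. Names and threshold are assumptions.
import shutil
import subprocess

SERVICES = ["nginx", "crond"]  # assumed services to watch
DISK_THRESHOLD = 0.85          # alert above 85% usage

def disk_usage_ratio(path: str = "/") -> float:
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def service_active(name: str) -> bool:
    # systemctl exits non-zero when the unit is not active
    result = subprocess.run(
        ["systemctl", "is-active", "--quiet", name], check=False
    )
    return result.returncode == 0

if __name__ == "__main__":
    ratio = disk_usage_ratio()
    status = " -- OVER THRESHOLD" if ratio > DISK_THRESHOLD else ""
    print(f"/ usage: {ratio:.0%}{status}")
    for svc in SERVICES:
        print(f"{svc}: {'active' if service_active(svc) else 'NOT active'}")
```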

CI/CD & Infrastructure Support

  • Support integration of Python automation into CI/CD pipelines (Jenkins).
  • Participate in build and release processes for infrastructure components.
  • Ensure automation aligns with established infrastructure standards and governance.
  • Use Bash scripting together with Python to improve automation efficiency.

Cloud & DevOps Collaboration (if applicable)

  • Collaborate with Cloud/DevOps engineers on automation for AWS or other cloud platforms.
  • Integrate Python tools with configuration management tools such as Chef or Ansible, or with Terraform modules.
  • Contribute to containerization efforts (Docker, Kubernetes) leveraging Python automation.


Read more
appscrip

at appscrip

2 recruiters
Kanika Gaur
Posted by Kanika Gaur
Bengaluru (Bangalore)
1 - 3 yrs
₹4L - ₹10L / yr
DevOps
Windows Azure
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

Job Title: Sr. DevOps Engineer

Experience Required: 2 to 4 years in DevOps or related fields

Employment Type: Full-time


About the Role:

We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.


Key Responsibilities:

Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).

CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.

Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.

Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, New Relic, or Datadog.
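As a concrete, hedged example of this monitoring work, the Python sketch below exposes a custom application metric for Prometheus to scrape using the official prometheus_client library; the port and the metric itself are illustrative assumptions:

```python
# Hedged sketch: expose a custom application metric for Prometheus to
# scrape. The port, metric name, and fake reading are assumptions.
import random
import time

from prometheus_client import Gauge, start_http_server

QUEUE_DEPTH = Gauge("app_queue_depth", "Pending jobs in the work queue")

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        QUEUE_DEPTH.set(random.randint(0, 50))  # stand-in for a real reading
        time.sleep(15)
```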

Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.

Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.

Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.

Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.


Required Skills & Qualifications:

Technical Expertise:

Strong proficiency in cloud platforms like AWS, Azure, or GCP.

Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).

Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.

Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI.

Proficiency in scripting languages (e.g., Python, Bash, PowerShell).

Soft Skills:

Excellent communication and leadership skills.

Strong analytical and problem-solving abilities.

Proven ability to manage and lead a team effectively.

Experience:

4+ years of experience in DevOps or Site Reliability Engineering (SRE).

4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.

Strong understanding of microservices, APIs, and serverless architectures.


Nice to Have:

Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.

Experience with GitOps tools such as ArgoCD or Flux.

Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).


Perks & Benefits:

Competitive salary and performance bonuses.

Comprehensive health insurance for you and your family.

Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.

Flexible working hours and remote work options.

Collaborative and inclusive work culture.


Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.


You can directly contact us: Nine three one six one two zero one three two

Read more