
50+ Kubernetes Jobs in Pune | Kubernetes Job openings in Pune

Apply to 50+ Kubernetes Jobs in Pune on CutShort.io. Explore the latest Kubernetes Job opportunities across top companies like Google, Amazon & Adobe.

Celcom Solutions Global

Posted by Bisman Gill
Pune
8+ yrs
Up to ₹27L / yr (varies)
Java
Spring Boot
React.js
Redis
Kubernetes



Responsibilities and Expectations from the Role

  • Proven hands-on software development experience, with proven working experience in Java development.
  • Hands-on experience designing and developing applications using Java EE platforms; object-oriented analysis and design using common design patterns.
  • Excellent knowledge of relational databases, SQL, and ORM technologies (JPA2, Hibernate); experience with the Spring Framework.
  • Experience with Spring Boot and with developing web applications using at least one popular web framework (Spring MVC, Struts 2).
  • Troubleshoot and debug issues, identify root causes, and implement effective solutions.
  • Design, develop, and implement high-performance, scalable, and secure Java applications.


Desired Competencies (Technical/Behavioral Competency)

  • Java EE platform, JPA2, Hibernate, Spring, MVC and Struts 2, microservices, Spring Boot, ReactJS, Redis cache, Keycloak, and Kubernetes are mandatory
  • Front-end technology: HTML, CSS, Bootstrap, JavaScript, React
  • Back-end technology: Core Java, Spring Core, Spring Boot, Oracle DB, microservices
  • 5+ years of hands-on development experience with Java and Spring Boot
  • Core Java, Spring Boot, React JS, SQL
  • Good to have: REST API, Bootstrap
Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune, Trivandrum (Thiruvananthapuram)
8 - 10 yrs
₹20L - ₹24L / yr
Java
Python
API
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
+13 more

Job Details

Job Title: Lead Software Engineer - Java, Python, API Development

Industry: Global digital transformation solutions provider

Domain: Information Technology (IT)

Experience Required: 8-10 years

Employment Type: Full Time

Job Location: Pune & Trivandrum (Thiruvananthapuram)

CTC Range: Best in Industry

 

Job Description

Job Summary

We are seeking a Lead Software Engineer with strong hands-on expertise in Java and Python to design, build, and optimize scalable backend applications and APIs. The ideal candidate will bring deep experience in cloud technologies, large-scale data processing, and leading the design of high-performance, reliable backend systems.

 

Key Responsibilities

  • Design, develop, and maintain backend services and APIs using Java and Python
  • Build and optimize Java-based APIs for large-scale data processing
  • Ensure high performance, scalability, and reliability of backend systems
  • Architect and manage backend services on cloud platforms (AWS, GCP, or Azure)
  • Collaborate with cross-functional teams to deliver production-ready solutions
  • Lead technical design discussions and guide best practices
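The API-for-large-scale-data-processing responsibility above usually comes down to streaming records through the system in bounded batches rather than loading everything at once. A minimal Python sketch of that pattern (function names are illustrative, not from the posting):

```python
from itertools import islice
from typing import Callable, Iterable, Iterator, List


def batched(records: Iterable[int], size: int) -> Iterator[List[int]]:
    """Yield fixed-size batches so a large dataset never sits fully in memory."""
    it = iter(records)
    while batch := list(islice(it, size)):
        yield batch


def process_large_dataset(records: Iterable[int],
                          transform: Callable[[int], int],
                          batch_size: int = 1000) -> int:
    """Stream records through `transform` batch by batch; return count processed."""
    processed = 0
    for batch in batched(records, batch_size):
        for record in batch:
            transform(record)
        processed += len(batch)
    return processed
```

The same batching shape applies whether the source is a database cursor, a Kafka consumer, or a paginated API.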

 

Requirements

  • 8+ years of experience in backend software development
  • Strong proficiency in Java and Python
  • Proven experience building scalable APIs and data-driven applications
  • Hands-on experience with cloud services and distributed systems
  • Solid understanding of databases, microservices, and API performance optimization

 

Nice to Have

  • Experience with Spring Boot, Flask, or FastAPI
  • Familiarity with Docker, Kubernetes, and CI/CD pipelines
  • Exposure to Kafka, Spark, or other big data tools

 

Skills

Java, Python, API Development, Data Processing, AWS Backend

 


 

Must-Haves

Java (8+ years), Python (8+ years), API Development (8+ years), Cloud Services (AWS/GCP/Azure), Database & Microservices


Mandatory Skills: Java, API Development, and AWS

 

******

Notice period - 0 to 15 days only

Job stability is mandatory

Location: Pune, Trivandrum

NonStop io Technologies Pvt Ltd
Posted by Kalyani Wadnere
Pune
4 - 7 yrs
Best in industry
DevOps
Reliability engineering
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
+5 more

About NonStop io Technologies:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

We are looking for a skilled and proactive DevOps Engineer to join our growing engineering team. The ideal candidate will have hands-on experience in building, automating, and managing scalable infrastructure and CI/CD pipelines. You will work closely with development, QA, and product teams to ensure reliable deployments, performance, and system security.


Roles and Responsibilities:

● Design, implement, and manage CI/CD pipelines for multiple environments

● Automate infrastructure provisioning using Infrastructure as Code tools

● Manage and optimize cloud infrastructure on AWS, Azure, or GCP

● Monitor system performance, availability, and security

● Implement logging, monitoring, and alerting solutions

● Collaborate with development teams to streamline release processes

● Troubleshoot production issues and ensure high availability

● Implement containerization and orchestration solutions such as Docker and Kubernetes

● Enforce DevOps best practices across the engineering lifecycle

● Ensure security compliance and data protection standards are maintained
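The monitoring and alerting responsibilities above often reduce to computing a latency percentile and comparing it against an agreed threshold. A minimal Python sketch (the p95 metric and the 500 ms threshold are illustrative assumptions, not values from the posting):

```python
import math
from typing import List


def percentile(samples: List[float], pct: float) -> float:
    """Nearest-rank percentile of a non-empty sample list."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]


def should_alert(latencies_ms: List[float],
                 p95_threshold_ms: float = 500.0) -> bool:
    """Fire an alert when p95 latency breaches the threshold."""
    return percentile(latencies_ms, 95) > p95_threshold_ms
```

In practice this evaluation lives inside a tool like Prometheus or Datadog; the sketch only shows the underlying check.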


Requirements:

● 4 to 7 years of experience in DevOps or Site Reliability Engineering

● Strong experience with cloud platforms such as AWS, Azure, or GCP - Relevant Certifications will be a great advantage

● Hands-on experience with CI/CD tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps

● Experience working in microservices architecture

● Exposure to DevSecOps practices

● Experience in cost optimization and performance tuning in cloud environments

● Experience with Infrastructure as Code tools such as Terraform, CloudFormation, or ARM

● Strong knowledge of containerization using Docker

● Experience with Kubernetes in production environments

● Good understanding of Linux systems and shell scripting

● Experience with monitoring tools such as Prometheus, Grafana, ELK, or Datadog

● Strong troubleshooting and debugging skills

● Understanding of networking concepts and security best practices


Why Join Us?

● Opportunity to work on a cutting-edge healthcare product

● A collaborative and learning-driven environment

● Exposure to AI and software engineering innovations

● Excellent work ethic and culture


If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

Service Co

Agency job
via Vikash Technologies by Rishika Teja
Pune
4 - 5 yrs
₹10L - ₹15L / yr
Amazon Web Services (AWS)
Terraform
IaC
Bash
Python
+3 more

  • Strong hands-on experience with AWS services.
  • Expertise in Terraform and IaC principles.
  • Experience building CI/CD pipelines and working with Git.
  • Proficiency with Docker and Kubernetes.
  • Solid understanding of Linux administration, networking fundamentals, and IAM.
  • Familiarity with monitoring and observability tools (CloudWatch, Prometheus, Grafana, ELK, Datadog).
  • Knowledge of security and compliance tools (Trivy, SonarQube, Checkov, Snyk).
  • Scripting experience in Bash, Python, or PowerShell.
  • Exposure to GCP, Azure, or multi-cloud architectures is a plus.

One2n
Posted by Krunali Lole
Remote, Pune
9 - 12 yrs
₹30L - ₹45L / yr
SRE
Monitoring
DevOps
Terraform
OpenTelemetry
+7 more


About the role:

We are looking for a Staff Site Reliability Engineer who can operate at a staff level across multiple teams and clients. If you care about designing reliable platforms, influencing system architecture, and raising reliability standards across teams, you’ll enjoy working at One2N.

At One2N, you will work with our startups and enterprise clients, solving One-to-N scale problems where the proof of concept is already established and the focus is on scalability, maintainability, and long-term reliability. In this role, you will drive reliability, observability, and infrastructure architecture across systems, influencing design decisions, defining best practices, and guiding teams to build resilient, production-grade systems.


Key responsibilities:

  • Own and drive reliability and infrastructure strategy across multiple products or client engagements
  • Design and evolve platform engineering and self-serve infrastructure patterns used by product engineering teams
  • Lead architecture discussions around observability, scalability, availability, and cost efficiency.
  • Define and standardize monitoring, alerting, SLOs/SLIs, and incident management practices.
  • Build and review production-grade CI/CD and IaC systems used across teams
  • Act as an escalation point for complex production issues and incident retrospectives.
  • Partner closely with engineering leads, product teams, and clients to influence system design decisions early.
  • Mentor junior engineers through design reviews, technical guidance, and best practices.
  • Improve Developer Experience (DX) by reducing cognitive load, toil, and operational friction.
  • Help teams mature their on-call processes, reliability culture, and operational ownership.
  • Stay ahead of trends in cloud-native infrastructure, observability, and platform engineering, and bring relevant ideas into practice
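Standardizing SLOs/SLIs, as listed above, typically starts with error budgets and burn rates. A minimal sketch of the arithmetic, assuming an availability-style SLO (the 99.9% target below is just an example):

```python
def error_budget(slo_target: float) -> float:
    """Allowed failure fraction, e.g. 0.001 for a 99.9% availability SLO."""
    return 1.0 - slo_target


def burn_rate(observed_error_rate: float, slo_target: float) -> float:
    """How fast the error budget is being consumed: 1.0 means exactly on budget,
    2.0 means the budget will be exhausted in half the SLO window."""
    return observed_error_rate / error_budget(slo_target)
```

Multi-window burn-rate alerts (e.g. fast-burn over 1 hour plus slow-burn over 6 hours) are usually layered on top of this basic ratio.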


About you:

  • 9+ years of experience in SRE, DevOps, or software engineering roles
  • Strong experience designing and operating Kubernetes-based systems on AWS at scale
  • Deep hands-on expertise in observability and telemetry, including tools like OpenTelemetry, Datadog, Grafana, Prometheus, ELK, Honeycomb, or similar.
  • Proven experience with infrastructure as code (Terraform, Pulumi) and cloud architecture design.
  • Strong understanding of distributed systems, microservices, and containerized workloads.
  • Ability to write and review production-quality code (Golang, Python, Java, or similar)
  • Solid Linux fundamentals and experience debugging complex system-level issues
  • Experience driving cross-team technical initiatives.
  • Excellent analytical and problem-solving skills, keen attention to detail, and a passion for continuous improvement.
  • Strong written, communication, and collaboration skills, with the ability to work effectively in a fast-paced, agile environment.


Nice to have:

  • Experience working in consulting or multi-client environments.
  • Exposure to cost optimization, or large-scale AWS account management
  • Experience building internal platforms or shared infrastructure used by multiple teams.
  • Prior experience influencing or defining engineering standards across organizations.


Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
7 - 10 yrs
₹21L - ₹30L / yr
Perforce
DevOps
Git
GitHub
Python
+7 more

JOB DETAILS:

* Job Title: Specialist I - DevOps Engineering

* Industry: Global Digital Transformation Solutions Provider

* Salary: Best in Industry

* Experience: 7-10 years

* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram

 

Job Description

Job Summary:

As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.

The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.

 

Key Responsibilities:

  • Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
  • Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
  • Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
  • Define migration scope — determine how much history to migrate and plan the repository structure.
  • Manage branch renaming and repository organization for optimized post-migration workflows.
  • Collaborate with development teams to determine migration points and finalize migration strategies.
  • Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
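Finding files over GitHub's 100 MB limit ahead of the migration can be automated. A minimal Python sketch (function names are hypothetical; a real run would follow up by executing the emitted `git lfs track` commands and committing the updated `.gitattributes`):

```python
import os
from typing import List

GITHUB_FILE_LIMIT = 100 * 1024 * 1024  # GitHub rejects pushes of files over 100 MB


def find_oversized_files(root: str, limit: int = GITHUB_FILE_LIMIT) -> List[str]:
    """Return repo-relative paths of files that must move to Git LFS."""
    oversized = []
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d != ".git"]  # skip Git metadata
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) > limit:
                oversized.append(os.path.relpath(path, root))
    return oversized


def lfs_track_commands(paths: List[str]) -> List[str]:
    """Shell commands to run before `git add`, one per oversized file."""
    return [f'git lfs track "{p}"' for p in paths]
```

Running this against the converted working tree before the first push avoids the most common migration failure mode (a rejected push late in the process).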

 

Required Qualifications:

  • Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
  • Hands-on experience with P4-Fusion.
  • Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
  • Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
  • Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
  • Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
  • Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
  • Familiarity with CI/CD pipeline integration to validate workflows post-migration.
  • Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
  • Excellent communication and collaboration skills for cross-team coordination and migration planning.
  • Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.

 

Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps Tools

 

Must-Haves

Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)

Virtana
Posted by Krutika Devadiga
Pune
5 - 10 yrs
Best in industry
Python
Kubernetes
Docker
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+3 more

Role Overview:

Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.


We are seeking an individual with knowledge in Systems Management and/or Systems Monitoring Software and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.


Work Location: Pune/ Chennai


Job Type: Hybrid


Role Responsibilities:

  • The engineer will be primarily responsible for design and development of software solutions for the Virtana Platform
  • Partner and work closely with team leads, architects and engineering managers to design and implement new integrations and solutions for the Virtana Platform.
  • Communicate effectively with people having differing levels of technical knowledge.
  • Work closely with Quality Assurance and DevOps teams assisting with functional and system testing design and deployment
  • Provide customers with complex application support, problem diagnosis and problem resolution

 

Required Qualifications:

  • Minimum of 4 years of experience in a web-application-centric client-server application development environment focused on Systems Management, Systems Monitoring, and Performance Management Software.
  • Able to understand and comprehend integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
  • Minimum of 4 years of development experience with at least one high-level language such as Python, Java, or Go is required.
  • Bachelor's (B.E, B.Tech) or Master's degree (M.E, M.Tech. MCA) in computer science, Computer Engineering or equivalent
  • 2 years of development experience in a public cloud environment using Kubernetes (Google Cloud and/or AWS)

 

Desired Qualifications:

  • Prior experience with other virtualization platforms like OpenShift is a plus
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
  • Demonstrated ability as a strong technical engineer who can design and code with strong communication skills
  • Firsthand development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
  • Ability to use a variety of debugging tools, simulators and test harnesses is a plus

 

About Virtana:

Virtana delivers the industry's broadest and deepest Observability Platform that allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana's software solutions for over a decade.

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.

Deqode
Posted by Samiksha Agrawal
Pune
7 - 10 yrs
₹7L - ₹18L / yr
SRE
DevOps
Terraform
Kubernetes
Docker

Role: DevOps Engineer

Experience: 7+ Years

Location: Pune / Trivandrum

Work Mode: Hybrid

Key Responsibilities:

  • Drive CI/CD pipelines for microservices and cloud architectures
  • Design and operate cloud-native platforms (AWS/Azure)
  • Manage Kubernetes/OpenShift clusters and containerized applications
  • Develop automated pipelines and infrastructure scripts
  • Collaborate with cross-functional teams on DevOps best practices
  • Mentor development teams on continuous delivery and reliability
  • Handle incident management, troubleshooting, and root cause analysis

Mandatory Skills:

  • 7+ years in DevOps/SRE roles
  • Strong experience with AWS or Azure
  • Hands-on with Docker, Kubernetes, and/or OpenShift
  • Proficiency in Jenkins, Git, Maven, JIRA
  • Strong scripting skills (Shell, Python, Perl, Ruby, JavaScript)
  • Solid networking knowledge and troubleshooting skills
  • Excellent communication and collaboration abilities

Preferred Skills:

  • Experience with Helm, monitoring tools (Splunk, Grafana, New Relic, Datadog)
  • Knowledge of Microservices and SOA architectures
  • Familiarity with database technologies


Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Chennai, Kochi (Cochin), Pune, Trivandrum, Thiruvananthapuram
5 - 7 yrs
₹10L - ₹25L / yr
Google Cloud Platform (GCP)
Jenkins
CI/CD
Docker
Kubernetes
+15 more

Job Description

We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.


Key Responsibilities:

  • Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
  • Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
  • Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
  • Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
  • Ensure compliance with security best practices and organizational policies across GCP environments.
  • Document processes, configurations, and architectural decisions to maintain operational transparency.
  • Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.


Mandatory Skills:

  • Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
  • Jenkins – Expertise in Declarative Pipeline creation and optimization.
  • CI/CD – Strong understanding of automated build, test, and deployment workflows.
  • Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
  • Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
  • Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks.
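Automation scripts of the kind described above frequently wrap flaky deployment steps in retries with exponential backoff. A minimal Python sketch (the deploy step itself is assumed, e.g. a `gcloud` invocation; names are illustrative):

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def run_with_retries(step: Callable[[], T],
                     attempts: int = 3,
                     base_delay: float = 1.0,
                     sleep: Callable[[float], None] = time.sleep) -> T:
    """Retry a flaky step with exponential backoff: 1s, 2s, 4s, ...
    Re-raises the last exception once all attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return step()
        except Exception:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))
    raise RuntimeError("unreachable")
```

Injecting `sleep` as a parameter keeps the wrapper testable without real delays, which matters when these scripts run inside CI.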


Preferred Skills:

  • Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
  • Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
  • Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
  • GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.


Skills

GCP, Jenkins, CI/CD, AWS



 

******

Notice period - 0 to 15 days only

Location – Pune, Trivandrum, Kochi, Chennai

NeoGenCode Technologies Pvt Ltd
Posted by Shivank Bhardwaj
Pune
6 - 8 yrs
₹12L - ₹22L / yr
NodeJS (Node.js)
React.js
JavaScript
Go Programming (Golang)
Elixir
+10 more


Job Description – Full Stack Developer (React + Node.js)

Experience: 5–8 Years

Location: Pune

Work Mode: WFO

Employment Type: Full-time


About the Role

We are looking for an experienced Full Stack Developer with strong hands-on expertise in React and Node.js to join our engineering team. The ideal candidate should have solid experience building scalable applications, working with production systems, and collaborating in high-performance tech environments.


Key Responsibilities

  • Design, develop, and maintain scalable full-stack applications using React and Node.js.
  • Collaborate with cross-functional teams to define, design, and deliver new features.
  • Write clean, maintainable, and efficient code following OOP/FP and SOLID principles.
  • Work with relational databases such as PostgreSQL or MySQL.
  • Deploy and manage applications in cloud environments (preferably GCP or AWS).
  • Optimize application performance, troubleshoot issues, and ensure high availability in production systems.
  • Utilize containerization tools like Docker for efficient development and deployment workflows.
  • Integrate third-party services and APIs, including AI APIs and tools.
  • Contribute to improving development processes, documentation, and best practices.


Required Skills

  • Strong experience with React.js (frontend).
  • Solid hands-on experience with Node.js (backend).
  • Good understanding of relational databases: PostgreSQL / MySQL.
  • Experience working in production environments and debugging live systems.
  • Strong understanding of OOP or Functional Programming, and clean coding standards.
  • Knowledge of Docker or other containerization tools.
  • Experience with cloud platforms (GCP or AWS).
  • Excellent written and verbal communication skills.


Good to Have

  • Experience with Golang or Elixir.
  • Familiarity with Kubernetes, RabbitMQ, Redis, etc.
  • Contributions to open-source projects.
  • Previous experience working with AI APIs or machine learning tools.


Virtana
Posted by Krutika Devadiga
Pune
4 - 10 yrs
Best in industry
Java
Kubernetes
Go Programming (Golang)
Python
Apache Kafka
+13 more

Senior Software Engineer 

Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.  

We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products. 


Work Location: Pune/ Chennai


Job Type: Hybrid

 

Role Responsibilities: 

  • The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform 
  • Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform. 
  • Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.  
  • Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation 
  • Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution 
  • Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery 

 

Required Qualifications:    

  • Minimum of 7 years of progressive experience with back-end development in a client-server application development environment focused on Systems Management, Systems Monitoring, and Performance Management Software.
  • Deep experience in public cloud environment using Kubernetes and other distributed managed services like Kafka etc (Google Cloud and/or AWS) 
  • Experience with CI/CD and cloud-based software development and delivery 
  • Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM. 
  • Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one of them is required.
  • Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent 
  • Highly effective verbal and written communication skills and ability to lead and participate in multiple projects 
  • Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities 
  • Must be results-focused, team-oriented and with a strong work ethic 

 

Desired Qualifications: 

  • Prior experience with other virtualization platforms like OpenShift is a plus 
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus 
  • Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills 
  • Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus 

  

About Virtana: Virtana delivers the industry's broadest and deepest Observability Platform that allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.

  

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade. 

  

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success. 

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune, Hyderabad
6 - 10 yrs
₹24L - ₹40L / yr
Machine Learning (ML)
Amazon Web Services (AWS)
MLOps
ECS
Kubernetes
+16 more

Core Responsibilities:

  • The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency.
  • Model Development: Build algorithms and architectures ranging from traditional statistical methods to deep learning, including the use of LLMs in modern frameworks.
  • Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
  • Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
  • System Integration: Integrate models into existing systems and workflows.
  • Model Deployment: Deploy models to production environments and monitor performance.
  • Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
  • Continuous Improvement: Identify areas for improvement in model performance and systems.
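The data-preparation responsibility above can be sketched as a minimal, framework-free example. The record fields (`amount`, `label`) and the cleaning rules (drop unlabeled rows, impute missing numerics with the median) are hypothetical illustrations, not requirements from this posting:

```python
from statistics import median

def clean_records(records):
    """Cleanse raw records for model training: drop rows without a label,
    coerce the numeric feature to float, and impute missing values with
    the median of the known values."""
    rows = []
    for r in records:
        if r.get("label") is None:
            continue  # unlabeled rows cannot be used for supervised training
        try:
            amount = float(r["amount"]) if r.get("amount") is not None else None
        except (TypeError, ValueError):
            amount = None  # treat unparseable values as missing
        rows.append({"amount": amount, "label": int(r["label"])})
    known = [row["amount"] for row in rows if row["amount"] is not None]
    fill = median(known) if known else 0.0
    for row in rows:
        if row["amount"] is None:
            row["amount"] = fill
    return rows

raw = [
    {"amount": "10.0", "label": 1},
    {"amount": "oops", "label": 0},   # unparseable -> imputed
    {"amount": None, "label": None},  # unlabeled -> dropped
    {"amount": "30.0", "label": 1},
]
print(clean_records(raw))
```

In a production pipeline the same logic would typically live in a pandas or Spark transform, but the decisions (what counts as missing, how to impute) are identical.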

 

Skills:

  • Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
  • Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka and ChaosSearch logs for troubleshooting; other technology touch points include ScyllaDB (similar to Bigtable), OpenSearch, and the Neo4j graph database.
  • Model Deployment and Monitoring: MLOps experience deploying ML models to production environments.
  • Knowledge of model monitoring and performance evaluation.

 

Required experience:

  • Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline, with the ability to analyze gaps and recommend/implement improvements
  • AWS Cloud Infrastructure: Familiarity with S3, EC2, and Lambda, and with using these services in ML workflows
  • AWS data: Redshift, Glue
  • Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)

 

Skills: AWS, AWS Cloud, Amazon Redshift, EKS

 

Must-Haves

Amazon SageMaker, AWS Cloud Infrastructure (S3, EC2, Lambda), Docker and Kubernetes (EKS, ECS), SQL, AWS data (Redshift, Glue)

Skills: Machine Learning, MLOps, AWS Cloud, Redshift OR Glue, Kubernetes, SageMaker


******

Notice period - 0 to 15 days only 

Location : Pune & Hyderabad only

Virtana

at Virtana

2 candid answers
1 product
Krutika Devadiga
Posted by Krutika Devadiga
Pune
8 - 13 yrs
Best in industry
Java
Kubernetes
Amazon Web Services (AWS)
Spring Boot
Go Programming (Golang)
+13 more

Company Overview:

Virtana delivers the industry’s only unified platform for Hybrid Cloud Performance, Capacity and Cost Management. Our platform provides unparalleled, real-time visibility into the performance, utilization, and cost of infrastructure across the hybrid cloud – empowering customers to manage their mission critical applications across physical, virtual, and cloud computing environments. Our SaaS platform allows organizations to easily manage and optimize their spend in the public cloud, assure resources are performing properly through real-time monitoring, and provide the unique ability to plan migrations across the hybrid cloud. 

As we continue to expand our portfolio, we are seeking a highly skilled and hands-on Staff Software Engineer in backend technologies to contribute to the futuristic development of our sophisticated monitoring products.

 

Position Overview:

As a Staff Software Engineer specializing in backend technologies for Storage and Network monitoring in AI-enabled data centers as well as the cloud, you will play a critical role in designing, developing, and delivering high-quality features within aggressive timelines. Your expertise in microservices-based streaming architectures and strong hands-on development skills are essential for solving complex problems related to large-scale data processing. Proficiency in backend technologies such as Java and Python is crucial.



Work Location: Pune


Job Type: Hybrid

 

Key Responsibilities:

  • Hands-on Development: Actively participate in the design, development, and delivery of high-quality features, demonstrating strong hands-on expertise in backend technologies like Java, Python, Go or related languages.
  • Microservices and Streaming Architectures: Design and implement microservices-based streaming architectures to efficiently process and analyze large volumes of data, ensuring real-time insights and optimal performance.
  • Agile Development: Collaborate within an agile development environment to deliver features on aggressive schedules, maintaining a high standard of quality in code, design, and architecture.
  • Feature Ownership: Take ownership of features from inception to deployment, ensuring they meet product requirements and align with the overall product vision.
  • Problem Solving and Optimization: Tackle complex technical challenges related to data processing, storage, and real-time monitoring, and optimize backend systems for high throughput and low latency.
  • Code Reviews and Best Practices: Conduct code reviews, provide constructive feedback, and promote best practices to maintain a high-quality and maintainable codebase.
  • Collaboration and Communication: Work closely with cross-functional teams, including UI/UX designers, product managers, and QA engineers, to ensure smooth integration and alignment with product goals.
  • Documentation: Create and maintain technical documentation, including system architecture, design decisions, and API documentation, to facilitate knowledge sharing and onboarding.
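Much of the streaming-architecture work described above reduces to windowed aggregation over an unbounded event stream. The sketch below shows a tumbling (fixed, non-overlapping) window in plain Python; the event shape (`(timestamp, value)` tuples) and the 60-second window are illustrative assumptions, not details from this posting:

```python
from collections import defaultdict

def tumbling_window_avg(events, window_secs=60):
    """Group (timestamp, metric_value) events into fixed, non-overlapping
    windows and report the per-window average -- the core operation behind
    many real-time monitoring aggregations."""
    buckets = defaultdict(list)
    for ts, value in events:
        # Align each event to the start of its window.
        buckets[ts - (ts % window_secs)].append(value)
    return {start: sum(vals) / len(vals) for start, vals in sorted(buckets.items())}

events = [(0, 10.0), (30, 20.0), (65, 40.0), (119, 60.0), (120, 5.0)]
print(tumbling_window_avg(events))  # windows starting at 0, 60, and 120
```

In a real deployment this logic would run inside a stream processor (Kafka Streams, Flink, or similar) with watermarking for late events, but the windowing arithmetic is the same.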


Qualifications:

  • Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
  • 8+ years of hands-on experience in backend development, demonstrating expertise in Java, Python or related technologies.
  • Strong domain knowledge in Storage and Networking, with exposure to monitoring technologies and practices.
  • Experience in handling large data lakes with purpose-built data stores (vector databases, NoSQL, graph, time-series).
  • Practical knowledge of OO design patterns and frameworks like Spring and Hibernate.
  • Extensive experience with cloud platforms such as AWS, Azure, or GCP, and development expertise with Kubernetes, Docker, etc.
  • Solid experience designing and delivering features with high quality on aggressive schedules.
  • Proven experience in microservices-based streaming architectures, particularly in handling large amounts of data for storage and networking monitoring.
  • Familiarity with performance optimization techniques and principles for backend systems.
  • Excellent problem-solving and critical-thinking abilities.
  • Outstanding communication and collaboration skills.


Why Join Us:

  • Opportunity to be a key contributor in the development of a leading performance monitoring company specializing in AI-powered Storage and Network monitoring.
  • Collaborative and innovative work environment.
  • Competitive salary and benefits package.
  • Professional growth and development opportunities.
  • Chance to work on cutting-edge technology and products that make a real impact.


If you are a hands-on technologist with a proven track record of designing and delivering high-quality features on aggressive schedules and possess strong expertise in microservices-based streaming architectures, we invite you to apply and help us redefine the future of performance monitoring.

Hashone Careers
Bengaluru (Bangalore), Pune, Hyderabad
5 - 10 yrs
₹12L - ₹25L / yr
DevOps
Python
CI/CD
Kubernetes
Docker
+1 more

Job Description

Experience: 5 - 9 years

Location: Bangalore/Pune/Hyderabad

Work Mode: Hybrid (3 days WFO)


Senior Cloud Infrastructure Engineer for Data Platform 


The ideal candidate will play a critical role in designing, implementing, and maintaining cloud infrastructure and CI/CD pipelines to support scalable, secure, and efficient data and analytics solutions. This role requires a strong understanding of cloud-native technologies, DevOps best practices, and hands-on experience with Azure and Databricks.


Key Responsibilities:


Cloud Infrastructure Design & Management

Architect, deploy, and manage scalable and secure cloud infrastructure on Microsoft Azure.

Implement best practices for Azure Resource Management, including resource groups, virtual networks, and storage accounts.

Optimize cloud costs and ensure high availability and disaster recovery for critical systems.


Databricks Platform Management

Set up, configure, and maintain Databricks workspaces for data engineering, machine learning, and analytics workloads.

Automate cluster management, job scheduling, and monitoring within Databricks.

Collaborate with data teams to optimize Databricks performance and ensure seamless integration with Azure services.


CI/CD Pipeline Development

Design and implement CI/CD pipelines for deploying infrastructure, applications, and data workflows using tools like Azure DevOps, GitHub Actions, or similar.

Automate testing, deployment, and monitoring processes to ensure rapid and reliable delivery of updates.


Monitoring & Incident Management

Implement monitoring and alerting solutions using tools like Dynatrace, Azure Monitor, Log Analytics, and Databricks metrics.

Troubleshoot and resolve infrastructure and application issues, ensuring minimal downtime.


Security & Compliance

Enforce security best practices, including identity and access management (IAM), encryption, and network security.

Ensure compliance with organizational and regulatory standards for data protection and cloud operations.


Collaboration & Documentation

Work closely with cross-functional teams, including data engineers, software developers, and business stakeholders, to align infrastructure with business needs.

Maintain comprehensive documentation for infrastructure, processes, and configurations.


Required Qualifications

Education: Bachelor’s degree in Computer Science, Engineering, or a related field.


Must Have Experience:

6+ years of experience in DevOps or Cloud Engineering roles.

Proven expertise in Microsoft Azure services, including Azure Data Lake, Azure Databricks, Azure Data Factory (ADF), Azure Functions, Azure Kubernetes Service (AKS), and Azure Active Directory.

Hands-on experience with Databricks for data engineering and analytics.


Technical Skills:

Proficiency in Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Bicep.

Strong scripting skills in Python or Bash.

Experience with containerization and orchestration tools like Docker and Kubernetes.

Familiarity with version control systems (e.g., Git) and CI/CD tools (e.g., Azure DevOps, GitHub Actions).


Soft Skills:

Strong problem-solving and analytical skills.

Excellent communication and collaboration abilities.

Global Digital Transformation Solutions Provider


Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 12 yrs
₹25L - ₹30L / yr
Machine Learning (ML)
AWS CloudFormation
Online machine learning
Amazon Web Services (AWS)
ECS
+20 more

MUST-HAVES: 

  • Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
  • Notice period - 0 to 15 days only 
  • Hybrid work mode- 3 days office, 2 days at home


SKILLS: AWS, AWS CLOUD, AMAZON REDSHIFT, EKS


ADDITIONAL GUIDELINES:

  • Interview process: - 2 Technical round + 1 Client round
  • 3 days in office, Hybrid model. 


CORE RESPONSIBILITIES:

  • The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency.
  • Model Development: Build algorithms and architectures ranging from traditional statistical methods to deep learning, including the use of LLMs in modern frameworks.
  • Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
  • Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
  • System Integration: Integrate models into existing systems and workflows.
  • Model Deployment: Deploy models to production environments and monitor performance.
  • Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
  • Continuous Improvement: Identify areas for improvement in model performance and systems.


SKILLS:

  • Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
  • Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka and ChaosSearch logs for troubleshooting; other technology touch points include ScyllaDB (similar to Bigtable), OpenSearch, and the Neo4j graph database.
  • Model Deployment and Monitoring: MLOps experience deploying ML models to production environments.
  • Knowledge of model monitoring and performance evaluation.
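The model-monitoring skill listed above is often implemented as a distribution-drift check such as the Population Stability Index (PSI). Below is a small self-contained sketch; the two score bins and the 0.2 alert threshold are conventional illustrative choices, not requirements from this posting:

```python
import math

def psi(expected, actual, bins=((0, 0.5), (0.5, 1.0))):
    """Population Stability Index between a training-time (expected) and a
    live (actual) score distribution; larger values indicate more drift."""
    def frac(values, lo, hi):
        n = sum(1 for v in values if lo <= v < hi)
        return max(n / len(values), 1e-6)  # floor avoids log(0) for empty bins
    total = 0.0
    for lo, hi in bins:
        e, a = frac(expected, lo, hi), frac(actual, lo, hi)
        total += (a - e) * math.log(a / e)
    return total

train_scores = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]  # 50/50 across the two bins
live_scores = [0.1, 0.1, 0.2, 0.2, 0.3, 0.9]   # shifted toward low scores
drift = psi(train_scores, live_scores)
print(round(drift, 3))  # > 0.2 would commonly trigger a retraining alert
```

In production, the same statistic would be computed per feature and per score band on a schedule, with results pushed to the monitoring stack rather than printed.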


REQUIRED EXPERIENCE:

  • Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline, with the ability to analyze gaps and recommend/implement improvements
  • AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
  • AWS data: Redshift, Glue
  • Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
Wissen Technology

at Wissen Technology

4 recruiters
Praffull Shinde
Posted by Praffull Shinde
Pune, Mumbai, Bengaluru (Bangalore)
8 - 14 yrs
Best in industry
Google Cloud Platform (GCP)
Terraform
Kubernetes
DevOps
Python

JD for Cloud engineer

 

Job Summary:


We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancer, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.


You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.

 

Key Responsibilities:

1. Cloud Infrastructure Design & Management

  • Architect, deploy, and maintain GCP cloud resources via Terraform or other automation.
  • Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
  • Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
  • Optimize resource allocation, monitoring, and cost efficiency across GCP environments.


2. Kubernetes & Container Orchestration

  • Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
  • Work with Helm charts for microservices deployments.
  • Automate scaling, rolling updates, and zero-downtime deployments.

 

3. Serverless & Compute Services

  • Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
  • Optimize containerized applications running on Cloud Run for cost efficiency and performance.

 

4. CI/CD & DevOps Automation

  • Design, implement, and manage CI/CD pipelines using Azure DevOps.
  • Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.
  • Integrate security and compliance checks into the DevOps workflow (DevSecOps).

 

 

Required Skills & Qualifications:

Experience: 8+ years in Cloud Engineering, with a focus on GCP.

Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).

Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.

DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.

Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.

Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.

Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.

Wissen Technology

at Wissen Technology

4 recruiters
Janane Mohanasankaran
Posted by Janane Mohanasankaran
Bengaluru (Bangalore), Mumbai, Pune
7 - 12 yrs
₹1L - ₹45L / yr
Google Cloud Platform (GCP)
Kubernetes
Docker
Google Kubernetes Engine
Azure DevOps
+2 more

Required Skills & Qualifications:

✔ Experience: 4+ years in Cloud Engineering, with a focus on GCP.

✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).

✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.

✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.

✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.

✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.

 Global Digital Transformation Solutions Provider


Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 12 yrs
₹10L - ₹30L / yr
Amazon Web Services (AWS)
AWS CloudFormation
Amazon Redshift
Elasticsearch
ECS
+11 more

Job Details

Job Title: ML Engineer II - AWS, AWS Cloud

Industry: Technology

Domain - Information technology (IT)

Experience Required: 6-12 years

Employment Type: Full Time

Job Location: Pune

CTC Range: Best in Industry


Job Description:

Core Responsibilities:

  • The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency.
  • Model Development: Build algorithms and architectures ranging from traditional statistical methods to deep learning, including the use of LLMs in modern frameworks.
  • Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
  • Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
  • System Integration: Integrate models into existing systems and workflows.
  • Model Deployment: Deploy models to production environments and monitor performance.
  • Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
  • Continuous Improvement: Identify areas for improvement in model performance and systems.


Skills:

  • Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
  • Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka and ChaosSearch logs for troubleshooting; other technology touch points include ScyllaDB (similar to Bigtable), OpenSearch, and the Neo4j graph database.
  • Model Deployment and Monitoring: MLOps experience deploying ML models to production environments.
  • Knowledge of model monitoring and performance evaluation.


Required experience:

  • Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline, with the ability to analyze gaps and recommend/implement improvements
  • AWS Cloud Infrastructure: Familiarity with S3, EC2, and Lambda, and with using these services in ML workflows
  • AWS data: Redshift, Glue
  • Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)


Skills: AWS, AWS Cloud, Amazon Redshift, EKS


Must-Haves

AWS, AWS Cloud, Amazon Redshift, EKS

NP: Immediate – 30 Days

 

NeoGenCode Technologies Pvt Ltd
Ritika Verma
Posted by Ritika Verma
Pune
9 - 12 yrs
₹25L - ₹30L / yr
Java
Spring Boot
Hibernate (Java)
JPA
Docker
+4 more

Position: Tech Lead

Experience: 8-10 years

Job Location: Pune 


We are seeking a highly skilled Tech Lead with strong expertise in Java, microservices architecture, and cloud-native application development. The ideal candidate will bring hands-on leadership experience in designing scalable solutions, guiding development teams, and collaborating with DevOps engineers on OpenShift (OCP) platforms. This role requires a blend of technical leadership, solution design, and delivery ownership.


Key Responsibilities


Lead the design and development of Java / Spring Boot based microservices in a cloud-native environment.

Provide technical leadership to a team of developers, ensuring adherence to coding, security, and architectural best practices.

Collaborate with architects and DevOps engineers to deploy and manage microservices on Red Hat OpenShift (OCP).

Oversee end-to-end delivery including requirement analysis, design, development, code review, testing, and deployment.

Define and implement API specifications, integration patterns, and microservices orchestration.

Work closely with DevOps teams to integrate CI/CD pipelines, containerized deployments, Helm, and GitOps workflows.

Ensure application performance, scalability, and reliability with proactive observability practices (Grafana, Prometheus, etc.).



Required Skills & Qualifications

8-10 years of proven experience in Java application development, with at least 4 years in microservices architecture.

Strong expertise in Spring Boot, REST APIs, JPA/Hibernate, and messaging frameworks (Kafka, RabbitMQ, etc.).

Hands-on experience with containerization (Docker) and orchestration (OpenShift/Kubernetes).

Familiarity with OCP DevOps practices including CI/CD (ArgoCD, Tekton, Jenkins), Helm, and YAML deployments.

Good understanding of observability stacks (Grafana, Prometheus, Loki, Alertmanager) and logging practices.

Solid knowledge of cloud-native design principles, scalability, and fault tolerance.

Exposure to security best practices (OAuth, RBAC, secrets management via Vault or similar). 


GLOBAL DIGITAL TRANSFORMATION SOLUTIONS PROVIDER


Agency job
via Peak Hire Solutions by Dhara Thakkar
Thiruvananthapuram, Trivandrum, Bengaluru (Bangalore), Mumbai, Navi Mumbai, Ahmedabad, Chennai, Coimbatore, Gurugram, Hyderabad, Kochi (Cochin), Kolkata, Calcutta, Noida, Pune
8 - 12 yrs
₹20L - ₹40L / yr
.NET
Agile/Scrum
Vue.js
Software Development
API
+21 more

Job Position: Lead II - Software Engineering

Domain: Information technology (IT)

Location: India - Thiruvananthapuram

Salary: Best in Industry

Job Positions: 1

Experience: 8 - 12 Years

Skills: .NET, SQL Azure, REST API, Vue.js

Notice Period: Immediate – 30 Days


Job Summary:

We are looking for a highly skilled Senior .NET Developer with a minimum of 7 years of experience across the full software development lifecycle, including post-live support. The ideal candidate will have a strong background in .NET backend API development, Agile methodologies, and Cloud infrastructure (preferably Azure). You will play a key role in solution design, development, DevOps pipeline enhancement, and mentoring junior engineers.


Key Responsibilities:

  • Design, develop, and maintain scalable and secure .NET backend APIs.
  • Collaborate with product owners and stakeholders to understand requirements and translate them into technical solutions.
  • Lead and contribute to Agile software delivery processes (Scrum, Kanban).
  • Develop and improve CI/CD pipelines and support release cadence targets, using Infrastructure as Code tools (e.g., Terraform).
  • Provide post-live support, troubleshooting, and issue resolution as part of full lifecycle responsibilities.
  • Implement unit and integration testing to ensure code quality and system stability.
  • Work closely with DevOps and cloud engineering teams to manage deployments on Azure (Web Apps, Container Apps, Functions, SQL).
  • Contribute to front-end components when necessary, leveraging HTML, CSS, and JavaScript UI frameworks.
  • Mentor and coach engineers within a co-located or distributed team environment.
  • Maintain best practices in code versioning, testing, and documentation.


Mandatory Skills:

  • 7+ years of .NET development experience, including API design and development
  • Strong experience with Azure Cloud services, including Web/Container Apps, Azure Functions, and Azure SQL Server
  • Solid understanding of Agile development methodologies (Scrum/Kanban)
  • Experience in CI/CD pipeline design and implementation
  • Proficient in Infrastructure as Code (IaC) – preferably Terraform
  • Strong knowledge of RESTful services and JSON-based APIs
  • Experience with unit and integration testing techniques
  • Source control using Git
  • Strong understanding of HTML, CSS, and cross-browser compatibility


Good-to-Have Skills:

  • Experience with Kubernetes and Docker
  • Knowledge of JavaScript UI frameworks, ideally Vue.js
  • Familiarity with JIRA and Agile project tracking tools
  • Exposure to Database as a Service (DBaaS) and Platform as a Service (PaaS) concepts
  • Experience mentoring or coaching junior developers
  • Strong problem-solving and communication skills
Wissen Technology
Pune, Mumbai, Bengaluru (Bangalore)
4 - 10 yrs
Best in industry
Google Cloud Platform (GCP)
Python
Kubernetes
Shell Scripting
SRE Engineer
+1 more

Dear Candidate,


Greetings from Wissen Technology. 

We have an exciting Job opportunity for GCP SRE Engineer Professionals. Please refer to the Job Description below and share your profile if interested.   

 About Wissen Technology:

  • The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
  • Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
  • Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.
  • Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.
  • Globally present with offices US, India, UK, Australia, Mexico, and Canada.
  • We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
  • Wissen Technology has been certified as a Great Place to Work®.
  • Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
  • Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
  • The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.

We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy. They include the likes of Morgan Stanley, MSCI, State Street Corporation, Flipkart, Swiggy, Trafigura, and GE, to name a few.



Job Description: 

Please find below details:


Experience - 4+ Years

Location- Bangalore/Mumbai/Pune


Team Responsibilities

The successful candidate will be part of the S&C – SRE Team. Our team provides tier 2/3 support to the S&C business. This position involves collaboration with client-facing teams such as Client Services, Product, and Research, and with Infrastructure/Technology and Application Development teams, to perform environment and application maintenance and support.

 

Key Responsibilities


• Provide Tier 2/3 product technical support.

• Building software to help operations and support activities.

• Manage system\software configurations and troubleshoot environment issues.

• Identify opportunities for optimizing system performance through changes in configuration or suggestions for development.

• Plan, document and deploy software applications on our Unix/Linux/Azure and GCP based systems.

• Collaborate with development and software testing teams throughout the release process.

• Analyze release and deployment processes to identify key areas for automation and optimization.

• Manage hardware and software resources, and coordinate maintenance and planned downtimes with the infrastructure group across all environments (Production / Non-Production).

• Must spend a minimum of one week per month on call to help with off-hour emergencies and maintenance activities.

 

Required skills and experience

• Bachelor's degree in Computer Science, Engineering, or another similar concentration (BE/MCA)

• Master’s degree a plus

• 6-8 years’ experience in Production Support/ Application Management/ Application Development (support/ maintenance) role.

• Excellent problem-solving/troubleshooting skills, fast learner

• Strong knowledge of Unix Administration.

• Strong scripting skills in Shell, Python, and Batch are a must.

• Strong Database experience – Oracle

• Strong knowledge of Software Development Life Cycle

• PowerShell is nice to have

• Software development skillsets in Java or Ruby.

• Experience with any of the cloud platforms (GCP/Azure/AWS) is nice to have




NeoGenCode Technologies Pvt Ltd
Ritika Verma
Posted by Ritika Verma
Gurugram, Bengaluru (Bangalore), Pune, Mohali, Panchkula, Chennai
6 - 10 yrs
₹10L - ₹25L / yr
Automation
Kubernetes
Helm
Docker
Architecture

Key Responsibilities

Test Architecture & Design

  • Architect test frameworks and infrastructure to validate microservices and distributed systems in multi-cluster, hybrid-cloud environments.
  • Design complex test scenarios simulating production-like workloads, scaling, failure injection, and recovery.
  • Ensure reliability, scalability, and maintainability of test systems.

Automation & Scalability

  • Drive test automation integrated with CI/CD pipelines (e.g., Jenkins, GitHub Actions).
  • Leverage Kubernetes APIs, Helm, and service meshes (Istio/Linkerd) for automation coverage of health, failover, and network resilience.
  • Implement Infrastructure-as-Code (IaC) practices for test infrastructure to ensure repeatability and extensibility.
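The failure-injection and recovery scenarios described above can be prototyped against a stubbed cluster before being wired to real Kubernetes APIs. The sketch below is a minimal example; the `FakeCluster` stub and pod names are hypothetical, not part of any real client library:

```python
class FakeCluster:
    """Tiny in-memory stand-in for a cluster API, used to prototype
    failover test logic before targeting real Kubernetes clients."""
    def __init__(self, replicas):
        self.pods = {f"pod-{i}": "Running" for i in range(replicas)}

    def kill(self, name):
        self.pods[name] = "Failed"

    def reconcile(self):
        # A controller would recreate failed pods; simulate that here.
        self.pods = {n: "Running" for n in self.pods}

    def healthy(self):
        return all(s == "Running" for s in self.pods.values())

def test_recovers_after_pod_failure():
    cluster = FakeCluster(replicas=3)
    cluster.kill("pod-1")         # inject a failure
    assert not cluster.healthy()  # the outage is observable
    cluster.reconcile()           # simulated self-healing
    assert cluster.healthy()      # recovery is verified

test_recovers_after_pod_failure()
print("failover scenario passed")
```

The same test shape then transfers to a real environment by swapping the stub for a Kubernetes client and a chaos-injection tool, which keeps the scenario logic reusable across clusters.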

Technical Expertise

  • Deep knowledge of Kubernetes internals, cluster lifecycle management, Helm, service meshes, and network policies.
  • Strong scripting and automation skills with Python, Pytest, and Bash.
  • Hands-on with observability stacks (Prometheus, Grafana, Jaeger) and performance benchmarking tools (e.g., K6).
  • Experience with cloud platforms (AWS, Azure, GCP) and containerized CI/CD.
  • Solid Linux proficiency: Bash scripting, debugging, networking, PKI management, Docker/containerd, GitOps/Flux, kubectl/Helm, troubleshooting multi-cluster environments.

Required Skills & Qualifications

  • 6+ years in QA, Test Automation, or related engineering roles.
  • Proven experience in architecting test frameworks for distributed/cloud-native systems.
  • Expertise in Kubernetes, Helm, CI/CD, and cloud platforms (AWS/Azure/GCP).
  • Strong Linux fundamentals with scripting and system debugging skills.
  • Excellent problem-solving, troubleshooting, and technical leadership abilities.


Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Mohali, Dehradun, Panchkula, Chennai
6 - 14 yrs
₹12L - ₹28L / yr
Test Automation (QA)
Kubernetes
Helm
Docker
Amazon Web Services (AWS)
+13 more

Job Title : Senior QA Automation Architect (Cloud & Kubernetes)

Experience : 6+ Years

Location : India (Multiple Offices)

Shift Timings : 12 PM to 9 PM (Noon Shift)

Working Days : 5 Days WFO (NO Hybrid)


About the Role :

We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.

You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.


Key Responsibilities :

  • Architect and maintain test automation frameworks for microservices.
  • Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
  • Ensure reliability, scalability, and observability of test systems.
  • Work closely with DevOps and Cloud teams to streamline automation infrastructure.

Mandatory Skills :

  • Kubernetes, Helm, Docker, Linux
  • Cloud Platforms : AWS / Azure / GCP
  • CI/CD Tools : Jenkins, GitHub Actions
  • Scripting : Python, Pytest, Bash
  • Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
  • IaC Practices : Terraform / Ansible

Good to Have :

  • Experience with Service Mesh (Istio/Linkerd).
  • Container Security or DevSecOps exposure.
NeoGenCode Technologies Pvt Ltd
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune
4 - 8 yrs
₹15L - ₹25L / yr
Node.js
React.js
JavaScript
TypeScript
RESTful APIs
+8 more

About the Role

We’re looking for a passionate Fullstack Product Engineer with a strong JavaScript foundation to work on a high-impact, scalable product. You’ll collaborate closely with product and engineering teams to build intuitive UIs and performant backends using modern technologies.


Responsibilities

  • Build and maintain scalable features across the frontend and backend.
  • Work with tech stacks like Node.js, React.js, Vue.js, and others.
  • Contribute to system design, architecture, and code quality enforcement.
  • Follow modern engineering practices including TDD, CI/CD, and live coding evaluations.
  • Collaborate in code reviews, performance optimizations, and product iterations.


Required Skills

  • 4–6 years of hands-on fullstack development experience.
  • Strong command over JavaScript, Node.js, and React.js.
  • Solid understanding of REST APIs and/or GraphQL.
  • Good grasp of OOP principles, TDD, and writing clean, maintainable code.
  • Experience with CI/CD tools like GitHub Actions, GitLab CI, Jenkins, etc.
  • Familiarity with HTML, CSS, and frontend performance optimization.


Good to Have

  • Exposure to Docker, AWS, Kubernetes, or Terraform.
  • Experience in other backend languages or frameworks.
  • Experience with microservices and scalable system architectures.
Mumbai, Pune, Hyderabad, Mohali, Panchkula, Bengaluru (Bangalore)
5 - 8 yrs
₹10L - ₹20L / yr
DevOps
Kubernetes
Terraform
OpenTofu
Helm
+7 more

Job Description

We are seeking a highly skilled DevOps / Kubernetes Engineer. The ideal candidate will have strong expertise in container orchestration, infrastructure as code, and GitOps workflows, with hands-on experience in Azure cloud environments. You will be responsible for designing, deploying, and managing modern cloud-native infrastructure and applications at scale.

Key Responsibilities:

· Manage and operate Kubernetes clusters (AKS / K3s) for large-scale applications.

· Implement infrastructure as code using Terraform or OpenTofu for scalable, reliable, and secure infrastructure provisioning.

· Deploy and manage applications using Helm and ArgoCD with GitOps best practices.

· Work with Podman and Docker as container runtimes for development and production environments.

· Collaborate with cross-functional teams to ensure smooth deployment pipelines and CI/CD integrations.

· Optimize infrastructure for cost, performance, and reliability within Azure cloud.

· Troubleshoot, monitor, and maintain system health, scalability, and performance.

Required Skills & Experience:

· Strong hands-on experience with Kubernetes (AKS / K3s) cluster orchestration.

· Proficiency in Terraform or OpenTofu for infrastructure as code.

· Experience with Helm and ArgoCD for application deployment and GitOps.

· Solid understanding of Docker / Podman container runtimes.

· Cloud expertise in Azure with experience deploying and scaling workloads.

· Familiarity with CI/CD pipelines, monitoring, and logging frameworks.

· Knowledge of best practices around cloud security, scalability, and high availability.

Preferred Qualifications:

· Contributions to open-source projects under Apache 2.0 / MPL 2.0 licenses.

· Experience working in global distributed teams across CST/PST time zones.

· Strong problem-solving skills and ability to work independently in a fast-paced environment.


Deqode


1 recruiter
Apoorva Jain
Posted by Apoorva Jain
Pune
4 - 7 yrs
₹4L - ₹16L / yr
Amazon Web Services (AWS)
DevOps
Docker
Kubernetes
Jenkins
+1 more

Job Summary:

We are seeking a highly skilled and proactive DevOps Engineer with 4+ years of experience to join our dynamic team. This role requires strong technical expertise across cloud infrastructure, CI/CD pipelines, container orchestration, and infrastructure as code (IaC). The ideal candidate should also have direct client-facing experience and a proactive approach to managing both internal and external stakeholders.


Key Responsibilities:

  • Collaborate with cross-functional teams and external clients to understand infrastructure requirements and implement DevOps best practices.
  • Design, build, and maintain scalable cloud infrastructure on AWS (EC2, S3, RDS, ECS, etc.).
  • Develop and manage infrastructure using Terraform or CloudFormation.
  • Manage and orchestrate containers using Docker and Kubernetes (EKS).
  • Implement and maintain CI/CD pipelines using Jenkins or GitHub Actions.
  • Write robust automation scripts using Python and Shell scripting.
  • Monitor system performance and availability, and ensure high uptime and reliability.
  • Execute and optimize SQL queries for MSSQL and PostgreSQL databases.
  • Maintain clear documentation and provide technical support to stakeholders and clients.

Required Skills:

  • Minimum 4+ years of experience in a DevOps or related role.
  • Proven experience in client-facing engagements and communication.
  • Strong knowledge of AWS services – EC2, S3, RDS, ECS, etc.
  • Proficiency in Infrastructure as Code using Terraform or CloudFormation.
  • Hands-on experience with Docker and Kubernetes (EKS).
  • Strong experience in setting up and maintaining CI/CD pipelines with Jenkins or GitHub.
  • Solid understanding of SQL and working experience with MSSQL and PostgreSQL.
  • Proficient in Python and Shell scripting.
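Automation scripts of the kind listed above almost always need a retry policy around flaky infrastructure calls. A stdlib-only sketch of exponential backoff (the helper name, delays, and the mock health check are illustrative, not from any specific codebase):

```python
import time

def retry(operation, attempts=5, base_delay=0.5, exceptions=(Exception,)):
    """Retry `operation` with exponential backoff: 0.5s, 1s, 2s, ...
    Re-raises the last error once attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return operation()
        except exceptions:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Demo: a health check that succeeds on the third call.
calls = {"n": 0}

def flaky_health_check():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service warming up")
    return "healthy"

print(retry(flaky_health_check, base_delay=0.01))  # → healthy
```

The same wrapper works around boto3 calls, kubectl invocations via `subprocess`, or database connections during a deployment window.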

Preferred Qualifications:

AWS Certifications (e.g., AWS Certified DevOps Engineer) are a plus.

Experience working in Agile/Scrum environments.

Strong problem-solving and analytical skills.


Foto Owl AI
Pune
1 - 3 yrs
₹5L - ₹6L / yr
SQL
Python
Docker
RESTful APIs
FastAPI
+2 more

🚀 We’re Hiring: Senior Python Backend Developer 🚀


📍 Location: Baner, Pune (Work from Office)

💰 Compensation: ₹6 LPA

🕑 Experience Required: Minimum 2 years as a Python Backend Developer



About Us

Foto Owl AI is a fast-growing product-based company headquartered in Baner, Pune.


We specialize in:

⚡ Hyper-personalized fan engagement

🤖 AI-powered real-time photo sharing

📸 Advanced media asset management



What You’ll Do


As a Senior Python Backend Developer, you’ll play a key role in designing, building, and deploying scalable backend systems that power our cutting-edge platforms.


Architect and develop complex, secure, and scalable backend services

Build and maintain APIs & data pipelines for web, mobile, and AI-driven platforms

Optimize SQL & NoSQL databases for high performance

Manage AWS infrastructure (EC2, S3, RDS, Lambda, CloudWatch, etc.)

Implement observability, monitoring, and security best practices

Collaborate cross-functionally with product & AI teams

Mentor junior developers and conduct code reviews

Troubleshoot and resolve production issues with efficiency



What We’re Looking For


✅ Strong expertise in Python backend development

✅ Solid knowledge of Data Structures & Algorithms

✅ Hands-on experience with SQL (PostgreSQL/MySQL) and NoSQL (MongoDB, Redis, etc.)

✅ Proficiency in RESTful APIs & Microservice design

✅ Knowledge of Docker, Kubernetes, and cloud-native systems

✅ Experience managing AWS-based deployments
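For the API and database work above, cursor-based pagination is a common pattern for keeping media-listing endpoints fast on large tables, since an id-ordered cursor scales better than OFFSET. A self-contained sketch (the data and field names are hypothetical):

```python
# Illustrative cursor-based pagination for a media-listing endpoint.

def paginate(items, cursor=None, limit=2):
    """Return one page of id-sorted items after `cursor`, plus the
    cursor for the next page (None when the collection is exhausted)."""
    ordered = sorted(items, key=lambda it: it["id"])
    if cursor is not None:
        ordered = [it for it in ordered if it["id"] > cursor]
    page = ordered[:limit]
    next_cursor = page[-1]["id"] if len(ordered) > limit else None
    return page, next_cursor

photos = [{"id": i, "name": f"img_{i}.jpg"} for i in (3, 1, 2, 5, 4)]
page1, cur = paginate(photos, limit=2)              # ids 1, 2
page2, cur = paginate(photos, cursor=cur, limit=2)  # ids 3, 4
```

In SQL the same idea is `WHERE id > :cursor ORDER BY id LIMIT :limit`, which uses the primary-key index instead of scanning skipped rows.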



Why Join Us?


At Foto Owl AI, you’ll be part of a passionate team building world-class media tech products used in sports, events, and fan engagement platforms. If you love scalable backend systems, real-time challenges, and AI-driven products, this is the place for you.

Deqode


1 recruiter
Apoorva Jain
Posted by Apoorva Jain
Pune
4 - 8 yrs
₹4L - ₹14L / yr
Amazon Web Services (AWS)
Kubernetes
Docker
Terraform
Linux/Unix
+3 more

Job Summary:

We are seeking a highly skilled and proactive DevOps Engineer with 4+ years of experience to join our dynamic team. This role requires strong technical expertise across cloud infrastructure, CI/CD pipelines, container orchestration, and infrastructure as code (IaC). The ideal candidate should also have direct client-facing experience and a proactive approach to managing both internal and external stakeholders.


Key Responsibilities:

  • Collaborate with cross-functional teams and external clients to understand infrastructure requirements and implement DevOps best practices.
  • Design, build, and maintain scalable cloud infrastructure on AWS (EC2, S3, RDS, ECS, etc.).
  • Develop and manage infrastructure using Terraform or CloudFormation.
  • Manage and orchestrate containers using Docker and Kubernetes (EKS).
  • Implement and maintain CI/CD pipelines using Jenkins or GitHub Actions.
  • Write robust automation scripts using Python and Shell scripting.
  • Monitor system performance and availability, and ensure high uptime and reliability.
· Execute and optimize SQL queries for MSSQL and PostgreSQL databases.
  • Maintain clear documentation and provide technical support to stakeholders and clients.


Required Skills:

  • Minimum 4+ years of experience in a DevOps or related role.
  • Proven experience in client-facing engagements and communication.
  • Strong knowledge of AWS services – EC2, S3, RDS, ECS, etc.
  • Proficiency in Infrastructure as Code using Terraform or CloudFormation.
  • Hands-on experience with Docker and Kubernetes (EKS).
  • Strong experience in setting up and maintaining CI/CD pipelines with Jenkins or GitHub.
  • Solid understanding of SQL and working experience with MSSQL and PostgreSQL.
  • Proficient in Python and Shell scripting.


Preferred Qualifications:

  • AWS Certifications (e.g., AWS Certified DevOps Engineer) are a plus.
  • Experience working in Agile/Scrum environments.
  • Strong problem-solving and analytical skills.


A domestic client 15 years old, in the logitech industry


Agency job
via Talent Socio Bizcon LLP by Baishali Dhar
Pune, Mumbai
5 - 9 yrs
₹20L - ₹35L / yr
ASP.NET
.NET
Microservices
Docker
Kubernetes

Responsibilities:

  • Work with product owners, managers, and customers to explore requirements and translate use-cases into functional requirements.
  • Collaborate with cross-functional teams and architects to design, develop, test, and deploy web applications using ASP.NET Core, .NET Core, and C#.
  • Build scalable, reliable, clean code and unit tests for .NET applications.
  • Help maintain code quality, organization, and automation by performing code reviews, refactoring, and unit testing.
  • Develop integration with third-party APIs and external applications to deliver robust and scalable applications.
  • Maintain services, enhance, optimize, and upgrade existing systems.
  • Contribute to architectural and design discussions and document design decisions.
  • Effectively participate in planning meetings, retrospectives, daily stand-ups, and other meetings as part of the software development process.
  • Contribute to the continuous improvement of development processes and practices.
  • Resolve production issues, participate in production incident analysis by conducting effective troubleshooting and RCA within the SLA.
  • Work with Operations teams on product deployment, issue resolution, and support.
  • Mentor junior developers and assist in their professional growth. Stay updated with the latest technologies and best practices.


Requirements:

  • 5+ years of experience with proficiency in C# language.
  • Bachelor's or master's degree in computer science or a related field.
  • Good working experience in .NET Framework, .NET Core, ASP.NET Core, and C#.
  • Good understanding of OOP and design patterns - SOLID, integration, REST, microservices, and cloud-native designs.
  • Understanding of fundamental design principles behind building and scaling distributed applications.
  • Knack for writing clean, readable, reusable, and testable C# code.
  • Strong knowledge of data structures and collections in C#.
  • Good knowledge of front-end development languages, including JavaScript, HTML5, and CSS.
  • Experience in designing relational DB schemas and performance tuning PL/SQL queries.
  • Experience working in an Agile environment following Scrum/SAFe methodologies.
  • Knowledge of CI/CD, DevOps, containers, and automation frameworks.
  • Experience in developing and deploying on at least one cloud environment.
  • Excellent problem-solving, communication, and collaboration skills.
  • Ability to work independently and effectively in a fast-paced environment.


Deqode


1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Pune, Bengaluru (Bangalore), Mumbai, Gurugram, Hyderabad, Chennai
8 - 10 yrs
₹10L - ₹30L / yr
React.js
.NET
Windows Azure
DevOps
Kubernetes
+1 more

🚀 Hiring: .NET Full Stack Developer at Deqode

⭐ Experience: 8+ Years

📍 Location: Bangalore | Mumbai | Pune | Gurgaon | Chennai | Hyderabad

⭐ Work Mode:- Hybrid

⏱️ Notice Period: Immediate Joiners

(Only immediate joiners & candidates serving notice period)


We’re looking for an experienced .NET Full Stack Developer with strong hands-on skills in ReactJS, .NET Core, and Azure Cloud Services (Azure Functions, Azure SQL, APIM, etc.).


⭐ Must-Have Skills:-

➡️ Design and develop scalable web applications using ReactJS, C#, and .NET Core.

➡️ Azure (Functions, App Services, SQL, APIM, Service Bus)

➡️ Familiarity with DevOps practices, CI/CD pipelines, Docker, and Kubernetes.

➡️ Advanced experience in Entity Framework Core and SQL Server.

➡️ Expertise in RESTful API development and microservices.


Deqode


1 recruiter
Apoorva Jain
Posted by Apoorva Jain
Pune
4 - 6 yrs
₹3L - ₹12L / yr
Amazon Web Services (AWS)
SQL
PL/SQL
DevOps
Docker
+5 more

Job Summary:

We are seeking a highly skilled and proactive DevOps Engineer with 4+ years of experience to join our dynamic team. This role requires strong technical expertise across cloud infrastructure, CI/CD pipelines, container orchestration, and infrastructure as code (IaC). The ideal candidate should also have direct client-facing experience and a proactive approach to managing both internal and external stakeholders.

Key Responsibilities:

  • Collaborate with cross-functional teams and external clients to understand infrastructure requirements and implement DevOps best practices.
  • Design, build, and maintain scalable cloud infrastructure on AWS (EC2, S3, RDS, ECS, etc.).
  • Develop and manage infrastructure using Terraform or CloudFormation.
  • Manage and orchestrate containers using Docker and Kubernetes (EKS).
  • Implement and maintain CI/CD pipelines using Jenkins or GitHub Actions.
  • Write robust automation scripts using Python and Shell scripting.
  • Monitor system performance and availability, and ensure high uptime and reliability.
  • Execute and optimize SQL queries for MSSQL and PostgreSQL databases.
  • Maintain clear documentation and provide technical support to stakeholders and clients.

Required Skills:

  • Minimum 4+ years of experience in a DevOps or related role.
  • Proven experience in client-facing engagements and communication.
  • Strong knowledge of AWS services – EC2, S3, RDS, ECS, etc.
  • Proficiency in Infrastructure as Code using Terraform or CloudFormation.
  • Hands-on experience with Docker and Kubernetes (EKS).
  • Strong experience in setting up and maintaining CI/CD pipelines with Jenkins or GitHub.
  • Solid understanding of SQL and working experience with MSSQL and PostgreSQL.
  • Proficient in Python and Shell scripting.

Preferred Qualifications:

  • AWS Certifications (e.g., AWS Certified DevOps Engineer) are a plus.
  • Experience working in Agile/Scrum environments.
  • Strong problem-solving and analytical skills.

Work Mode & Timing:

  • Hybrid – Pune-based candidates preferred.
  • Working hours: 12:30 PM to 9:30 PM IST to align with client time zones.


NeoGenCode Technologies Pvt Ltd
Pune
5 - 8 yrs
₹7L - ₹11L / yr
Python
Celery
RESTful APIs
Multithreading
Encryption
+6 more

Job Title : Senior Python Developer – Product Engineering

Experience : 5 to 8 Years

Location : Pune, India (Hybrid – 3-4 days WFO, 1-2 days WFH)

Employment Type : Full-time

Commitment : Minimum 3 years (with end-of-term bonus)

Openings : 2 positions

  • Junior : 3 to 5 Years
  • Senior : 5 to 8 Years

Mandatory Skills : Python 3.x, REST APIs, multithreading, Celery, encryption (OpenSSL/cryptography.io), PostgreSQL/Redis, Docker/K8s, secure coding


Nice to Have : Experience with EFSS/DRM/DLP platforms, delta sync, file systems, LDAP/AD/SIEM integrations


🎯 Roles & Responsibilities :

  • Design and develop backend services for DRM enforcement, file synchronization, and endpoint telemetry.
  • Build scalable Python-based APIs interacting with file systems, agents, and enterprise infra.
  • Implement encryption workflows, secure file handling, delta sync, and file versioning.
  • Integrate with 3rd-party platforms: LDAP, AD, DLP, CASB, SIEM.
  • Collaborate with DevOps to ensure high availability and performance of hybrid deployments.
  • Participate in code reviews, architectural discussions, and mentor junior developers.
  • Troubleshoot production issues and continuously optimize performance.
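Delta sync, mentioned in the responsibilities above, means transferring only the blocks of a file that changed between versions. A toy fixed-offset sketch of the idea (real tools such as rsync add rolling hashes to survive insertions; the block size and data here are illustrative):

```python
import hashlib

BLOCK = 4  # toy block size; real delta-sync engines use KB-sized blocks

def block_hashes(data: bytes):
    """SHA-256 digest per fixed-size block."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(old: bytes, new: bytes):
    """Indices of blocks that differ between two versions --
    only these blocks need to travel over the wire on sync."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    length = max(len(old_h), len(new_h))
    return [i for i in range(length)
            if i >= len(old_h) or i >= len(new_h) or old_h[i] != new_h[i]]

v1 = b"AAAABBBBCCCC"
v2 = b"AAAAXXXXCCCCDDDD"  # block 1 rewritten, block 3 appended
print(changed_blocks(v1, v2))  # → [1, 3]
```

Combined with versioned storage of the block lists, the same comparison also gives cheap file versioning: each version is the previous one plus its changed blocks.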

✅ Required Skills :

  • 5 to 8 years of hands-on experience in Python 3.x development.
  • Expertise in REST APIs, Celery, multithreading, and file I/O.
  • Proficient in encryption libraries (OpenSSL, cryptography.io) and secure coding.
  • Experience with PostgreSQL, Redis, SQLite, and Linux internals.
  • Strong command over Docker, Kubernetes, CI/CD, and Git workflows.
  • Ability to write clean, testable, and scalable code in production environments.
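On the secure-coding side, one building block of a secure file-handling pipeline is tamper evidence. A stdlib-only HMAC sketch (a production system would layer confidentiality on top, e.g. Fernet from the `cryptography` library named above; the key handling and function names here are illustrative):

```python
import hashlib
import hmac
import secrets

KEY = secrets.token_bytes(32)  # would be a managed per-deployment secret

def seal(payload: bytes, key: bytes = KEY) -> bytes:
    """Prepend an HMAC-SHA256 tag so any tampering is detectable."""
    return hmac.new(key, payload, hashlib.sha256).digest() + payload

def unseal(blob: bytes, key: bytes = KEY) -> bytes:
    """Verify the tag in constant time, then return the payload."""
    tag, payload = blob[:32], blob[32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return payload

blob = seal(b"classified.pdf contents")
assert unseal(blob) == b"classified.pdf contents"
```

`hmac.compare_digest` matters here: a naive `==` comparison can leak timing information about how much of the tag matched.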

➕ Preferred Skills :

  • Background in DRM, EFSS, DLP, or enterprise security platforms.
  • Familiarity with file diffing, watermarking, or agent-based tools.
  • Knowledge of compliance frameworks (GDPR, DPDP, RBI-CSF) is a plus.
Wissen Technology


4 recruiters
Anurag Sinha
Posted by Anurag Sinha
Pune, Mumbai, Bengaluru (Bangalore)
5 - 9 yrs
Best in industry
Python
RESTful APIs
Flask
Kubernetes
DevOps
+2 more
  • 5+ years of experience
  • Flask / REST API development experience
  • Proficiency in Python programming.
  • Basic knowledge of front-end development.
  • Basic knowledge of data manipulation and analysis libraries.
  • Code versioning and collaboration (Git).
  • Knowledge of libraries for extracting data from websites (web scraping).
  • Knowledge of SQL and NoSQL databases.
  • Familiarity with RESTful APIs.
  • Familiarity with cloud technologies (Azure/AWS).
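Flask and similar frameworks sit on the WSGI contract, which is worth knowing for testing and debugging REST services. A dependency-free sketch of a JSON endpoint, invoked directly the way a WSGI server (or a framework's test client) would invoke it — the route and payload are illustrative:

```python
import json

def app(environ, start_response):
    """Minimal WSGI JSON API: GET /health -> 200, anything else -> 404."""
    if environ.get("PATH_INFO") == "/health":
        status, payload = "200 OK", {"status": "ok"}
    else:
        status, payload = "404 Not Found", {"error": "not found"}
    body = json.dumps(payload).encode()
    start_response(status, [("Content-Type", "application/json"),
                            ("Content-Length", str(len(body)))])
    return [body]

# Drive the app directly, as a WSGI server would:
captured = {}
def fake_start_response(status, headers):
    captured["status"] = status

body = b"".join(app({"PATH_INFO": "/health"}, fake_start_response))
print(captured["status"], body)  # → 200 OK b'{"status": "ok"}'
```

In Flask the same endpoint would be a two-line `@app.route("/health")` view; the point of the sketch is the request/response cycle underneath it.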


AI driven consulting firm


Agency job
via PLEXO HR Solutions by Upashna Kumari
Pune
1 - 3 yrs
₹3L - ₹5L / yr
Google Cloud Platform (GCP)
CI/CD
Kubernetes
Terraform
Linux/Unix

What You’ll Do:

We’re looking for a skilled DevOps Engineer to help us build and maintain reliable, secure, and scalable infrastructure. You will work closely with our development, product, and security teams to streamline deployments, improve performance, and ensure cloud infrastructure resilience.


Responsibilities:

● Deploy, manage, and monitor infrastructure on Google Cloud Platform (GCP)

● Build CI/CD pipelines using Jenkins and integrate them with Git workflows

● Design and manage Kubernetes clusters and Helm-based deployments

● Manage infrastructure as code using Terraform

● Set up logging, monitoring, and alerting (Stackdriver, Prometheus, Grafana)

● Ensure security best practices across cloud resources, networks, and secrets

● Automate repetitive operations and improve system reliability

● Collaborate with developers to troubleshoot and resolve issues in staging/production environments


What We’re Looking For:

Required Skills:

● 1–3 years of hands-on experience in a DevOps or SRE role

● Strong knowledge of GCP services (IAM, GKE, Cloud Run, VPC, Cloud Build, etc.)

● Proficiency in Kubernetes (deployment, scaling, troubleshooting)

● Experience with Terraform for infrastructure provisioning

● CI/CD pipeline setup using Jenkins, GitHub Actions, or similar tools

● Understanding of DevSecOps principles and cloud security practices

● Good command over Linux, shell scripting, and basic networking concepts


Nice to have:

● Experience with Docker, Helm, ArgoCD

● Exposure to other cloud platforms (AWS, Azure)

● Familiarity with incident response and disaster recovery planning

● Knowledge of logging and monitoring tools like ELK, Prometheus, Grafana

Deqode


1 recruiter
Roshni Maji
Posted by Roshni Maji
Bengaluru (Bangalore), Gurugram, Mumbai, Pune
6 - 10 yrs
₹10L - ₹28L / yr
Java
React.js
Jenkins
Docker
Kubernetes

Job Title: Java Full Stack Developer

Experience: 6+ Years

Locations: Bangalore, Mumbai, Pune, Gurgaon

Work Mode: Hybrid

Notice Period: Immediate Joiners Preferred / Candidates Who Have Completed Their Notice Period


About the Role

We are looking for a highly skilled and experienced Java Full Stack Developer with a strong command over backend technologies and modern frontend frameworks. The ideal candidate will have deep experience with Java, ReactJS, and DevOps tools like Jenkins, Docker, and basic Kubernetes knowledge. You’ll be contributing to complex software solutions across industries, collaborating with cross-functional teams, and deploying production-grade systems in a cloud-native, CI/CD-driven environment.


Key Responsibilities

  • Design and develop scalable web applications using Java (Spring Boot) and ReactJS
  • Collaborate with UX/UI designers and backend developers to implement robust, efficient front-end interfaces
  • Develop and maintain CI/CD pipelines using Jenkins, ensuring high-quality software delivery
  • Containerize applications using Docker and ensure smooth deployment and orchestration using Kubernetes (basic level)
  • Write clean, modular, and testable code and participate in code reviews
  • Troubleshoot and resolve performance, reliability, and functional issues in production
  • Work in Agile teams and participate in daily stand-ups, sprint planning, and retrospective meetings
  • Ensure all security, compliance, and performance standards are met in the development lifecycle


Mandatory Skills

  • Backend: Java, Spring Boot
  • Frontend: ReactJS
  • DevOps Tools: Jenkins, Docker
  • Containers & Orchestration: Basic knowledge of Kubernetes
  • Strong understanding of RESTful services and APIs
  • Familiarity with Git and version control workflows
  • Good understanding of SDLC, Agile/Scrum methodologies


Wissen Technology


4 recruiters
Shikha Nagar
Posted by Shikha Nagar
Pune, Mumbai, Bengaluru (Bangalore)
8 - 10 yrs
Best in industry
Terraform
Google Cloud Platform (GCP)
Kubernetes
DevOps
SQL Azure

We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancing, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.

You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.


Key Responsibilities:

1. Cloud Infrastructure Design & Management

· Architect, deploy, and maintain GCP cloud resources via Terraform or other automation tools.

· Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.

· Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.

· Optimize resource allocation, monitoring, and cost efficiency across GCP environments.

2. Kubernetes & Container Orchestration

· Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).

· Work with Helm charts, Istio, and service meshes for microservices deployments.

· Automate scaling, rolling updates, and zero-downtime deployments.


3. Serverless & Compute Services

· Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.

· Optimize containerized applications running on Cloud Run for cost efficiency and performance.


4. CI/CD & DevOps Automation

· Design, implement, and manage CI/CD pipelines using Azure DevOps.

· Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting

· Integrate security and compliance checks into the DevOps workflow (DevSecOps).


Required Skills & Qualifications:

✔ Experience: 8+ years in Cloud Engineering, with a focus on GCP.

✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).

✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.

✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.

✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.

✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.

✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.


NeoGenCode Technologies Pvt Ltd
Remote, Bengaluru (Bangalore), Mumbai, Gurugram, Pune, Hyderabad, Chennai, Coimbatore
5 - 12 yrs
₹15L - ₹35L / yr
Temporal.io
Node.js
Java
React.js
Keycloak
+7 more

Job Title : Senior Consultant (Java / NodeJS + Temporal)

Experience : 5 to 12 Years

Location : Bengaluru, Chennai, Hyderabad, Pune, Mumbai, Gurugram, Coimbatore

Work Mode : Remote (Must be open to travel for occasional team meetups)

Notice Period : Immediate Joiners or Serving Notice

Interview Process :

  • R1 : Tech Interview (60 mins)
  • R2 : Technical Interview
  • R3 : (Optional) Interview with Client

Job Summary :

We are seeking a Senior Backend Consultant with strong hands-on expertise in Temporal (BPM/Workflow Engine) and either Node.js or Java.

The ideal candidate will have experience in designing and developing microservices and process-driven applications, as well as orchestrating complex workflows using Temporal.io.

You will work on high-scale systems, collaborating closely with cross-functional teams.


Mandatory Skills :

Temporal.io, Node.js (or Java), React.js, Keycloak IAM, PostgreSQL, Terraform, Kubernetes, Azure, Jest, OpenAPI


Key Responsibilities :

  • Design and implement scalable backend services using Node.js or Java.
  • Build and manage complex workflow orchestrations using Temporal.io.
  • Integrate with IAM solutions like Keycloak for role-based access control.
  • Work with React (v17+), TypeScript, and component-driven frontend design.
  • Use PostgreSQL for structured data persistence and optimized queries.
  • Manage infrastructure using Terraform and orchestrate via Kubernetes.
  • Leverage Azure Services like Blob Storage, API Gateway, and AKS.
  • Write and maintain API documentation using Swagger/Postman/Insomnia.
  • Conduct unit and integration testing using Jest.
  • Participate in code reviews and contribute to architectural decisions.

Must-Have Skills :

  • Temporal.io – BPMN modeling, external task workers, Operate, Tasklist
  • Node.js + TypeScript (preferred) or strong Java experience
  • React.js (v17+) and component-driven UI development
  • Keycloak IAM, PostgreSQL, and modern API design
  • Infrastructure automation with Terraform, Kubernetes
  • Experience in using GitFlow, OpenAPI, Jest for testing

Nice-to-Have Skills :

  • Blockchain integration experience for secure KYC/identity flows
  • Custom Camunda Connectors or exporter plugin development
  • CI/CD experience using Azure DevOps or GitHub Actions
  • Identity-based task completion authorization enforcement
Client based at Pune location.


Agency job
Pune
5 - 10 yrs
₹15L - ₹25L / yr
Cloud Developer
Amazon Web Services (AWS)
large-scale financial tracking systems
gRPC
Cloudflare
+8 more

Minimum requirements

5+ years of industry software engineering experience (internships and co-ops do not count)

Strong coding skills in any programming language (we understand new languages can be learned on the job so our interview process is language agnostic)

Strong collaboration skills, can work across workstreams within your team and contribute to your peers’ success

Ability to thrive with a high level of autonomy and responsibility, with an entrepreneurial mindset

Interest in working as a generalist across varying technologies and stacks to solve problems and delight both internal and external users

Preferred Qualifications

Experience with large-scale financial tracking systems

Good understanding and practical knowledge of cloud-based services and related technologies (e.g., gRPC, GraphQL, Docker/Kubernetes, and cloud providers such as AWS)

Deqode


1 recruiter
Naincy Jain
Posted by Naincy Jain
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Pune, Indore, Jaipur, Kolkata, Hyderabad
4 - 6 yrs
₹3L - ₹30L / yr
DevOps
Terraform
Kubernetes
Amazon Web Services (AWS)
AWS Lambda
+1 more

Required Skills:


  • Experience in a systems administration, SRE, or DevOps-focused role
  • Experience in handling production support (on-call)
  • Good understanding of the Linux operating system and networking concepts.
  • Demonstrated competency with the following AWS services: ECS, EC2, EBS, EKS, S3, RDS, ELB, IAM, Lambda.
  • Experience with Docker containers and containerization concepts
  • Experience with managing and scaling Kubernetes clusters in a production environment
  • Experience building scalable infrastructure in AWS with Terraform.
  • Strong knowledge of protocols such as HTTP/HTTPS, SMTP, DNS, and LDAP
  • Experience monitoring production systems
  • Expertise in leveraging Automation / DevOps principles, experience with operational tools, and able to apply best practices for infrastructure and software deployment (Ansible).
  • HAProxy, Nginx, SSH, MySQL configuration and operation experience
  • Ability to work seamlessly with software developers, QA, project managers, and business development
  • Ability to produce and maintain written documentation


NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Bengaluru (Bangalore), Pune, Hyderabad, Chennai, Kolkata
8 - 15 yrs
₹25L - ₹45L / yr
Java
Spring Boot
Microservices
Leadership
Team leadership
+11 more

Job Title : Lead Java Developer (Backend)

Experience Required : 8 to 15 Years

Open Positions : 5

Location : Any major metro city (Bengaluru, Pune, Chennai, Kolkata, Hyderabad)

Work Mode : Open to Remote / Hybrid / Onsite

Notice Period : Immediate Joiner/30 Days or Less


About the Role :

  • We are looking for experienced Lead Java Developers who bring not only strong backend development skills but also a product-oriented mindset and leadership capability.
  • This is an opportunity to be part of high-impact digital transformation initiatives that go beyond writing code—you’ll help shape future-ready platforms and drive meaningful change.
  • This role is embedded within a forward-thinking digital engineering team that thrives on co-innovation, lean delivery, and end-to-end ownership of platforms and products.


Key Responsibilities :

  • Design, develop, and implement scalable backend systems using Java and Spring Boot.
  • Collaborate with product managers, designers, and engineers to build intuitive and reliable digital products.
  • Advocate and implement engineering best practices : SOLID principles, OOP, clean code, CI/CD, TDD/BDD.
  • Lead Agile-based development cycles with a focus on speed, quality, and customer outcomes.
  • Guide and mentor team members, fostering technical excellence and ownership.
  • Utilize cloud platforms and DevOps tools to ensure performance and reliability of applications.

What We’re Looking For :

  • Proven experience in Java backend development (Spring Boot, Microservices).
  • 8+ years of hands-on engineering experience, with at least 2 years in a lead role.
  • Familiarity with cloud platforms such as AWS, Azure, or GCP.
  • Good understanding of containerization and orchestration tools like Docker and Kubernetes.
  • Exposure to DevOps and Infrastructure as Code practices.
  • Strong problem-solving skills and the ability to design solutions from first principles.
  • Prior experience in product-based or startup environments is a big plus.

Ideal Candidate Profile :

  • A tech enthusiast with a passion for clean code and scalable architecture.
  • Someone who thrives in collaborative, transparent, and feedback-driven environments.
  • A leader who takes ownership beyond individual deliverables to drive overall team and project success.

Interview Process

  1. Initial Technical Screening (via platform partner)
  2. Technical Interview with Engineering Team
  3. Client-facing Final Round

Additional Info :

  • Targeting profiles from product/startup backgrounds.
  • Strong preference for candidates with under 1 month of notice period.
  • Interviews will be fast-tracked for qualified profiles.
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Bengaluru (Bangalore), Pune, Chennai
10 - 20 yrs
₹30L - ₹60L / yr
Java
Spring Boot
Microservices
Apache Kafka
Amazon Web Services (AWS)
+8 more

📍 Position : Java Architect

📅 Experience : 10 to 15 Years

🧑‍💼 Open Positions : 3+

📍 Work Location : Bangalore, Pune, Chennai

💼 Work Mode : Hybrid

📅 Notice Period : Immediate joiners preferred; up to 1 month maximum

🔧 Core Responsibilities :

  • Lead architecture design and development for scalable enterprise-level applications.
  • Own and manage all aspects of technical development and delivery.
  • Define and enforce best coding practices, architectural guidelines, and development standards.
  • Plan and estimate the end-to-end technical scope of projects.
  • Conduct code reviews, ensure CI/CD, and implement TDD/BDD methodologies.
  • Mentor and lead individual contributors and small development teams.
  • Collaborate with cross-functional teams, including DevOps, Product, and QA.
  • Engage in high-level and low-level design (HLD/LLD), solutioning, and cloud-native transformations.

🛠️ Required Technical Skills :

  • Strong hands-on expertise in Java, Spring Boot, Microservices architecture
  • Experience with Kafka or similar messaging/event streaming platforms
  • Proficiency in cloud platforms: AWS and Azure (must-have)
  • Exposure to frontend technologies (nice-to-have)
  • Solid understanding of HLD, system architecture, and design patterns
  • Good grasp of DevOps concepts, Docker, Kubernetes, and Infrastructure as Code (IaC)
  • Agile/Lean development, Pair Programming, and Continuous Integration practices
  • Polyglot mindset is a plus (Scala, Golang, Python, etc.)

🚀 Ideal Candidate Profile :

  • Currently working in a product-based environment
  • Already functioning as an Architect or Principal Engineer
  • Proven track record as an Individual Contributor (IC)
  • Strong engineering fundamentals with a passion for scalable software systems
  • No compromise on code quality, craftsmanship, and best practices

🧪 Interview Process :

  1. Round 1: Technical pairing round
  2. Rounds 2 & 3: Technical rounds with panel (code pairing + architecture)
  3. Final Round: HR and offer discussion
Deqode

at Deqode

1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Pune
5 - 8 yrs
₹5L - ₹15L / yr
DevOps
Kubernetes
Terraform
Jenkins
Docker

🚀 Hiring: Azure DevOps Engineer – Immediate Joiners Only! 🚀

📍 Location: Pune (Hybrid)

💼 Experience: 5+ Years

🕒 Mode of Work: Hybrid

Are you a proactive and skilled Azure DevOps Engineer looking for your next challenge? We are hiring immediate joiners to join our dynamic team! If you are passionate about CI/CD, cloud automation, and SRE best practices, we want to hear from you.

🔹 Key Skills Required:

Cloud Expertise: Proficiency in any cloud (Azure preferred)

CI/CD Pipelines: Hands-on experience in designing and managing pipelines

Containers & IaC: Strong knowledge of Docker, Terraform, Kubernetes

Incident Management: Quick issue resolution and RCA

SRE & Observability: Experience with SLI/SLO/SLA, monitoring, tracing, logging

Programming: Proficiency in Python, Golang

Performance Optimization: Identifying and resolving system bottlenecks


Jio Tesseract
TARUN MISHRA
Posted by TARUN MISHRA
Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Bengaluru (Bangalore), Pune, Hyderabad, Mumbai, Navi Mumbai
5 - 40 yrs
₹8.5L - ₹75L / yr
Microservices
Architecture
API
NoSQL Databases
MongoDB
+33 more

JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization, with the mission to democratize mixed reality for India and the world. We build products at the intersection of hardware, software, content, and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR, and AI, with notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise space.


Mon-Fri, in-office role with excellent perks and benefits!


Key Responsibilities:

1. Design, develop, and maintain backend services and APIs using Node.js, Python, or Java.

2. Build and implement scalable and robust microservices and integrate API gateways.

3. Develop and optimize NoSQL database structures and queries (e.g., MongoDB, DynamoDB).

4. Implement real-time data pipelines using Kafka.

5. Collaborate with front-end developers to ensure seamless integration of backend services.

6. Write clean, reusable, and efficient code following best practices, including design patterns.

7. Troubleshoot, debug, and enhance existing systems for improved performance.
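The "design patterns" point above can be made concrete with a small sketch. This is an illustrative example only, not code from any real system: a repository pattern in plain Python, with an in-memory dict standing in for a NoSQL collection (e.g., MongoDB). All names here are hypothetical.

```python
# Repository pattern sketch: callers depend on a small interface,
# not on the storage technology behind it.
class UserRepository:
    """Hides storage details behind a swappable interface."""

    def __init__(self):
        self._store = {}  # stand-in for a database collection

    def save(self, user_id, document):
        # Copy to avoid callers mutating stored state.
        self._store[user_id] = dict(document)

    def find(self, user_id):
        return self._store.get(user_id)


repo = UserRepository()
repo.save("u1", {"name": "Asha", "role": "engineer"})
print(repo.find("u1")["name"])  # Asha
```

Swapping the dict for a real driver (e.g., a MongoDB client) would not change the callers, which is the reusability the responsibilities above ask for.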


Mandatory Skills:

1. Proficiency in at least one backend technology: Node.js, Python, or Java.


2. Strong experience in:

i. Microservices architecture,

ii. API gateways,

iii. NoSQL databases (e.g., MongoDB, DynamoDB),

iv. Kafka

v. Data structures (e.g., arrays, linked lists, trees).


3. Frameworks:

i. If Java : Spring framework for backend development.

ii. If Python: FastAPI/Django frameworks for AI applications.

iii. If Node: Express.js for Node.js development.


Good to Have Skills:

1. Experience with Kubernetes for container orchestration.

2. Familiarity with in-memory databases like Redis or Memcached.

3. Frontend skills: Basic knowledge of HTML, CSS, JavaScript, or frameworks like React.js.

Adesso

Adesso

Agency job
via HashRoot by Maheswari M
Kochi (Cochin), Chennai, Pune
4 - 7 yrs
₹4L - ₹20L / yr
Java
RESTful APIs
Kubernetes
Azure DevOps
+3 more

We are seeking a skilled Full Stack Engineer (FSE) with expertise in Java and Spring Boot development to join our dynamic team. In this role, you will be responsible for the development of critical banking applications across various business domains. You will collaborate closely with cross-functional teams to ensure high-quality solutions are developed, maintained, and continuously improved.

Responsibilities:

Development of business-critical banking applications.

Develop new features for banking applications using FSE technologies.

Ensure code quality through proper testing, reviews, and adherence to coding standards.

Collaborate with design, backend, and other teams to deliver seamless user experiences.

Troubleshoot, debug, and optimize performance issues.

Participate in agile development processes and contribute to continuous improvement initiatives.

Requirements:

Bachelor's/Master's degree in Computer Science, Software Engineering, or a related field.

4 – 6 years of relevant experience in application development.

Solid experience in:

Java, Spring Boot.

APIs / REST.

Kubernetes / OpenShift. 

Azure DevOps.

JMS, Message Queues.

Nice to have knowledge in: 

Quarkus.

Apache Camel.

Soft skills / Personality:

Excellent English communication skills and proactive communication.

Self-dependent working style.

Strong problem-solving skills (strong analytical skills to identify and solve complex issues).

High adaptability (flexibility in adjusting to different working environments and practices).

Quick critical thinking (evaluating information and making informed decisions).

Team collaboration (ability to work collaboratively with a distributed team).

Cultural awareness.

Initiative (proactively seeking solutions and improvements).

Good to have knowledge of core banking systems.

Good to have banking domain knowledge.

Experience in customer-facing roles is an advantage.

Skills & Requirements

Java, Spring Boot, APIs/REST, Kubernetes, OpenShift, Azure DevOps, JMS, Message Queues, Quarkus, Apache Camel, Excellent English communication, Proactive communication, Self-dependent working, Problem-solving, Analytical skills, Adaptability, Critical thinking, Team collaboration, Cultural awareness, Initiative, Core banking systems knowledge, Banking domain knowledge, Customer-facing experience.

OnActive
Mansi Gupta
Posted by Mansi Gupta
Gurugram, Pune, Bengaluru (Bangalore), Chennai, Bhopal, Hyderabad, Jaipur
5 - 8 yrs
₹6L - ₹12L / yr
Python
Spark
SQL
AWS CloudFormation
Machine Learning (ML)
+3 more

Level of skills and experience:


5 years of hands-on experience using Python, Spark, and SQL.

Experienced in AWS Cloud usage and management.

Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).

Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch.

Experience with orchestrators such as Airflow and Kubeflow.

Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).

Fundamental understanding of Parquet, Delta Lake and other data file formats.

Proficiency on an IaC tool such as Terraform, CDK or CloudFormation.

Strong written and verbal English communication skills, and proficiency in communicating with non-technical stakeholders.

Nyteco

at Nyteco

2 candid answers
1 video
Simran Thind
Posted by Simran Thind
Pune
2 - 3 yrs
₹5L - ₹7.5L / yr
Kubernetes
Troubleshooting
Terraform
Ansible

Company Overview:

Davis Index is a leading market intelligence platform and publication that specializes in providing accurate and up-to-date price benchmarks for ferrous and non-ferrous scrap, as well as primary metals. Our dedicated team of reporters, analysts, and data specialists publishes and processes over 1,400 proprietary price indexes, metals futures prices, and other reference data. In addition, we offer market intelligence, news, and analysis through an industry-leading technology platform. With a global presence across the Americas, Asia, Europe, and Africa, our team of over 50 professionals works tirelessly to deliver essential market insights to our valued clients.


Job Overview:

We are seeking a skilled Cloud Engineer to join our team. The ideal candidate will have a strong foundation in cloud technologies and a knack for automating infrastructure processes. You will be responsible for deploying and managing cloud-based solutions while ensuring optimal performance and reliability.


Key Responsibilities:

  • Design, deploy, and manage cloud infrastructure solutions.
  • Automate infrastructure setup and management using Terraform or Ansible.
  • Manage and maintain Kubernetes clusters for containerized applications.
  • Work with Linux systems for server management and troubleshooting.
  • Configure load balancers to route traffic efficiently.
  • Set up and manage database instances along with failover replicas.
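The load-balancing responsibility above boils down to a simple routing idea. As an illustrative sketch only (real deployments would use HAProxy, Nginx, or a cloud load balancer; the backend names below are made up), here is round-robin distribution in plain Python:

```python
# Round-robin load balancing: each request goes to the next backend
# in a fixed rotation, spreading traffic evenly across the pool.
from itertools import cycle


class RoundRobinBalancer:
    """Cycles requests across a fixed pool of backends."""

    def __init__(self, backends):
        self._backends = cycle(backends)

    def next_backend(self):
        return next(self._backends)


lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])
print([lb.next_backend() for _ in range(4)])  # ['app-1', 'app-2', 'app-3', 'app-1']
```

Production balancers layer health checks and weighting on top of this rotation, but the core scheduling idea is the same.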


Required Skills and Qualifications:

  • Minimum of Cloud Practitioner Certification.
  • Proficiency with Linux systems.
  • Hands-on experience with Kubernetes.
  • Expertise in writing automation scripts using Terraform or Ansible.
  • Strong understanding of cloud computing concepts.


Application Process: Candidates are encouraged to apply with their resumes and examples of relevant automation scripts they have written in Terraform or Ansible.


Wissen Technology

at Wissen Technology

4 recruiters
Tony Tom
Posted by Tony Tom
Pune
5 - 10 yrs
Best in industry
MariaDB
Kubernetes
MySQL
TiDB
Amazon Web Services (AWS)
+2 more
  • TiDB (good to have)
  • Kubernetes (must have)
  • MySQL (must have)
  • MariaDB (must have)
  • Looking for a candidate with stronger exposure to reliability than to maintenance
Pune
4 - 7 yrs
₹18L - ₹30L / yr
Large Language Models (LLM)
Python
Docker
Retrieval Augmented Generation (RAG)
SQL
+7 more

Job Description

Phonologies is seeking a Senior Data Engineer to lead data engineering efforts for developing and deploying generative AI and large language models (LLMs). The ideal candidate will excel in building data pipelines, fine-tuning models, and optimizing infrastructure to support scalable AI systems for enterprise applications.


Role & Responsibilities

  • Data Pipeline Management: Design and manage pipelines for AI model training, ensuring efficient data ingestion, storage, and transformation for real-time deployment.
  • LLM Fine-Tuning & Model Lifecycle: Fine-tune LLMs on domain-specific data, and oversee the model lifecycle using tools like MLFlow and Weights & Biases.
  • Scalable Infrastructure: Optimize infrastructure for large-scale data processing and real-time LLM performance, leveraging containerization and orchestration in hybrid/cloud environments.
  • Data Management: Ensure data quality, security, and compliance, with workflows for handling sensitive and proprietary datasets.
  • Continuous Improvement & MLOps: Apply MLOps/LLMOps practices for automation, versioning, and lifecycle management, while refining tools and processes for scalability and performance.
  • Collaboration: Work with data scientists, engineers, and product teams to integrate AI solutions and communicate technical capabilities to business stakeholders.
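The pipeline responsibilities above follow the familiar ingest, transform, load shape. As an illustrative sketch only (production pipelines would use tools like Spark or Airflow; the field names here are invented for the example):

```python
# Minimal ingest -> transform -> load pipeline in plain Python.

def ingest(rows):
    # Drop records missing the required field.
    return [r for r in rows if "text" in r]


def transform(rows):
    # Normalize text for downstream model training.
    return [{**r, "text": r["text"].strip().lower()} for r in rows]


def load(rows, sink):
    # Append the cleaned rows to the destination and report the count.
    sink.extend(rows)
    return len(rows)


raw = [{"text": "  Hello World  "}, {"id": 2}, {"text": "LLM data"}]
sink = []
count = load(transform(ingest(raw)), sink)
print(count, sink[0]["text"])  # 2 hello world
```

Each stage being a pure function is what makes pipelines like this testable and easy to rearrange in an orchestrator.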


Preferred Candidate Profile

  • Experience: 5+ years in data engineering, focusing on AI/ML infrastructure, LLM fine-tuning, and deployment.
  • Technical Skills: Advanced proficiency in Python, SQL, and distributed data tools.
  • Model Management: Hands-on experience with MLFlow, Weights & Biases, and model lifecycle management.
  • AI & NLP Expertise: Familiarity with LLMs (e.g., GPT, BERT) and NLP frameworks like Hugging Face Transformers.
  • Cloud & Infrastructure: Strong skills with AWS, Azure, Google Cloud, Docker, and Kubernetes.
  • MLOps/LLMOps: Expertise in versioning, CI/CD, and automating AI workflows.
  • Collaboration & Communication: Proven ability to work with cross-functional teams and explain technical concepts to non-technical stakeholders.
  • Education: Degree in Computer Science, Data Engineering, or related field.

Perks and Benefits

  • Competitive Compensation: INR 20L to 30L per year.
  • Innovative Work Environment for Personal Growth: Work with cutting-edge AI and data engineering tools in a collaborative setting that supports continuous learning in data engineering and AI.


Hipla By InVentry
Ravi Gondaliya
Posted by Ravi Gondaliya
Pune
3 - 5 yrs
₹7L - ₹10L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+3 more

Role & Responsibilities

  • The DevOps Engineer will work on the implementation and management of DevOps tools and technologies.
  • Create and support advanced pipelines using Gitlab.
  • Create and support advanced container and serverless environments.
  • Deploy Cloud infrastructure using Terraform and cloud formation templates.
  • Implement deployments to OpenShift Container Platform, Amazon ECS and EKS
  • Troubleshoot containerized builds and deployments
  • Implement processes and automations for migrating between OpenShift, AKS and EKS
  • Implement CI/CD automations.

Required Skillsets

  • 3-5 years of software engineering experience with cloud-based architectures.
  • Deep understanding of Kubernetes and its architecture.
  • Mastery of cloud security engineering tools, techniques, and procedures.
  • Experience with AWS services such as Amazon S3, EKS, ECS, DynamoDB, AWS Lambda, API Gateway, etc.
  • Experience with designing and supporting infrastructure via Infrastructure-as-Code in AWS, using CDK, CloudFormation templates, Terraform, or other toolsets.
  • Experience with tools like Jenkins, GitHub, Puppet, or other similar toolsets.
  • Experience with monitoring tools such as CloudWatch, New Relic, Grafana, Splunk, etc.
  • Excellence in verbal and written communication, and in working collaboratively with a variety of colleagues and clients in a remote development environment.
  • Proven track record in cloud computing systems and enterprise architecture and security
codersbrain

at codersbrain

1 recruiter
Tanuj Uppal
Posted by Tanuj Uppal
Bengaluru (Bangalore), Mumbai, Pune, Chennai, Noida, Gurugram, Ahmedabad
8 - 15 yrs
₹10L - ₹15L / yr
Jenkins
Prometheus
Terraform
Kubernetes
Splunk
+1 more

DevOps Engineer (Permanent)


Experience: 8 to 12 yrs

Location: Remote for 2-3 months (Any Mastek Location- Chennai/Mumbai/Pune/Noida/Gurgaon/Ahmedabad/Bangalore)

Max Salary = 28 LPA (including 10% variable)

Notice Period: Immediate/ max 10days

Mandatory Skills: Either Splunk/Datadog, GitLab, Retail Domain




· Bachelor’s degree in Computer Science/Information Technology, or in a related technical field or equivalent technology experience.

· 10+ years’ experience in software development

· 8+ years of experience in DevOps

· Mandatory Skills: Either Splunk/Datadog, GitLab, EKS, retail domain experience

· Experience with the following Cloud Native tools: Git, Jenkins, Grafana, Prometheus, Ansible, Artifactory, Vault, Splunk, Consul, Terraform, Kubernetes

· Working knowledge of containers (i.e., Docker, Kubernetes), ideally with experience transitioning an organization through their adoption

· Demonstrable experience with configuration, orchestration, and automation tools such as Jenkins, Puppet, Ansible, Maven, and Ant to provide full stack integration

· Strong working knowledge of enterprise platforms, tools and principles including Web Services, Load Balancers, Shell Scripting, Authentication, IT Security, and Performance Tuning

· Demonstrated understanding of system resiliency, redundancy, failovers, and disaster recovery

· Experience working with a variety of vendor APIs including cloud, physical and logical infrastructure devices

· Strong working knowledge of Cloud offerings & Cloud DevOps Services (EC2, ECS, IAM, Lambda, Cloud services, AWS CodeBuild, CodeDeploy, Code Pipeline etc or Azure DevOps, API management, PaaS)

· Experience managing and deploying Infrastructure as Code, using tools like Terraform, Helm charts, etc.

· Manage and maintain standards for DevOps tools used by the team
