50+ Kubernetes Jobs in Pune | Kubernetes Job openings in Pune
Apply to 50+ Kubernetes Jobs in Pune on CutShort.io. Explore the latest Kubernetes Job opportunities across top companies like Google, Amazon & Adobe.
Your Mission: The Role
solving for better.
You are a reliability-owning, hands-on solver. Not just a "break-fix engineer."
As a DRI (directly responsible individual) for our clients' most critical systems, you’ll be the go-to expert within the squad that ensures their environments are secure, reliable, and optimized 24/7. You will deliver measurable impact – improved uptime, faster response times, and real cost savings. Not just closed tickets. Not just alerts. Real outcomes you engineer yourself.
You will lead the charge on technical execution, from complex troubleshooting and root cause analysis to engineering proactive, automated solutions. This role is about building the future of reliable cloud operations and shipping it into today's production environments.
Your Responsibilities
what you will wake up to solve.
This isn’t a “manage tickets” role. You are the architect, the executor, and the DRI for our Cloud Managed Services GTM, deploying solutions that turn operational noise into hardened outcomes. Here’s how you’ll make your mark:
- Own Service Reliability: You will be the go-to technical expert for 24/7 cloud operations and incident management. You'll ensure strict adherence to SLOs by getting your hands dirty, leading high-stakes troubleshooting to deliver a superior client experience.
- Engineer the Blueprint: You'll translate client needs into scalable, automated, and secure cloud architectures. You will write and maintain the operational playbooks and Infrastructure as Code (IaC) that your squad uses every day.
- Automate with Intelligence: You'll lead the charge from the keyboard to futurify our operations. You'll embed AI-driven automation, predictive monitoring, and AIOps into core processes to eliminate toil and preempt incidents.
- Drive FinOps & Impact: You'll own the technical execution of the FinOps framework. You will continuously analyze, configure, and optimize cloud spend for clients through hands-on engineering.
- Be the Expert in the Room: You'll share your knowledge through internal demos, documentation, and technical deep dives, representing the deep expertise that turns operational complexity into business resilience.
- Mentor & Elevate: You will be a technical mentor for your peers. Through code reviews and collaborative problem-solving, you'll help build a high-performing squad that lives the “Always Hardened” mindset.
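The "strict adherence to SLOs" mentioned above comes down to error-budget math. A minimal sketch of that arithmetic (the numbers are hypothetical; real targets come from each client's SLA):

```python
# Illustrative error-budget math behind SLO-driven operations.
# SLO targets and downtime figures here are made up for the example.

def error_budget_minutes(slo_target: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime in the window for a given availability SLO."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo_target)

def budget_remaining(slo_target: float, downtime_minutes: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means SLO breach)."""
    budget = error_budget_minutes(slo_target, window_days)
    return (budget - downtime_minutes) / budget

# A 99.9% monthly SLO allows ~43.2 minutes of downtime:
print(round(error_budget_minutes(0.999), 1))    # 43.2
print(round(budget_remaining(0.999, 10.0), 3))  # 0.769
```

Tracking spend against this budget is what turns "improved uptime" from a slogan into a number you can report to a client.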
Experience & Relevance
We are looking for future technology leaders, not just coders. We value raw intelligence, analytical rigor, and an obsessive passion for technology over any prior experience.
- Cloud Operations Pedigree: 3+ years of experience in AWS cloud infrastructure, with a significant portion in a cloud managed services environment.
- Commercial Acumen: Proven track record of building and scaling a net-new managed services business.
- Client-Facing Tech Acumen: 2+ years of experience in a client-facing technical role, acting as the trusted advisor for cloud operations, security, and reliability.
Functional Skills:
- Service Delivery Mindset: A deep understanding of MSP business models, SLAs, and the importance of client satisfaction in an operational context.
- Client Engagement: Ability to ask appropriate questions to get to the heart of an operational issue and win trust with stakeholders.
- Cross-Functional Catalyst: Thrive in multi-disciplinary teams, bringing together operations, security, and development teams.
- Repository builder: Creates reusable frameworks, IaC modules, and operational playbooks for scale.
Join the ‘real solvers’
ready to futurify?
If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.
Position: Microsoft .NET Full Stack Developer
Experience: 4–6 Years
Open Positions: 10
Location: PAN India (Final Round – Face-to-Face Interview)
Budget: Up to 15 LPA
Notice Period: Immediate joiners preferred
Key Responsibilities:
· Work on highly distributed and scalable system architecture
· Design, develop, test, and maintain high-quality software solutions
· Ensure performance, security, and maintainability of applications
· Collaborate with cross-functional teams and stakeholders
· Perform system testing and resolve technical issues
Required Skills:
· Strong experience in ASP.NET, C#, .NET Core, MVC
· Hands-on experience with SQL Server / PostgreSQL
· Experience in Angular / React (Frontend technologies)
· Knowledge of microservices architecture & RESTful APIs
· Familiarity with CQRS pattern
· Exposure to AWS / Docker / Kubernetes
· Experience with CI/CD pipelines (Azure DevOps, Jenkins)
· Knowledge of Node.js is an added advantage
· Understanding of Agile methodology
· Good exposure to cybersecurity and compliance
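For candidates brushing up on the CQRS pattern listed above: the core idea is that commands (writes) and queries (reads) go through separate models. A minimal, framework-free sketch, in Python for brevity (the role itself is C#/.NET, and every name here is illustrative, not from any specific library):

```python
# Minimal CQRS sketch: commands mutate state through a command handler,
# queries read from a separate read model. All names are illustrative.

class CreateOrder:
    """A command: an intent to change state, not a direct data access."""
    def __init__(self, order_id: str, item: str):
        self.order_id, self.item = order_id, item

class CommandHandler:
    def __init__(self, read_model: dict):
        self._store = {}              # write model (source of truth)
        self._read_model = read_model
    def handle(self, cmd: CreateOrder) -> None:
        self._store[cmd.order_id] = cmd.item
        # Project the change into the read model (often asynchronous in practice).
        self._read_model[cmd.order_id] = {"item": cmd.item, "status": "created"}

class QueryHandler:
    def __init__(self, read_model: dict):
        self._read_model = read_model
    def get_order(self, order_id: str):
        return self._read_model.get(order_id)

read_model = {}
commands = CommandHandler(read_model)
queries = QueryHandler(read_model)
commands.handle(CreateOrder("o-1", "widget"))
print(queries.get_order("o-1"))  # {'item': 'widget', 'status': 'created'}
```

The payoff is that the read side can be shaped and scaled independently of the write side, which is why CQRS often appears alongside the microservices and messaging-queue items in this stack.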
Technology Stack:
· Microsoft .NET technologies (primary)
· Cloud platforms: AWS (SaaS/PaaS/IaaS)
· Databases: MSSQL, MongoDB, PostgreSQL
· Caching: Redis, Memcached
· Messaging queues: RabbitMQ, Kafka, SQS
Lead Cloud Reliability Engineer
Job Responsibilities
● Lead and manage the Cloud Reliability teams to provide strong Managed Services support to end-customers.
● Isolate, troubleshoot and resolve issues reported by CMS clients in their cloud environment
● Drive communication with the customer, providing details about the issue, current steps, next plan of action, and ETA
● Gather clients' requirements related to the use of specific cloud services and provide assistance in setting them up and resolving issues
● Create SOPs and knowledge articles for use by the L1 teams to resolve common issues
● Identify recurring issues, perform root cause analysis and propose/implement preventive actions
● Follow change management procedure to identify, record and implement changes
● Plan and deploy OS and security patches in Windows/Linux environments and upgrade k8s clusters
● Identify the recurring manual activities and contribute to automation
● Provide technical guidance and educate team members on development and operations. Monitor metrics and develop ways to improve.
● System troubleshooting and problem-solving across platform and application domains, with the ability to use a wide variety of open-source technologies and cloud services.
● Build, maintain, and monitor configuration standards.
● Ensure critical system security using best-in-class cloud security solutions.
Qualifications
● 4-7 years of experience in Cloud Infrastructure and Operations domains, with IT operational experience, preferably in a global enterprise environment.
● Specialize in one or two cloud deployment platforms: AWS, GCP
● Hands on experience with AWS/GCP services (EKS, ECS, EC2, VPC, RDS, Lambda, GKE, Compute Engine)
● Understanding of one or more programming languages (Python, JavaScript, Ruby, Java, .Net)
● Logging and Monitoring tools (ELK, Stackdriver, CloudWatch)
● Knowledge of Configuration Management tools such as Ansible, Terraform, Puppet, Chef
● Experience working with deployment and orchestration technologies (such as Docker, Kubernetes, Mesos)
● Good analytical, communication, problem solving, and learning skills.
● Knowledge of programming against cloud platforms such as Google Cloud Platform, and of lean development methodologies.
● Strong service attitude and a commitment to quality.
● Willingness to work in shifts.
About the role
We are looking for talented Senior Backend Engineers (5+ years of experience) to join our team and take ownership of different parts of our stack. You will be working alongside a team of Engineers locally and directly with the U.S. Engineering team on all aspects of product/application development. You will leverage your experiences and abilities to inform decisions across product development and technology. You will help us build the foundation of our 2nd Headquarters in Pune: its culture, its processes, and its practices. There are a ton of interesting problems to solve, so come hungry. If your colleagues describe you as curious, driven, kind, and creative, you are a culture fit.
What Success Looks Like
- You write, review and ship code in production. Your employer or client's success depends on the software you build
- You use Generative AI tools on a daily basis to enhance the quality and efficacy of your software and non-software deliverables
- You are a self-starter and enjoy working with minimal supervision
- You evaluate and make technical architecture decisions with a long-term view, optimizing for speed, quality, and safety
- You take pride in the product you create and the code that you write
- Your team can rely on you to get them out of a sticky situation in production
- You can work well on a team of sales executives, designers and engineers in an in-person environment
- You are passionate about the enterprise software development lifecycle and feel strongly about improving it
- You are a first principles engineer who exercises curiosity about the technologies you work with
- You can learn quickly about technologies, software and code that you are not familiar with, often from rudimentary documentation
- You take ownership of the code that you write, and you help the team operate with everything that you build, throughout its lifecycle
- You communicate openly and solicit feedback on important decisions, keeping the team aligned on your rationale
- You exercise an optimistic mindset and are willing to go the extra mile to make things work
Areas of Ownership
Our hiring process is designed for you to demonstrate a generalist set of capabilities, with a specialization in Backend Technologies.
Required Technical Experience (MUST HAVE):
- Expertise in Python
- Deep hands-on experience with Terraform
- Proficiency in Kubernetes
- Experience with cloud platforms (GCP strongly preferred, AWS/Azure acceptable)
Additional experience with some of the following:
- Backend Frameworks and Technologies (Node.js, NuxtJS, Express.js)
- Programming languages (JavaScript, TypeScript, Java, C++, Go)
- RPCs (REST, gRPC or GraphQL)
- Databases (SQL, NoSQL, Postgres, MongoDB, or Firebase)
- CI/CD (Jenkins, CircleCI, GitLab or similar)
- Source code versioning tools such as Git or Perforce
- Microservices architecture
Ways to stand out
- Familiarity with AI Platforms
- Extensive experience with building enterprise-scale applications with >99% SLAs
- Deep expertise across the full required stack: Python, Terraform, Kubernetes, and GCP
You'll Get...
- Competitive Salary
- Medical Insurance Benefits
- Employer Provident Fund contributions with Gratuity after 5 years of service
- Company-sponsored US onsite trips for high performers, based on business requirements
- Potential international transfer support for top performers, based on business requirements
- Technology (hardware, software, trainings, etc.) equipment and/or allowance
- The opportunity to re-shape an entire industry
- Beautiful office environment
- Meal allowance and/or food provision on site
Culture
Who we are: Our Co-Founder and CTO is a Serial Gen AI Inventor who grew up in Pune, India, is a BITS Pilani graduate, and worked at NVIDIA's Pune office for 6 years. There, he was promoted 5 times in 6 years and was transferred to the NVIDIA Headquarters in Santa Clara, California. After making significant contributions to NVIDIA, he proceeded to attend Harvard for his dual Masters in Engineering and MBA from HBS. Our other Co-Founder/CEO is a successful Serial Entrepreneur who has built multiple companies. As a team, we work very hard, have a curious mind-set, and believe in a low-ego high output approach.
Virtual Hiring Drive: Site Reliability Engineer (SRE)
Date: 25th April 2026, Saturday (Single-Day Drive)
Mode: 100% Virtual - All interview rounds on the same day
Experience: 3 to 7 Years
Note: We are looking for quick joiners who can start within 30 days.
About the Role
We are looking for a Site Reliability Engineer who understands the realities of running production systems at scale. If building reliable, scalable, and observable systems excites you, you'll enjoy working with us.
At One2N, we solve One-to-N problems where proof of concept is already built and the real challenge lies in scalability, maintainability, performance, and reliability.
You will work closely with startups and mid-sized clients, helping them architect production-grade infrastructure and observability systems.
Key Responsibilities
- Design and build platform engineering solutions with a self-serve model
- Architect and optimize observability systems (metrics, logs, traces)
- Implement monitoring, logging, alerting & dashboards
- Build and optimize CI/CD pipelines
- Automate repetitive operational and infrastructure tasks (IaC-first approach)
- Improve Developer Experience (DX)
- Guide teams on SRE best practices & on-call processes
- Participate in code reviews and mentor engineers
- Contribute to cloud-native and platform engineering initiatives
Must-Have Skills
- 3-7 years of experience in DevOps / SRE / Platform Engineering
- Strong hands-on with Kubernetes on AWS
- Expertise in observability tools like Datadog / Honeycomb / ELK / Grafana / Prometheus
- Experience with Docker & Microservices architecture
- Infrastructure as Code using Terraform / Pulumi
- Strong Linux troubleshooting skills
- Programming knowledge in Golang / Python / Java
- Automation & scripting expertise
✅ Mandatory Skills
- Strong programming experience in C++ (C++11/14/17)
- Hands-on experience with Kubernetes (K8s), including application-level understanding
- Experience with StatefulSets & DaemonSets
- Good understanding of Linux systems
- Experience in multithreading and concurrency
- Strong problem-solving and debugging skills
⭐ Good to Have
- Experience in Microservices architecture
- Knowledge of Docker / containerization
- Basic knowledge of Python (for scripting/automation)
- Exposure to Distributed Systems
- Familiarity with CI/CD pipelines
- Experience with cloud platforms (AWS / Azure / GCP)
Job Title: Lead Software Engineer
Experience: 4-12 years
Department: Software
Reports To: Senior Software Engineer / Software Architect
Purpose of the Role
The incumbent will be responsible for designing and developing robust software solutions for products in the domains of Warehouse Automation, Industrial Automation, Robotics, and IoT. The role includes defining software architecture, ensuring scalability and performance, and mentoring the development team to drive technical excellence and innovation.
Technical Skills Required
- Proven experience in designing, developing, and deploying high-volume, scalable applications.
- Expertise in distributed systems, microservices, and central system architectures.
- Programming & Frameworks: Proficiency in Java 17+.
- Experience with frameworks such as Spring, Hibernate, Kubernetes, and RESTful APIs.
- Knowledge of JPA, MS SQL, and database modelling/design.
- Hands-on experience with GCP, AWS, or Azure for cloud architecture.
- Familiarity with virtualization and containerization technologies.
- Strong skills in data modelling and database design.
- Knowledge of secure coding practices.
- Tech stack: Java, MSSQL, MySQL, Spring Boot, Redis, Data Structures, Linux, basics of Kubernetes.
Behavioural Skills Required
- Attention to Detail (Proficient)
- Problem Solving
- Decision Making
- Collaborative approach
- Adaptability to a volatile environment
- Accountability
- Good Leadership skills
Job Responsibilities
- Understand requirements and define database and application structure under guidance of Software Architect.
- Write high-quality, scalable, and efficient code.
- Prepare Functional Requirement Documents (FRD) based on inputs from BA team.
- Guide junior and mid-level developers and provide technical support.
- Collaborate to identify and fix technical issues in UAT/Production.
- Work closely to meet project deadlines.
- Take ownership of product implementations at customer sites.
- Hands-on development for assigned modules/products.
- Handle application performance in production.
- Work with customers to understand automation requirements.
- Review and merge code changes from the team.
- Conduct sprint meetings, demos, and resolve development roadblocks.
- Optimize code for performance and efficiency.
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
We're seeking an experienced Senior Backend Engineer to join our team. As a senior backend engineer, you will be responsible for designing, developing, and deploying scalable backends for the products we build at NonStop. This includes APIs, databases, and server-side logic.
Responsibilities:
● Design, develop, and deploy backend systems, including APIs, databases, and server-side logic
● Write clean, efficient, and well-documented code that adheres to industry standards and best practices
● Code Quality: Ensure code quality through code reviews, adherence to best practices, and continuous improvement
● Mentorship: Guide and mentor team members, fostering growth and innovation
● Collaboration: Work closely with stakeholders to align technical goals with business objectives
● Problem-Solving: Analyze and resolve technical challenges promptly
● Innovation: Stay updated with the latest technology trends and integrate them into solutions
Requirements:
● At least 7 years of experience building scalable and reliable backend systems
● Strong expertise in NodeJS/NestJS, Express, PostgreSQL
● Experience with microservices architecture and distributed systems
● Proficiency in database design (SQL and NoSQL)
● Knowledge of cloud platforms (AWS, Azure, or GCP) and CI/CD pipelines
● Deep understanding of design patterns, data structures, and algorithms
● Hands-on experience with containerization technologies like Docker and orchestration tools like Kubernetes
● Exceptional communication and leadership skills
● Strong understanding of object-oriented programming principles and design patterns
● Familiarity with automated testing frameworks and methodologies
● Excellent problem-solving skills and attention to detail
● Strong communication skills and ability to effectively lead and maintain a collaborative team environment
Senior Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform
- Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform.
- Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.
- Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- Minimum of 7 years of progressive experience with back-end development in a Client Server Application development environment focused on Systems Management, Systems Monitoring and Performance Management Software.
- Deep experience in public cloud environments using Kubernetes and other distributed managed services such as Kafka (Google Cloud and/or AWS)
- Experience with CI/CD and cloud-based software development and delivery
- Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java. Deep experience with one of these languages is required.
- Bachelor’s or Master’s degree in Computer Science, Computer Engineering, or equivalent
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
Desired Qualifications:
- Prior experience with other container and virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills
- Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
About Virtana: Virtana delivers the industry’s broadest and deepest Observability Platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.

Global Digital Transformation Solutions Provider
Job Details
- Job Title: Lead Software Engineer - Java, Python, API Development
- Industry: Global digital transformation solutions provider
- Domain - Information technology (IT)
- Experience Required: 8-10 years
- Employment Type: Full Time
- Job Location: Pune & Trivandrum/ Thiruvananthapuram
- CTC Range: Best in Industry
Job Description
Job Summary
We are seeking a Lead Software Engineer with strong hands-on expertise in Java and Python to design, build, and optimize scalable backend applications and APIs. The ideal candidate will bring deep experience in cloud technologies, large-scale data processing, and leading the design of high-performance, reliable backend systems.
Key Responsibilities
- Design, develop, and maintain backend services and APIs using Java and Python
- Build and optimize Java-based APIs for large-scale data processing
- Ensure high performance, scalability, and reliability of backend systems
- Architect and manage backend services on cloud platforms (AWS, GCP, or Azure)
- Collaborate with cross-functional teams to deliver production-ready solutions
- Lead technical design discussions and guide best practices
Requirements
- 8+ years of experience in backend software development
- Strong proficiency in Java and Python
- Proven experience building scalable APIs and data-driven applications
- Hands-on experience with cloud services and distributed systems
- Solid understanding of databases, microservices, and API performance optimization
Nice to Have
- Experience with Spring Boot, Flask, or FastAPI
- Familiarity with Docker, Kubernetes, and CI/CD pipelines
- Exposure to Kafka, Spark, or other big data tools
Skills
Java, Python, API Development, Data Processing, AWS Backend
Must-Haves
Java (8+ years), Python (8+ years), API Development (8+ years), Cloud Services (AWS/GCP/Azure), Database & Microservices
8+ years of experience in backend software development
Strong proficiency in Java and Python
Proven experience building scalable APIs and data-driven applications
Hands-on experience with cloud services and distributed systems
Solid understanding of databases, microservices, and API performance optimization
Mandatory Skills: Java API AND AWS
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Pune, Trivandrum

Global Digital Transformation Solutions Provider
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
- Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
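The large-file handling described above can be scripted ahead of the migration. A minimal sketch that flags Git LFS candidates in a checked-out tree (the 100 MB threshold is GitHub's per-file push limit; paths and follow-up commands are illustrative):

```python
# Pre-migration sweep: flag files over GitHub's 100 MB per-file limit
# so they can be routed through Git LFS before pushing.
import os

GITHUB_LIMIT_BYTES = 100 * 1024 * 1024  # GitHub rejects pushes containing larger files

def lfs_candidates(root: str, limit: int = GITHUB_LIMIT_BYTES):
    """Yield (path, size) for files that should be tracked by Git LFS."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            if size > limit:
                yield path, size

# Typical follow-up, run in the migrated repository (shown as comments):
#   git lfs install
#   git lfs track "*.bin"     # or track the specific oversized paths found above
#   git add .gitattributes
```

Running a sweep like this before the first push avoids the most common migration failure mode: a rejected push halfway through history import.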
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps Tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
Role: DevOps Engineer
Experience: 7+ Years
Location: Pune / Trivandrum
Work Mode: Hybrid
Key Responsibilities:
- Drive CI/CD pipelines for microservices and cloud architectures
- Design and operate cloud-native platforms (AWS/Azure)
- Manage Kubernetes/OpenShift clusters and containerized applications
- Develop automated pipelines and infrastructure scripts
- Collaborate with cross-functional teams on DevOps best practices
- Mentor development teams on continuous delivery and reliability
- Handle incident management, troubleshooting, and root cause analysis
Mandatory Skills:
- 7+ years in DevOps/SRE roles
- Strong experience with AWS or Azure
- Hands-on with Docker, Kubernetes, and/or OpenShift
- Proficiency in Jenkins, Git, Maven, JIRA
- Strong scripting skills (Shell, Python, Perl, Ruby, JavaScript)
- Solid networking knowledge and troubleshooting skills
- Excellent communication and collaboration abilities
Preferred Skills:
- Experience with Helm, monitoring tools (Splunk, Grafana, New Relic, Datadog)
- Knowledge of Microservices and SOA architectures
- Familiarity with database technologies

Global digital transformation solutions provider.
Job Description
We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.
Key Responsibilities:
- Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
- Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
- Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
- Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
- Ensure compliance with security best practices and organizational policies across GCP environments.
- Document processes, configurations, and architectural decisions to maintain operational transparency.
- Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.
Mandatory Skills:
- Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
- Jenkins – Expertise in Declarative Pipeline creation and optimization.
- CI/CD – Strong understanding of automated build, test, and deployment workflows.
- Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
- Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks.
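The SRE principles in the list above, error budgets especially, are easy to make concrete. Below is a minimal, self-contained sketch; the 99.9% SLO and 30-day window are illustrative assumptions, not figures from the posting:

```python
# Hypothetical error-budget arithmetic: how many minutes of downtime does a
# given availability SLO allow, and how much of that budget is left?

def error_budget_minutes(slo: float, period_minutes: int = 30 * 24 * 60) -> float:
    """Minutes of allowed downtime for a given availability SLO over the period."""
    return (1.0 - slo) * period_minutes

def budget_remaining(slo: float, downtime_minutes: float,
                     period_minutes: int = 30 * 24 * 60) -> float:
    """Fraction of the error budget still unspent (negative means SLO breached)."""
    budget = error_budget_minutes(slo, period_minutes)
    return (budget - downtime_minutes) / budget

# A 99.9% SLO over a 30-day month allows ~43.2 minutes of downtime.
print(round(error_budget_minutes(0.999), 1))    # 43.2
print(round(budget_remaining(0.999, 21.6), 2))  # 0.5
```

Teams typically page on the burn *rate* of this budget rather than on raw error counts, which is why the fraction form is useful.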
Preferred Skills:
- Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
- Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
- Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
- GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.
Skills
GCP, Jenkins, CI/CD, AWS
******
Notice period - 0 to 15 days only
Location – Pune, Trivandrum, Kochi, Chennai
Job Description – Full Stack Developer (React + Node.js)
Experience: 5–8 Years
Location: Pune
Work Mode: WFO
Employment Type: Full-time
About the Role
We are looking for an experienced Full Stack Developer with strong hands-on expertise in React and Node.js to join our engineering team. The ideal candidate should have solid experience building scalable applications, working with production systems, and collaborating in high-performance tech environments.
Key Responsibilities
- Design, develop, and maintain scalable full-stack applications using React and Node.js.
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Write clean, maintainable, and efficient code following OOP/FP and SOLID principles.
- Work with relational databases such as PostgreSQL or MySQL.
- Deploy and manage applications in cloud environments (preferably GCP or AWS).
- Optimize application performance, troubleshoot issues, and ensure high availability in production systems.
- Utilize containerization tools like Docker for efficient development and deployment workflows.
- Integrate third-party services and APIs, including AI APIs and tools.
- Contribute to improving development processes, documentation, and best practices.
Required Skills
- Strong experience with React.js (frontend).
- Solid hands-on experience with Node.js (backend).
- Good understanding of relational databases: PostgreSQL / MySQL.
- Experience working in production environments and debugging live systems.
- Strong understanding of OOP or Functional Programming, and clean coding standards.
- Knowledge of Docker or other containerization tools.
- Experience with cloud platforms (GCP or AWS).
- Excellent written and verbal communication skills.
Good to Have
- Experience with Golang or Elixir.
- Familiarity with Kubernetes, RabbitMQ, Redis, etc.
- Contributions to open-source projects.
- Previous experience working with AI APIs or machine learning tools.

Global Digital Transformation Solutions Provider
Core Responsibilities:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
- Model Development: Algorithms and architectures span traditional statistical methods to deep learning, including the use of LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
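The Data Preparation step above (cleanse and transform data before training and evaluation) can be sketched in a few lines of plain Python; the record fields (`age`, `income`) and the derived feature are invented for illustration:

```python
# Hypothetical data-preparation sketch: drop incomplete rows, coerce types,
# and engineer one simple feature before model training.

from statistics import mean

def cleanse(records):
    """Drop rows with missing values and coerce numeric fields to float."""
    out = []
    for r in records:
        if r.get("age") is None or r.get("income") is None:
            continue
        out.append({"age": float(r["age"]), "income": float(r["income"])})
    return out

def add_feature(records):
    """Feature engineering: income normalized by the cohort mean."""
    m = mean(r["income"] for r in records)
    for r in records:
        r["income_ratio"] = r["income"] / m
    return records

raw = [{"age": 34, "income": 50000},
       {"age": None, "income": 61000},   # dropped: missing age
       {"age": 29, "income": 70000}]
clean = add_feature(cleanse(raw))
print(len(clean), round(clean[1]["income_ratio"], 3))  # 2 1.167
```

In practice the same shape of pipeline would run over SQL extracts or Kafka topics rather than in-memory lists, but the cleanse-then-derive structure is the same.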
Skills:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting; other tech touch points are ScyllaDB (similar to Bigtable), OpenSearch, and the Neo4j graph database.
- Model Deployment and Monitoring: MLOps Experience in deploying ML models to production environments.
- Knowledge of model monitoring and performance evaluation.
Required experience:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline, with the ability to analyze gaps and recommend/implement improvements
- AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
- AWS data: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
Skills: AWS, AWS Cloud, Amazon Redshift, EKS
Must-Haves
Amazon SageMaker, AWS Cloud Infrastructure (S3, EC2, Lambda), Docker and Kubernetes (EKS, ECS), SQL, AWS data (Redshift, Glue)
Skills: Machine Learning, MLOps, AWS Cloud, Redshift OR Glue, Kubernetes, SageMaker
******
Notice period - 0 to 15 days only
Location : Pune & Hyderabad only
Company Overview:
Virtana delivers the industry’s only unified platform for Hybrid Cloud Performance, Capacity and Cost Management. Our platform provides unparalleled, real-time visibility into the performance, utilization, and cost of infrastructure across the hybrid cloud – empowering customers to manage their mission critical applications across physical, virtual, and cloud computing environments. Our SaaS platform allows organizations to easily manage and optimize their spend in the public cloud, assure resources are performing properly through real-time monitoring, and provide the unique ability to plan migrations across the hybrid cloud.
As we continue to expand our portfolio, we are seeking a highly skilled and hands-on Staff Software Engineer in backend technologies to contribute to the futuristic development of our sophisticated monitoring products.
Position Overview:
As a Staff Software Engineer specializing in backend technologies for Storage and Network monitoring in AI-enabled data centers as well as the cloud, you will play a critical role in designing, developing, and delivering high-quality features within aggressive timelines. Your expertise in microservices-based streaming architectures and strong hands-on development skills are essential to solve complex problems related to large-scale data processing. Proficiency in backend technologies such as Java and Python is crucial.
Work Location: Pune
Job Type: Hybrid
Key Responsibilities:
- Hands-on Development: Actively participate in the design, development, and delivery of high-quality features, demonstrating strong hands-on expertise in backend technologies like Java, Python, Go or related languages.
- Microservices and Streaming Architectures: Design and implement microservices-based streaming architectures to efficiently process and analyze large volumes of data, ensuring real-time insights and optimal performance.
- Agile Development: Collaborate within an agile development environment to deliver features on aggressive schedules, maintaining a high standard of quality in code, design, and architecture.
- Feature Ownership: Take ownership of features from inception to deployment, ensuring they meet product requirements and align with the overall product vision.
- Problem Solving and Optimization: Tackle complex technical challenges related to data processing, storage, and real-time monitoring, and optimize backend systems for high throughput and low latency.
- Code Reviews and Best Practices: Conduct code reviews, provide constructive feedback, and promote best practices to maintain a high-quality and maintainable codebase.
- Collaboration and Communication: Work closely with cross-functional teams, including UI/UX designers, product managers, and QA engineers, to ensure smooth integration and alignment with product goals.
- Documentation: Create and maintain technical documentation, including system architecture, design decisions, and API documentation, to facilitate knowledge sharing and onboarding.
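On the streaming side, one recurring problem the responsibilities above allude to (high throughput without unbounded memory growth) is backpressure. A toy bounded-buffer sketch, with illustrative sizes and a stand-in producer/consumer rather than any real pipeline:

```python
# Backpressure in miniature: a bounded queue forces a fast producer to block
# instead of exhausting memory when the consumer falls behind.

import queue
import threading

buf = queue.Queue(maxsize=8)   # bounded buffer: the backpressure mechanism
consumed = []

def producer(n: int):
    for i in range(n):
        buf.put(i)             # blocks when the buffer is full
    buf.put(None)              # sentinel: end of stream

def consumer():
    while True:
        item = buf.get()
        if item is None:
            break
        consumed.append(item)

t = threading.Thread(target=consumer)
t.start()
producer(100)                  # 100 items through an 8-slot buffer
t.join()
print(len(consumed))           # 100
```

Real streaming stacks (Kafka consumer groups, reactive streams) implement the same idea with bounded fetch windows and flow-control credits rather than an in-process queue.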
Qualifications:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- 8+ years of hands-on experience in backend development, demonstrating expertise in Java, Python or related technologies.
- Strong domain knowledge in Storage and Networking, with exposure to monitoring technologies and practices.
- Experience in handling large data lakes with purpose-built data stores (vector databases, NoSQL, graph, time-series).
- Practical knowledge of OO design patterns and frameworks like Spring and Hibernate.
- Extensive experience with cloud platforms such as AWS, Azure or GCP and development expertise on Kubernetes, Docker, etc.
- Solid experience designing and delivering features with high quality on aggressive schedules.
- Proven experience in microservices-based streaming architectures, particularly in handling large amounts of data for storage and networking monitoring.
- Familiarity with performance optimization techniques and principles for backend systems.
- Excellent problem-solving and critical-thinking abilities.
- Outstanding communication and collaboration skills.
Why Join Us:
- Opportunity to be a key contributor in the development of a leading performance monitoring company specializing in AI-powered Storage and Network monitoring.
- Collaborative and innovative work environment.
- Competitive salary and benefits package.
- Professional growth and development opportunities.
- Chance to work on cutting-edge technology and products that make a real impact.
If you are a hands-on technologist with a proven track record of designing and delivering high-quality features on aggressive schedules and possess strong expertise in microservices-based streaming architectures, we invite you to apply and help us redefine the future of performance monitoring.

Global Digital Transformation Solutions Provider
MUST-HAVES:
- Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
- Notice period - 0 to 15 days only
- Hybrid work mode- 3 days office, 2 days at home
SKILLS: AWS, AWS CLOUD, AMAZON REDSHIFT, EKS
ADDITIONAL GUIDELINES:
- Interview process: - 2 Technical round + 1 Client round
- 3 days in office, Hybrid model.
CORE RESPONSIBILITIES:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
- Model Development: Algorithms and architectures span traditional statistical methods to deep learning, including the use of LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
SKILLS:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting; other tech touch points are ScyllaDB (similar to Bigtable), OpenSearch, and the Neo4j graph database.
- Model Deployment and Monitoring: MLOps Experience in deploying ML models to production environments.
- Knowledge of model monitoring and performance evaluation.
REQUIRED EXPERIENCE:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline, with the ability to analyze gaps and recommend/implement improvements
- AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
- AWS data: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
JD for Cloud Engineer
Job Summary:
We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancing, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.
You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
- Architect, deploy, and maintain GCP cloud resources via Terraform or other automation.
- Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
- Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
- Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
2. Kubernetes & Container Orchestration
- Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
- Work with Helm charts for microservices deployments.
- Automate scaling, rolling updates, and zero-downtime deployments.
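The zero-downtime rolling updates mentioned above come down to two Deployment knobs, maxSurge and maxUnavailable. A small sketch of the arithmetic (Kubernetes rounds a percentage maxSurge up and maxUnavailable down, which is what keeps enough pods serving during a rollout):

```python
# Rolling-update bounds for a Kubernetes/GKE Deployment, given percentage
# maxSurge and maxUnavailable. Kubernetes rounds surge up and unavailable down.

import math

def rollout_bounds(replicas: int, max_surge_pct: int, max_unavailable_pct: int):
    surge = math.ceil(replicas * max_surge_pct / 100)              # extra pods allowed
    unavailable = math.floor(replicas * max_unavailable_pct / 100)  # pods that may be down
    return {
        "max_pods": replicas + surge,
        "min_ready_pods": replicas - unavailable,
    }

# Defaults (25% / 25%) on a 4-replica Deployment:
print(rollout_bounds(4, 25, 25))  # {'max_pods': 5, 'min_ready_pods': 3}
```

Note how the rounding directions interact: on a 3-replica Deployment with the same defaults, surge rounds 0.75 up to 1 and unavailable rounds it down to 0, so the rollout proceeds strictly by adding a new pod before removing an old one.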
3. Serverless & Compute Services
- Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
- Optimize containerized applications running on Cloud Run for cost efficiency and performance.
4. CI/CD & DevOps Automation
- Design, implement, and manage CI/CD pipelines using Azure DevOps.
- Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting
- Integrate security and compliance checks into the DevOps workflow (DevSecOps).
Required Skills & Qualifications:
✔ Experience: 8+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.

Global Digital Transformation Solutions Provider
Job Details
- Job Title: ML Engineer II - AWS, AWS Cloud
- Industry: Technology
- Domain - Information technology (IT)
- Experience Required: 6-12 years
- Employment Type: Full Time
- Job Location: Pune
- CTC Range: Best in Industry
Job Description:
Core Responsibilities:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
- Model Development: Algorithms and architectures span traditional statistical methods to deep learning, including the use of LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
Skills:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting; other tech touch points are ScyllaDB (similar to Bigtable), OpenSearch, and the Neo4j graph database.
- Model Deployment and Monitoring: MLOps experience in deploying ML models to production environments.
- Knowledge of model monitoring and performance evaluation.
Required experience:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline, with the ability to analyze gaps and recommend/implement improvements
- AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
- AWS data: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
Skills: AWS, AWS Cloud, Amazon Redshift, EKS
Must-Haves
AWS, AWS Cloud, Amazon Redshift, EKS
NP: Immediate – 30 Days
Position-Tech Lead
Experience: 8-10 Years
Job Location: Pune
We are seeking a highly skilled Tech Lead with strong expertise in Java, microservices architecture, and cloud-native application development. The ideal candidate will bring hands-on leadership experience in designing scalable solutions, guiding development teams, and collaborating with DevOps engineers on OpenShift (OCP) platforms. This role requires a blend of technical leadership, solution design, and delivery ownership.
Key Responsibilities
Lead the design and development of Java / Spring Boot based microservices in a cloud-native environment.
Provide technical leadership to a team of developers, ensuring adherence to coding, security, and architectural best practices.
Collaborate with architects and DevOps engineers to deploy and manage microservices on Red Hat OpenShift (OCP).
Oversee end-to-end delivery including requirement analysis, design, development, code review, testing, and deployment.
Define and implement API specifications, integration patterns, and microservices orchestration.
Work closely with DevOps teams to integrate CI/CD pipelines, containerized deployments, Helm, and GitOps workflows.
Ensure application performance, scalability, and reliability with proactive observability practices (Grafana, Prometheus, etc.).
Required Skills & Qualifications
8-10 years of proven experience in Java application development with at least 4+ years in microservices architecture.
Strong expertise in Spring Boot, REST APIs, JPA/Hibernate, and messaging frameworks (Kafka, RabbitMQ, etc.).
Hands-on experience with containerization (Docker) and orchestration (OpenShift/Kubernetes).
Familiarity with OCP DevOps practices including CI/CD (ArgoCD, Tekton, Jenkins), Helm, and YAML deployments.
Good understanding of observability stacks (Grafana, Prometheus, Loki, Alertmanager) and logging practices.
Solid knowledge of cloud-native design principles, scalability, and fault tolerance.
Exposure to security best practices (OAuth, RBAC, secrets management via Vault or similar).

GLOBAL DIGITAL TRANSFORMATION SOLUTIONS PROVIDER
Job Position: Lead II - Software Engineering
Domain: Information technology (IT)
Location: India - Thiruvananthapuram
Salary: Best in Industry
Job Positions: 1
Experience: 8 - 12 Years
Skills: .NET, Azure SQL, REST API, Vue.js
Notice Period: Immediate – 30 Days
Job Summary:
We are looking for a highly skilled Senior .NET Developer with a minimum of 7 years of experience across the full software development lifecycle, including post-live support. The ideal candidate will have a strong background in .NET backend API development, Agile methodologies, and Cloud infrastructure (preferably Azure). You will play a key role in solution design, development, DevOps pipeline enhancement, and mentoring junior engineers.
Key Responsibilities:
- Design, develop, and maintain scalable and secure .NET backend APIs.
- Collaborate with product owners and stakeholders to understand requirements and translate them into technical solutions.
- Lead and contribute to Agile software delivery processes (Scrum, Kanban).
- Develop and improve CI/CD pipelines and support release cadence targets, using Infrastructure as Code tools (e.g., Terraform).
- Provide post-live support, troubleshooting, and issue resolution as part of full lifecycle responsibilities.
- Implement unit and integration testing to ensure code quality and system stability.
- Work closely with DevOps and cloud engineering teams to manage deployments on Azure (Web Apps, Container Apps, Functions, SQL).
- Contribute to front-end components when necessary, leveraging HTML, CSS, and JavaScript UI frameworks.
- Mentor and coach engineers within a co-located or distributed team environment.
- Maintain best practices in code versioning, testing, and documentation.
Mandatory Skills:
- 7+ years of .NET development experience, including API design and development
- Strong experience with Azure Cloud services, including:
  - Web/Container Apps
  - Azure Functions
  - Azure SQL Server
- Solid understanding of Agile development methodologies (Scrum/Kanban)
- Experience in CI/CD pipeline design and implementation
- Proficient in Infrastructure as Code (IaC) – preferably Terraform
- Strong knowledge of RESTful services and JSON-based APIs
- Experience with unit and integration testing techniques
- Source control using Git
- Strong understanding of HTML, CSS, and cross-browser compatibility
Good-to-Have Skills:
- Experience with Kubernetes and Docker
- Knowledge of JavaScript UI frameworks, ideally Vue.js
- Familiarity with JIRA and Agile project tracking tools
- Exposure to Database as a Service (DBaaS) and Platform as a Service (PaaS) concepts
- Experience mentoring or coaching junior developers
- Strong problem-solving and communication skills
Dear Candidate,
Greetings from Wissen Technology.
We have an exciting Job opportunity for GCP SRE Engineer Professionals. Please refer to the Job Description below and share your profile if interested.
About Wissen Technology:
- The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
- Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
- Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.
- Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.
- Globally present with offices in the US, India, UK, Australia, Mexico, and Canada.
- We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
- Wissen Technology has been certified as a Great Place to Work®.
- Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
- Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
- The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.
We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy, including the likes of Morgan Stanley, MSCI, State Street Corporation, Flipkart, Swiggy, Trafigura, and GE, to name a few.
Job Description:
Please find below details:
Experience - 4+ Years
Location- Bangalore/Mumbai/Pune
Team Responsibilities
The successful candidate will be part of the S&C – SRE Team. Our team provides tier 2/3 support to the S&C business. This position involves collaboration with client-facing teams like Client Services, Product, and Research, as well as Infrastructure/Technology and application development teams, to perform environment and application maintenance and support.
Key Responsibilities
• Provide Tier 2/3 product technical support.
• Building software to help operations and support activities.
• Manage system/software configurations and troubleshoot environment issues.
• Identify opportunities for optimizing system performance through changes in configuration or suggestions for development.
• Plan, document, and deploy software applications on our Unix/Linux, Azure, and GCP based systems.
• Collaborate with development and software testing teams throughout the release process.
• Analyze release and deployment processes to identify key areas for automation and optimization.
• Manage hardware and software resources and coordinate maintenance and planned downtimes with the infrastructure group across all environments (Production / Non-Production).
• Must spend a minimum of one week per month on call to help with off-hour emergencies and maintenance activities.
Required skills and experience
• Bachelor's degree in Computer Science, Engineering, or a similar concentration (BE/MCA)
• Master's degree is a plus
• 6-8 years' experience in Production Support / Application Management / Application Development (support/maintenance) roles
• Excellent problem-solving/troubleshooting skills; a fast learner
• Strong knowledge of Unix administration
• Strong scripting skills in Shell, Python, and Batch are a must
• Strong database experience – Oracle
• Strong knowledge of the Software Development Life Cycle
• PowerShell is nice to have
• Software development skills in Java or Ruby
• Experience with any of the cloud platforms – GCP/Azure/AWS is nice to have
Key Responsibilities
Test Architecture & Design
- Architect test frameworks and infrastructure to validate microservices and distributed systems in multi-cluster, hybrid-cloud environments.
- Design complex test scenarios simulating production-like workloads, scaling, failure injection, and recovery.
- Ensure reliability, scalability, and maintainability of test systems.
Automation & Scalability
- Drive test automation integrated with CI/CD pipelines (e.g., Jenkins, GitHub Actions).
- Leverage Kubernetes APIs, Helm, and service meshes (Istio/Linkerd) for automation coverage of health, failover, and network resilience.
- Implement Infrastructure-as-Code (IaC) practices for test infrastructure to ensure repeatability and extensibility.
Technical Expertise
- Deep knowledge of Kubernetes internals, cluster lifecycle management, Helm, service meshes, and network policies.
- Strong scripting and automation skills with Python, Pytest, and Bash.
- Hands-on with observability stacks (Prometheus, Grafana, Jaeger) and performance benchmarking tools (e.g., K6).
- Experience with cloud platforms (AWS, Azure, GCP) and containerized CI/CD.
- Solid Linux proficiency: Bash scripting, debugging, networking, PKI management, Docker/containerd, GitOps/Flux, kubectl/Helm, troubleshooting multi-cluster environments.
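As a flavor of the failure-injection testing described above, here is a minimal Python harness; `FlakyService` is a hypothetical stand-in for a real microservice endpoint, not part of any framework named in the posting, and the same assertions would normally live in a Pytest suite:

```python
# Failure-injection sketch: call a flaky service with bounded retries and
# verify both the recovery path and the give-up path.

class FlakyService:
    """Stand-in for a microservice that fails N times before succeeding."""
    def __init__(self, failures_before_success: int):
        self.remaining_failures = failures_before_success
        self.calls = 0

    def call(self) -> str:
        self.calls += 1
        if self.remaining_failures > 0:
            self.remaining_failures -= 1
            raise ConnectionError("injected failure")
        return "ok"

def call_with_retries(service, max_attempts: int = 3) -> str:
    """Retry the service up to max_attempts times, re-raising the last error."""
    last_err = None
    for _ in range(max_attempts):
        try:
            return service.call()
        except ConnectionError as err:
            last_err = err
    raise last_err

svc = FlakyService(failures_before_success=2)
assert call_with_retries(svc) == "ok" and svc.calls == 3
```

A production version would add exponential backoff and jitter between attempts; the test structure (inject N failures, assert on both outcome and call count) stays the same.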
Required Skills & Qualifications
- 6+ years in QA, Test Automation, or related engineering roles.
- Proven experience in architecting test frameworks for distributed/cloud-native systems.
- Expertise in Kubernetes, Helm, CI/CD, and cloud platforms (AWS/Azure/GCP).
- Strong Linux fundamentals with scripting and system debugging skills.
- Excellent problem-solving, troubleshooting, and technical leadership abilities.
Job Title : Senior QA Automation Architect (Cloud & Kubernetes)
Experience : 6+ Years
Location : India (Multiple Offices)
Shift Timings : 12 PM to 9 PM (Noon Shift)
Working Days : 5 Days WFO (NO Hybrid)
About the Role :
We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.
You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.
Key Responsibilities :
- Architect and maintain test automation frameworks for microservices.
- Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
- Ensure reliability, scalability, and observability of test systems.
- Work closely with DevOps and Cloud teams to streamline automation infrastructure.
Mandatory Skills :
- Kubernetes, Helm, Docker, Linux
- Cloud Platforms : AWS / Azure / GCP
- CI/CD Tools : Jenkins, GitHub Actions
- Scripting : Python, Pytest, Bash
- Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
- IaC Practices : Terraform / Ansible
Good to Have :
- Experience with Service Mesh (Istio/Linkerd).
- Container Security or DevSecOps exposure.
About the Role
We’re looking for a passionate Fullstack Product Engineer with a strong JavaScript foundation to work on a high-impact, scalable product. You’ll collaborate closely with product and engineering teams to build intuitive UIs and performant backends using modern technologies.
Responsibilities
- Build and maintain scalable features across the frontend and backend.
- Work with tech stacks like Node.js, React.js, Vue.js, and others.
- Contribute to system design, architecture, and code quality enforcement.
- Follow modern engineering practices including TDD, CI/CD, and live coding evaluations.
- Collaborate in code reviews, performance optimizations, and product iterations.
Required Skills
- 4–6 years of hands-on fullstack development experience.
- Strong command over JavaScript, Node.js, and React.js.
- Solid understanding of REST APIs and/or GraphQL.
- Good grasp of OOP principles, TDD, and writing clean, maintainable code.
- Experience with CI/CD tools like GitHub Actions, GitLab CI, Jenkins, etc.
- Familiarity with HTML, CSS, and frontend performance optimization.
Good to Have
- Exposure to Docker, AWS, Kubernetes, or Terraform.
- Experience in other backend languages or frameworks.
- Experience with microservices and scalable system architectures.
Job Description
We are seeking a highly skilled DevOps / Kubernetes Engineer. The ideal candidate will have strong expertise in container orchestration, infrastructure as code, and GitOps workflows, with hands-on experience in Azure cloud environments. You will be responsible for designing, deploying, and managing modern cloud-native infrastructure and applications at scale.
Key Responsibilities:
· Manage and operate Kubernetes clusters (AKS / K3s) for large-scale applications.
· Implement infrastructure as code using Terraform or OpenTofu for scalable, reliable, and secure infrastructure provisioning.
· Deploy and manage applications using Helm and ArgoCD with GitOps best practices.
· Work with Podman and Docker as container runtimes for development and production environments.
· Collaborate with cross-functional teams to ensure smooth deployment pipelines and CI/CD integrations.
· Optimize infrastructure for cost, performance, and reliability within Azure cloud.
· Troubleshoot, monitor, and maintain system health, scalability, and performance.
Required Skills & Experience:
· Strong hands-on experience with Kubernetes (AKS / K3s) cluster orchestration.
· Proficiency in Terraform or OpenTofu for infrastructure as code.
· Experience with Helm and ArgoCD for application deployment and GitOps.
· Solid understanding of Docker / Podman container runtimes.
· Cloud expertise in Azure with experience deploying and scaling workloads.
· Familiarity with CI/CD pipelines, monitoring, and logging frameworks.
· Knowledge of best practices around cloud security, scalability, and high availability.
Preferred Qualifications:
· Contributions to open-source projects under Apache 2.0 / MPL 2.0 licenses.
· Experience working in global distributed teams across CST/PST time zones.
· Strong problem-solving skills and ability to work independently in a fast-paced environment.
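As a flavor of the cluster-troubleshooting work described above, here is a small, self-contained Python sketch that parses the JSON `kubectl get nodes -o json` emits and flags nodes whose Ready condition is not True. The sample document is hand-written for illustration; a real script would read kubectl's actual output:

```python
import json

def not_ready_nodes(nodes_doc):
    """Return names of nodes whose Ready condition is missing or not 'True'."""
    bad = []
    for item in nodes_doc.get("items", []):
        name = item["metadata"]["name"]
        conditions = item["status"].get("conditions", [])
        ready = next((c for c in conditions if c["type"] == "Ready"), None)
        if ready is None or ready["status"] != "True":
            bad.append(name)
    return bad

# Hand-written sample shaped like the Kubernetes NodeList API object.
sample = json.loads("""
{"items": [
  {"metadata": {"name": "node-a"},
   "status": {"conditions": [{"type": "Ready", "status": "True"}]}},
  {"metadata": {"name": "node-b"},
   "status": {"conditions": [{"type": "Ready", "status": "False"}]}}
]}
""")

print(not_ready_nodes(sample))  # -> ['node-b']
```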
Job Summary:
We are seeking a highly skilled and proactive DevOps Engineer with 4+ years of experience to join our dynamic team. This role requires strong technical expertise across cloud infrastructure, CI/CD pipelines, container orchestration, and infrastructure as code (IaC). The ideal candidate should also have direct client-facing experience and a proactive approach to managing both internal and external stakeholders.
Key Responsibilities:
- Collaborate with cross-functional teams and external clients to understand infrastructure requirements and implement DevOps best practices.
- Design, build, and maintain scalable cloud infrastructure on AWS (EC2, S3, RDS, ECS, etc.).
- Develop and manage infrastructure using Terraform or CloudFormation.
- Manage and orchestrate containers using Docker and Kubernetes (EKS).
- Implement and maintain CI/CD pipelines using Jenkins or GitHub Actions.
- Write robust automation scripts using Python and Shell scripting.
- Monitor system performance and availability, and ensure high uptime and reliability.
- Execute and optimize SQL queries for MSSQL and PostgreSQL databases.
- Maintain clear documentation and provide technical support to stakeholders and clients.
Required Skills:
- Minimum 4+ years of experience in a DevOps or related role.
- Proven experience in client-facing engagements and communication.
- Strong knowledge of AWS services – EC2, S3, RDS, ECS, etc.
- Proficiency in Infrastructure as Code using Terraform or CloudFormation.
- Hands-on experience with Docker and Kubernetes (EKS).
- Strong experience in setting up and maintaining CI/CD pipelines with Jenkins or GitHub.
- Solid understanding of SQL and working experience with MSSQL and PostgreSQL.
- Proficient in Python and Shell scripting.
Preferred Qualifications:
AWS Certifications (e.g., AWS Certified DevOps Engineer) are a plus.
Experience working in Agile/Scrum environments.
Strong problem-solving and analytical skills.
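The "robust automation scripts using Python" duty above often means small policy helpers like this hedged sketch: a snapshot-retention rule that keeps the last N daily snapshots plus anything taken on the first of a month. The policy itself is invented for illustration, not taken from any real backup system:

```python
from datetime import date, timedelta

def snapshots_to_delete(snapshot_dates, today, keep_daily=7):
    """Return snapshot dates eligible for deletion.

    Keeps everything within the last `keep_daily` days, and always keeps
    month-start snapshots (day == 1) as coarse long-term history.
    """
    cutoff = today - timedelta(days=keep_daily)
    return sorted(d for d in snapshot_dates if d < cutoff and d.day != 1)

today = date(2024, 6, 15)
snaps = [date(2024, 6, d) for d in (1, 5, 9, 12, 14)]
print(snapshots_to_delete(snaps, today))  # -> [datetime.date(2024, 6, 5)]
```

In production such a helper would feed deletion candidates into an AWS API call (e.g., against EBS snapshots), with a dry-run flag before anything destructive.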
🚀 We’re Hiring: Senior Python Backend Developer 🚀
📍 Location: Baner, Pune (Work from Office)
💰 Compensation: ₹6 LPA
🕑 Experience Required: Minimum 2 years as a Python Backend Developer
About Us
Foto Owl AI is a fast-growing product-based company headquartered in Baner, Pune.
We specialize in:
⚡ Hyper-personalized fan engagement
🤖 AI-powered real-time photo sharing
📸 Advanced media asset management
What You’ll Do
As a Senior Python Backend Developer, you’ll play a key role in designing, building, and deploying scalable backend systems that power our cutting-edge platforms.
Architect and develop complex, secure, and scalable backend services
Build and maintain APIs & data pipelines for web, mobile, and AI-driven platforms
Optimize SQL & NoSQL databases for high performance
Manage AWS infrastructure (EC2, S3, RDS, Lambda, CloudWatch, etc.)
Implement observability, monitoring, and security best practices
Collaborate cross-functionally with product & AI teams
Mentor junior developers and conduct code reviews
Troubleshoot and resolve production issues with efficiency
What We’re Looking For
✅ Strong expertise in Python backend development
✅ Solid knowledge of Data Structures & Algorithms
✅ Hands-on experience with SQL (PostgreSQL/MySQL) and NoSQL (MongoDB, Redis, etc.)
✅ Proficiency in RESTful APIs & Microservice design
✅ Knowledge of Docker, Kubernetes, and cloud-native systems
✅ Experience managing AWS-based deployments
Why Join Us?
At Foto Owl AI, you’ll be part of a passionate team building world-class media tech products used in sports, events, and fan engagement platforms. If you love scalable backend systems, real-time challenges, and AI-driven products, this is the place for you.
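Scalable APIs of the kind this role describes commonly use cursor-based pagination rather than page numbers. The sketch below encodes the last-seen id as an opaque cursor; it is a simplified illustration (real services usually sign or encrypt cursors, and query the database instead of filtering a list):

```python
import base64

def encode_cursor(last_id):
    """Wrap the last-seen id in an opaque, URL-safe token."""
    return base64.urlsafe_b64encode(str(last_id).encode()).decode()

def decode_cursor(cursor):
    return int(base64.urlsafe_b64decode(cursor.encode()).decode())

def page(items, limit, cursor=None):
    """Return (window, next_cursor) for items with ids greater than the cursor."""
    start_after = decode_cursor(cursor) if cursor else 0
    window = [i for i in items if i > start_after][:limit]
    next_cursor = encode_cursor(window[-1]) if len(window) == limit else None
    return window, next_cursor

items = list(range(1, 8))          # ids 1..7
first, cur = page(items, 3)        # -> [1, 2, 3]
second, cur = page(items, 3, cur)  # -> [4, 5, 6]
print(first, second)
```

Cursors stay stable when new rows are inserted mid-scroll, which is why they scale better than OFFSET-based paging on large tables.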

A domestic client, established 15 years, in the logitech industry
Responsibilities:
- Work with product owners, managers, and customers to explore requirements and translate use-cases into functional requirements.
- Collaborate with cross-functional teams and architects to design, develop, test, and deploy web applications using ASP.NET Core, .NET Core, and C#.
- Build scalable, reliable, clean code and unit tests for .NET applications.
- Help maintain code quality, organization, and automation by performing code reviews, refactoring, and unit testing.
- Develop integration with third-party APIs and external applications to deliver robust and scalable applications.
- Maintain services, enhance, optimize, and upgrade existing systems.
- Contribute to architectural and design discussions and document design decisions.
- Effectively participate in planning meetings, retrospectives, daily stand-ups, and other meetings as part of the software development process.
- Contribute to the continuous improvement of development processes and practices.
- Resolve production issues, participate in production incident analysis by conducting effective troubleshooting and RCA within the SLA.
- Work with Operations teams on product deployment, issue resolution, and support.
- Mentor junior developers and assist in their professional growth. Stay updated with the latest technologies and best practices.
Requirements:
- 5+ years of experience with proficiency in C# language.
- Bachelor's or master's degree in computer science or a related field.
- Good working experience in .NET Framework, .NET Core, ASP.NET Core, and C#.
- Good understanding of OOP and design patterns - SOLID, Integration, REST, Micro-services, and cloud-native designs.
- Understanding of fundamental design principles behind building and scaling distributed applications.
- Knack for writing clean, readable, reusable, and testable C# code.
- Strong knowledge of data structures and collections in C#.
- Good knowledge of front-end development languages, including JavaScript, HTML5 and CSS.
- Experience in designing relational DB schemas and tuning PL/SQL query performance.
- Experience in working in an Agile environment following Scrum/SAFe methodologies.
- Knowledge of CI/CD, DevOps, containers, and automation frameworks.
- Experience in developing and deploying on at least one cloud environment.
- Excellent problem-solving, communication, and collaboration skills.
- Ability to work independently and effectively in a fast-paced environment.
🚀 Hiring: .NET Full Stack Developer at Deqode
⭐ Experience: 8+ Years
📍 Location: Bangalore | Mumbai | Pune | Gurgaon | Chennai | Hyderabad
⭐ Work Mode:- Hybrid
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
We’re looking for an experienced Dotnet Full Stack Developer with strong hands-on skills in ReactJS, .NET Core, and Azure Cloud Services (Azure Functions, Azure SQL, APIM, etc.).
⭐ Must-Have Skills:-
➡️ Design and develop scalable web applications using ReactJS, C#, and .NET Core.
➡️Azure (Functions, App Services, SQL, APIM, Service Bus)
➡️Familiarity with DevOps practices, CI/CD pipelines, Docker, and Kubernetes.
➡️Advanced experience in Entity Framework Core and SQL Server.
➡️Expertise in RESTful API development and microservices.
Job Summary:
We are seeking a highly skilled and proactive DevOps Engineer with 4+ years of experience to join our dynamic team. This role requires strong technical expertise across cloud infrastructure, CI/CD pipelines, container orchestration, and infrastructure as code (IaC). The ideal candidate should also have direct client-facing experience and a proactive approach to managing both internal and external stakeholders.
Key Responsibilities:
- Collaborate with cross-functional teams and external clients to understand infrastructure requirements and implement DevOps best practices.
- Design, build, and maintain scalable cloud infrastructure on AWS (EC2, S3, RDS, ECS, etc.).
- Develop and manage infrastructure using Terraform or CloudFormation.
- Manage and orchestrate containers using Docker and Kubernetes (EKS).
- Implement and maintain CI/CD pipelines using Jenkins or GitHub Actions.
- Write robust automation scripts using Python and Shell scripting.
- Monitor system performance and availability, and ensure high uptime and reliability.
- Execute and optimize SQL queries for MSSQL and PostgreSQL databases.
- Maintain clear documentation and provide technical support to stakeholders and clients.
Required Skills:
- Minimum 4+ years of experience in a DevOps or related role.
- Proven experience in client-facing engagements and communication.
- Strong knowledge of AWS services – EC2, S3, RDS, ECS, etc.
- Proficiency in Infrastructure as Code using Terraform or CloudFormation.
- Hands-on experience with Docker and Kubernetes (EKS).
- Strong experience in setting up and maintaining CI/CD pipelines with Jenkins or GitHub.
- Solid understanding of SQL and working experience with MSSQL and PostgreSQL.
- Proficient in Python and Shell scripting.
Preferred Qualifications:
- AWS Certifications (e.g., AWS Certified DevOps Engineer) are a plus.
- Experience working in Agile/Scrum environments.
- Strong problem-solving and analytical skills.
Work Mode & Timing:
- Hybrid – Pune-based candidates preferred.
- Working hours: 12:30 PM to 9:30 PM IST to align with client time zones.
Job Title : Senior Python Developer – Product Engineering
Experience : 5 to 8 Years
Location : Pune, India (Hybrid – 3-4 days WFO, 1-2 days WFH)
Employment Type : Full-time
Commitment : Minimum 3 years (with end-of-term bonus)
Openings : 2 positions
- Junior : 3 to 5 Years
- Senior : 5 to 8 Years
Mandatory Skills : Python 3.x, REST APIs, multithreading, Celery, encryption (OpenSSL/cryptography.io), PostgreSQL/Redis, Docker/K8s, secure coding
Nice to Have : Experience with EFSS/DRM/DLP platforms, delta sync, file systems, LDAP/AD/SIEM integrations
🎯 Roles & Responsibilities :
- Design and develop backend services for DRM enforcement, file synchronization, and endpoint telemetry.
- Build scalable Python-based APIs interacting with file systems, agents, and enterprise infra.
- Implement encryption workflows, secure file handling, delta sync, and file versioning.
- Integrate with 3rd-party platforms: LDAP, AD, DLP, CASB, SIEM.
- Collaborate with DevOps to ensure high availability and performance of hybrid deployments.
- Participate in code reviews, architectural discussions, and mentor junior developers.
- Troubleshoot production issues and continuously optimize performance.
✅ Required Skills :
- 5 to 8 years of hands-on experience in Python 3.x development.
- Expertise in REST APIs, Celery, multithreading, and file I/O.
- Proficient in encryption libraries (OpenSSL, cryptography.io) and secure coding.
- Experience with PostgreSQL, Redis, SQLite, and Linux internals.
- Strong command over Docker, Kubernetes, CI/CD, and Git workflows.
- Ability to write clean, testable, and scalable code in production environments.
➕ Preferred Skills :
- Background in DRM, EFSS, DLP, or enterprise security platforms.
- Familiarity with file diffing, watermarking, or agent-based tools.
- Knowledge of compliance frameworks (GDPR, DPDP, RBI-CSF) is a plus.
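The "delta sync" skill listed above can be sketched with fixed-size block hashing: only blocks whose hash changed need to be transferred. Real rsync-style implementations add rolling hashes so insertions don't shift every subsequent block; this simplified version assumes aligned blocks:

```python
import hashlib

def block_hashes(data, block_size=4):
    """SHA-256 digest of each fixed-size block of `data`."""
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def changed_blocks(old, new, block_size=4):
    """Indices of blocks in `new` that differ from (or don't exist in) `old`."""
    old_h = block_hashes(old, block_size)
    new_h = block_hashes(new, block_size)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]

old = b"aaaabbbbcccc"
new = b"aaaaXXXXcccc"
print(changed_blocks(old, new))  # -> [1], only the middle block changed
```

A sync agent would then upload just those block indices plus a manifest, which is the core bandwidth win in EFSS-style file synchronization.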
- 5+ years of experience
- Flask / REST API development experience
- Proficiency in Python programming.
- Basic knowledge of front-end development.
- Basic knowledge of data manipulation and analysis libraries
- Code versioning and collaboration (Git)
- Knowledge of libraries for extracting data from websites (web scraping)
- Knowledge of SQL and NoSQL databases
- Familiarity with RESTful APIs
- Familiarity with Cloud (Azure /AWS) technologies
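For the "libraries for extracting data from websites" item above, here is a stdlib-only sketch using `html.parser`. Production scrapers would typically fetch pages with an HTTP client and parse with a library like BeautifulSoup; the inline HTML below is a made-up sample so the example is self-contained:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag encountered while parsing."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<ul><li><a href="/jobs/1">DevOps</a></li><li><a href="/jobs/2">QA</a></li></ul>'
parser = LinkCollector()
parser.feed(page)
print(parser.links)  # -> ['/jobs/1', '/jobs/2']
```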
What You’ll Do:
We’re looking for a skilled DevOps Engineer to help us build and maintain reliable, secure, and scalable infrastructure. You will work closely with our development, product, and security teams to streamline deployments, improve performance, and ensure cloud infrastructure resilience.
Responsibilities:
● Deploy, manage, and monitor infrastructure on Google Cloud Platform (GCP)
● Build CI/CD pipelines using Jenkins and integrate them with Git workflows
● Design and manage Kubernetes clusters and helm-based deployments
● Manage infrastructure as code using Terraform
● Set up logging, monitoring, and alerting (Stackdriver, Prometheus, Grafana)
● Ensure security best practices across cloud resources, networks, and secrets
● Automate repetitive operations and improve system reliability
● Collaborate with developers to troubleshoot and resolve issues in staging/production environments
What We’re Looking For:
Required Skills:
● 1–3 years of hands-on experience in a DevOps or SRE role
● Strong knowledge of GCP services (IAM, GKE, Cloud Run, VPC, Cloud Build, etc.)
● Proficiency in Kubernetes (deployment, scaling, troubleshooting)
● Experience with Terraform for infrastructure provisioning
● CI/CD pipeline setup using Jenkins, GitHub Actions, or similar tools
● Understanding of DevSecOps principles and cloud security practices
● Good command over Linux, shell scripting, and basic networking concepts
Nice to have:
● Experience with Docker, Helm, ArgoCD
● Exposure to other cloud platforms (AWS, Azure)
● Familiarity with incident response and disaster recovery planning
● Knowledge of logging and monitoring tools like ELK, Prometheus, Grafana
Job Title: Java Full Stack Developer
Experience: 6+ Years
Locations: Bangalore, Mumbai, Pune, Gurgaon
Work Mode: Hybrid
Notice Period: Immediate Joiners Preferred / Candidates Who Have Completed Their Notice Period
About the Role
We are looking for a highly skilled and experienced Java Full Stack Developer with a strong command over backend technologies and modern frontend frameworks. The ideal candidate will have deep experience with Java, ReactJS, and DevOps tools like Jenkins, Docker, and basic Kubernetes knowledge. You’ll be contributing to complex software solutions across industries, collaborating with cross-functional teams, and deploying production-grade systems in a cloud-native, CI/CD-driven environment.
Key Responsibilities
- Design and develop scalable web applications using Java (Spring Boot) and ReactJS
- Collaborate with UX/UI designers and backend developers to implement robust, efficient front-end interfaces
- Develop and maintain CI/CD pipelines using Jenkins, ensuring high-quality software delivery
- Containerize applications using Docker and ensure smooth deployment and orchestration using Kubernetes (basic level)
- Write clean, modular, and testable code and participate in code reviews
- Troubleshoot and resolve performance, reliability, and functional issues in production
- Work in Agile teams and participate in daily stand-ups, sprint planning, and retrospective meetings
- Ensure all security, compliance, and performance standards are met in the development lifecycle
Mandatory Skills
- Backend: Java, Spring Boot
- Frontend: ReactJS
- DevOps Tools: Jenkins, Docker
- Containers & Orchestration: Basic knowledge of Kubernetes
- Strong understanding of RESTful services and APIs
- Familiarity with Git and version control workflows
- Good understanding of SDLC, Agile/Scrum methodologies
We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancing, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.
You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
· Architect, deploy, and maintain GCP cloud resources using Terraform or other automation tooling.
· Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
· Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
· Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
2. Kubernetes & Container Orchestration
· Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
· Work with Helm charts, Istio, and service meshes for microservices deployments.
· Automate scaling, rolling updates, and zero-downtime deployments.
3. Serverless & Compute Services
· Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
· Optimize containerized applications running on Cloud Run for cost efficiency and performance.
4. CI/CD & DevOps Automation
· Design, implement, and manage CI/CD pipelines using Azure DevOps.
· Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.
· Integrate security and compliance checks into the DevOps workflow (DevSecOps).
Required Skills & Qualifications:
✔ Experience: 8+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.
Job Title : Senior Consultant (Java / NodeJS + Temporal)
Experience : 5 to 12 Years
Location : Bengaluru, Chennai, Hyderabad, Pune, Mumbai, Gurugram, Coimbatore
Work Mode : Remote (Must be open to travel for occasional team meetups)
Notice Period : Immediate Joiners or Serving Notice
Interview Process :
- R1 : Tech Interview (60 mins)
- R2 : Technical Interview
- R3 : (Optional) Interview with Client
Job Summary :
We are seeking a Senior Backend Consultant with strong hands-on expertise in Temporal (BPM/Workflow Engine) and either Node.js or Java.
The ideal candidate will have experience in designing and developing microservices and process-driven applications, as well as orchestrating complex workflows using Temporal.io.
You will work on high-scale systems, collaborating closely with cross-functional teams.
Mandatory Skills :
Temporal.io, Node.js (or Java), React.js, Keycloak IAM, PostgreSQL, Terraform, Kubernetes, Azure, Jest, OpenAPI
Key Responsibilities :
- Design and implement scalable backend services using Node.js or Java.
- Build and manage complex workflow orchestrations using Temporal.io.
- Integrate with IAM solutions like Keycloak for role-based access control.
- Work with React (v17+), TypeScript, and component-driven frontend design.
- Use PostgreSQL for structured data persistence and optimized queries.
- Manage infrastructure using Terraform and orchestrate via Kubernetes.
- Leverage Azure Services like Blob Storage, API Gateway, and AKS.
- Write and maintain API documentation using Swagger/Postman/Insomnia.
- Conduct unit and integration testing using Jest.
- Participate in code reviews and contribute to architectural decisions.
Must-Have Skills :
- Temporal.io – BPMN modeling, external task workers, Operate, Tasklist
- Node.js + TypeScript (preferred) or strong Java experience
- React.js (v17+) and component-driven UI development
- Keycloak IAM, PostgreSQL, and modern API design
- Infrastructure automation with Terraform, Kubernetes
- Experience in using GitFlow, OpenAPI, Jest for testing
Nice-to-Have Skills :
- Blockchain integration experience for secure KYC/identity flows
- Custom Camunda Connectors or exporter plugin development
- CI/CD experience using Azure DevOps or GitHub Actions
- Identity-based task completion authorization enforcement
Minimum requirements
5+ years of industry software engineering experience (excluding internships and co-ops)
Strong coding skills in any programming language (we understand new languages can be learned on the job so our interview process is language agnostic)
Strong collaboration skills, can work across workstreams within your team and contribute to your peers’ success
Ability to thrive with a high level of autonomy and responsibility, with an entrepreneurial mindset
Interest in working as a generalist across varying technologies and stacks to solve problems and delight both internal and external users
Preferred Qualifications
Experience with large-scale financial tracking systems
Good understanding and practical knowledge of cloud-based services and related technologies (e.g., gRPC, GraphQL, Docker/Kubernetes, and cloud providers such as AWS)
Required Skills:
- Experience in systems administration, SRE or DevOps focused role
- Experience in handling production support (on-call)
- Good understanding of the Linux operating system and networking concepts.
- Demonstrated competency with the following AWS services: ECS, EC2, EBS, EKS, S3, RDS, ELB, IAM, Lambda.
- Experience with Docker containers and containerization concepts
- Experience with managing and scaling Kubernetes clusters in a production environment
- Experience building scalable infrastructure in AWS with Terraform.
- Strong knowledge of protocols such as HTTP/HTTPS, SMTP, DNS, and LDAP
- Experience monitoring production systems
- Expertise in applying automation/DevOps principles and operational tooling, with best practices for infrastructure and software deployment (e.g., Ansible).
- HAProxy, Nginx, SSH, MySQL configuration and operation experience
- Ability to work seamlessly with software developers, QA, project managers, and business development
- Ability to produce and maintain written documentation
Job Title : Lead Java Developer (Backend)
Experience Required : 8 to 15 Years
Open Positions : 5
Location : Any major metro city (Bengaluru, Pune, Chennai, Kolkata, Hyderabad)
Work Mode : Open to Remote / Hybrid / Onsite
Notice Period : Immediate Joiner/30 Days or Less
About the Role :
- We are looking for experienced Lead Java Developers who bring not only strong backend development skills but also a product-oriented mindset and leadership capability.
- This is an opportunity to be part of high-impact digital transformation initiatives that go beyond writing code—you’ll help shape future-ready platforms and drive meaningful change.
- This role is embedded within a forward-thinking digital engineering team that thrives on co-innovation, lean delivery, and end-to-end ownership of platforms and products.
Key Responsibilities :
- Design, develop, and implement scalable backend systems using Java and Spring Boot.
- Collaborate with product managers, designers, and engineers to build intuitive and reliable digital products.
- Advocate and implement engineering best practices : SOLID principles, OOP, clean code, CI/CD, TDD/BDD.
- Lead Agile-based development cycles with a focus on speed, quality, and customer outcomes.
- Guide and mentor team members, fostering technical excellence and ownership.
- Utilize cloud platforms and DevOps tools to ensure performance and reliability of applications.
What We’re Looking For :
- Proven experience in Java backend development (Spring Boot, Microservices).
- 8+ Years of hands-on engineering experience with at least 2+ years in a Lead role.
- Familiarity with cloud platforms such as AWS, Azure, or GCP.
- Good understanding of containerization and orchestration tools like Docker and Kubernetes.
- Exposure to DevOps and Infrastructure as Code practices.
- Strong problem-solving skills and the ability to design solutions from first principles.
- Prior experience in product-based or startup environments is a big plus.
Ideal Candidate Profile :
- A tech enthusiast with a passion for clean code and scalable architecture.
- Someone who thrives in collaborative, transparent, and feedback-driven environments.
- A leader who takes ownership beyond individual deliverables to drive overall team and project success.
Interview Process
- Initial Technical Screening (via platform partner)
- Technical Interview with Engineering Team
- Client-facing Final Round
Additional Info :
- Targeting profiles from product/startup backgrounds.
- Strong preference for candidates with under 1 month of notice period.
- Interviews will be fast-tracked for qualified profiles.
📍 Position : Java Architect
📅 Experience : 10 to 15 Years
🧑💼 Open Positions : 3+
📍 Work Location : Bangalore, Pune, Chennai
💼 Work Mode : Hybrid
📅 Notice Period : Immediate joiners preferred; up to 1 month maximum
🔧 Core Responsibilities :
- Lead architecture design and development for scalable enterprise-level applications.
- Own and manage all aspects of technical development and delivery.
- Define and enforce best coding practices, architectural guidelines, and development standards.
- Plan and estimate the end-to-end technical scope of projects.
- Conduct code reviews, ensure CI/CD, and implement TDD/BDD methodologies.
- Mentor and lead individual contributors and small development teams.
- Collaborate with cross-functional teams, including DevOps, Product, and QA.
- Engage in high-level and low-level design (HLD/LLD), solutioning, and cloud-native transformations.
🛠️ Required Technical Skills :
- Strong hands-on expertise in Java, Spring Boot, Microservices architecture
- Experience with Kafka or similar messaging/event streaming platforms
- Proficiency in cloud platforms – AWS and Azure (must-have)
- Exposure to frontend technologies (nice-to-have)
- Solid understanding of HLD, system architecture, and design patterns
- Good grasp of DevOps concepts, Docker, Kubernetes, and Infrastructure as Code (IaC)
- Agile/Lean development, Pair Programming, and Continuous Integration practices
- Polyglot mindset is a plus (Scala, Golang, Python, etc.)
🚀 Ideal Candidate Profile :
- Currently working in a product-based environment
- Already functioning as an Architect or Principal Engineer
- Proven track record as an Individual Contributor (IC)
- Strong engineering fundamentals with a passion for scalable software systems
- No compromise on code quality, craftsmanship, and best practices
🧪 Interview Process :
- Round 1: Technical pairing round
- Rounds 2 & 3: Technical rounds with panel (code pairing + architecture)
- Final Round: HR and offer discussion
🚀 Hiring: Azure DevOps Engineer – Immediate Joiners Only! 🚀
📍 Location: Pune (Hybrid)
💼 Experience: 5+ Years
🕒 Mode of Work: Hybrid
Are you a proactive and skilled Azure DevOps Engineer looking for your next challenge? We are hiring immediate joiners to join our dynamic team! If you are passionate about CI/CD, cloud automation, and SRE best practices, we want to hear from you.
🔹 Key Skills Required:
✅ Cloud Expertise: Proficiency in any cloud (Azure preferred)
✅ CI/CD Pipelines: Hands-on experience in designing and managing pipelines
✅ Containers & IaC: Strong knowledge of Docker, Terraform, Kubernetes
✅ Incident Management: Quick issue resolution and RCA
✅ SRE & Observability: Experience with SLI/SLO/SLA, monitoring, tracing, logging
✅ Programming: Proficiency in Python, Golang
✅ Performance Optimization: Identifying and resolving system bottlenecks
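The SLI/SLO/SLA item above ultimately reduces to error-budget arithmetic: a 99.9% availability SLO over a 30-day window allows roughly 43.2 minutes of downtime, and an incident's burn rate tells you how fast that budget is being consumed. A minimal sketch of that math:

```python
def error_budget_minutes(slo, window_days=30):
    """Total allowed downtime (minutes) for an availability SLO over the window."""
    return (1 - slo) * window_days * 24 * 60

def budget_remaining(slo, downtime_minutes, window_days=30):
    """Minutes of budget left after the downtime consumed so far."""
    return error_budget_minutes(slo, window_days) - downtime_minutes

budget = error_budget_minutes(0.999)  # ~43.2 minutes for 99.9% over 30 days
print(round(budget, 1), round(budget_remaining(0.999, 10.0), 1))
```

Alerting policies are then phrased against this budget (e.g., page when a sustained burn rate would exhaust it early) rather than against raw error counts.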
JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization, with a mission to democratize mixed reality for India and the world. We make products at the intersection of hardware, software, content, and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR, and AI, with notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise space.
Mon-Fri, In office role with excellent perks and benefits!
Key Responsibilities:
1. Design, develop, and maintain backend services and APIs using Node.js, Python, or Java.
2. Build and implement scalable and robust microservices and integrate API gateways.
3. Develop and optimize NoSQL database structures and queries (e.g., MongoDB, DynamoDB).
4. Implement real-time data pipelines using Kafka.
5. Collaborate with front-end developers to ensure seamless integration of backend services.
6. Write clean, reusable, and efficient code following best practices, including design patterns.
7. Troubleshoot, debug, and enhance existing systems for improved performance.
Mandatory Skills:
1. Proficiency in at least one backend technology: Node.js, Python, or Java.
2. Strong experience in:
i. Microservices architecture
ii. API gateways
iii. NoSQL databases (e.g., MongoDB, DynamoDB)
iv. Kafka
v. Data structures (e.g., arrays, linked lists, trees)
3. Frameworks:
i. If Java: Spring Framework for backend development.
ii. If Python: FastAPI or Django frameworks for AI applications.
iii. If Node.js: Express.js for backend development.
Good to Have Skills:
1. Experience with Kubernetes for container orchestration.
2. Familiarity with in-memory databases like Redis or Memcached.
3. Frontend skills: Basic knowledge of HTML, CSS, JavaScript, or frameworks like React.js.
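The "in-memory databases like Redis or Memcached" point above boils down to one core idea: a key-value store with per-key expiry. A toy in-process version — purely an illustration of the `SET key value EX seconds` / `GET` pattern, not a Redis client — can be sketched as:

```python
import time

class TTLCache:
    """Tiny in-process cache with per-key expiry, mimicking the
    Redis SET/EX + GET pattern (illustration only, not a real client)."""

    def __init__(self, clock=time.monotonic):
        self._store = {}     # key -> (value, expires_at)
        self._clock = clock  # injectable clock makes expiry testable

    def set(self, key, value, ttl_seconds: float) -> None:
        self._store[key] = (value, self._clock() + ttl_seconds)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if self._clock() >= expires_at:
            del self._store[key]  # lazy eviction on read, like Redis does
            return default
        return value
```

Injecting `clock` (instead of calling `time.monotonic()` directly) is what lets expiry be exercised in tests without sleeping.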
We are seeking a skilled Full Stack Engineer (FSE) with expertise in Java and Spring Boot development to join our dynamic team. In this role, you will be responsible for the development of critical banking applications across various business domains. You will collaborate closely with cross-functional teams to ensure high-quality solutions are developed, maintained, and continuously improved.
Responsibilities:
Development of business-critical banking applications.
Develop new features for banking applications using FSE technologies.
Ensure code quality through proper testing, reviews, and adherence to coding standards.
Collaborate with design, backend, and other teams to deliver seamless user experiences.
Troubleshoot, debug, and optimize performance issues.
Participate in agile development processes and contribute to continuous improvement initiatives.
Requirements:
Bachelor's/Master's degree in Computer Science, Software Engineering, or a related field.
4-6 years of relevant experience in application development.
Solid experience in:
Java, Spring Boot.
APIs / REST.
Kubernetes / OpenShift.
Azure DevOps.
JMS, Message Queues.
Nice to have knowledge in:
Quarkus.
Apache Camel.
Soft skills / Personality:
Excellent English communication skills / proactive communication.
Self-dependent working style.
Problem-solving: strong analytical skills to identify and solve complex issues.
Adaptability: flexibility in adjusting to different working environments and practices.
Critical thinking: evaluating information quickly and making informed decisions.
Team collaboration: ability to work collaboratively with a distributed team.
Cultural awareness.
Initiative: proactively seeking solutions and improvements.
Good to have: knowledge of co-banking systems.
Good to have: banking domain knowledge.
Customer-facing experience is an advantage.
Skills & Requirements
Java, Spring Boot, APIs/REST, Kubernetes, OpenShift, Azure DevOps, JMS, Message Queues, Quarkus, Apache Camel, Excellent English communication, Proactive communication, Self-dependent working, Problem-solving, Analytical skills, Adaptability, Critical thinking, Team collaboration, Cultural awareness, Initiative, Co-banking systems knowledge, Banking domain knowledge, Customer-facing experience.
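The "JMS, Message Queues" requirement above is fundamentally about decoupling a producer from a consumer through a queue. A minimal sketch of the point-to-point pattern — pure Python standard library, not JMS; real brokers add persistence, acknowledgements, and routing on top of this idea — looks like:

```python
import queue
import threading

def run_pipeline(messages):
    """Toy point-to-point queue: one producer thread, one consumer thread."""
    q = queue.Queue()
    results = []

    def producer():
        for m in messages:
            q.put(m)
        q.put(None)  # sentinel: signals "no more messages"

    def consumer():
        while True:
            m = q.get()
            if m is None:
                break
            results.append(m.upper())  # stand-in for real message processing

    t_prod = threading.Thread(target=producer)
    t_cons = threading.Thread(target=consumer)
    t_prod.start(); t_cons.start()
    t_prod.join(); t_cons.join()
    return results
```

The sentinel-based shutdown shown here is the simplest termination protocol; brokered systems typically use consumer acknowledgements instead.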
Level of skills and experience:
5 years of hands-on experience using Python, Spark, and SQL.
Experienced in AWS Cloud usage and management.
Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch.
Experience with orchestrators such as Airflow and Kubeflow.
Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
Fundamental understanding of Parquet, Delta Lake and other data file formats.
Proficiency on an IaC tool such as Terraform, CDK or CloudFormation.
Strong written and verbal English communication skills, with proficiency in communicating with non-technical stakeholders.
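Orchestrators such as Airflow and Kubeflow (mentioned above) both reduce a pipeline to a directed acyclic graph of tasks executed in dependency order. That core idea can be shown with Python's standard-library `graphlib` — the task names below are hypothetical, not from any real DAG:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: clean depends on ingest; train depends on clean;
# report depends on both clean and train. Mapping: task -> set of dependencies.
dag = {
    "clean":  {"ingest"},
    "train":  {"clean"},
    "report": {"clean", "train"},
}

# static_order() yields tasks so every dependency runs before its dependents,
# which is exactly the scheduling guarantee Airflow gives a DAG.
order = list(TopologicalSorter(dag).static_order())
```

Airflow's scheduler does the same topological resolution, then layers retries, backfills, and distributed execution on top.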
Company Overview:
Davis Index is a leading market intelligence platform and publication that specializes in providing accurate and up-to-date price benchmarks for ferrous and non-ferrous scrap, as well as primary metals. Our dedicated team of reporters, analysts, and data specialists publishes and processes over 1,400 proprietary price indexes, metals futures prices, and other reference data. In addition, we offer market intelligence, news, and analysis through an industry-leading technology platform. With a global presence across the Americas, Asia, Europe, and Africa, our team of over 50 professionals works tirelessly to deliver essential market insights to our valued clients.
Job Overview:
We are seeking a skilled Cloud Engineer to join our team. The ideal candidate will have a strong foundation in cloud technologies and a knack for automating infrastructure processes. You will be responsible for deploying and managing cloud-based solutions while ensuring optimal performance and reliability.
Key Responsibilities:
- Design, deploy, and manage cloud infrastructure solutions.
- Automate infrastructure setup and management using Terraform or Ansible.
- Manage and maintain Kubernetes clusters for containerized applications.
- Work with Linux systems for server management and troubleshooting.
- Configure load balancers to route traffic efficiently.
- Set up and manage database instances along with failover replicas.
Required Skills and Qualifications:
- Minimum of a Cloud Practitioner certification.
- Proficiency with Linux systems.
- Hands-on experience with Kubernetes.
- Expertise in writing automation scripts using Terraform or Ansible.
- Strong understanding of cloud computing concepts.
Application Process: Candidates are encouraged to apply with their resumes and examples of relevant automation scripts they have written in Terraform or Ansible.
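The "configure load balancers to route traffic efficiently" responsibility above rests on a simple core policy: round-robin backend selection. A toy sketch of just that policy — real balancers add health checks, weights, and connection draining; the backend addresses are made up — is:

```python
import itertools

class RoundRobinBalancer:
    """Toy round-robin backend selection: each pick() returns the next
    backend in the list, wrapping around to the first after the last."""

    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self) -> str:
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
picks = [lb.pick() for _ in range(4)]  # fourth pick wraps back to 10.0.0.1
```

Weighted round-robin, least-connections, and IP-hash policies are the usual next steps once health checking is in place.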
- TiDB (good to have)
- Kubernetes (must have)
- MySQL (must have)
- MariaDB (must have)
- Looking for a candidate with more exposure to reliability engineering than to maintenance.