50+ Docker Jobs in Pune | Docker Job openings in Pune
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are looking for a skilled and proactive DevOps Engineer to join our growing engineering team. The ideal candidate will have hands-on experience in building, automating, and managing scalable infrastructure and CI/CD pipelines. You will work closely with development, QA, and product teams to ensure reliable deployments, performance, and system security.
Roles and Responsibilities:
● Design, implement, and manage CI/CD pipelines for multiple environments
● Automate infrastructure provisioning using Infrastructure as Code tools
● Manage and optimize cloud infrastructure on AWS, Azure, or GCP
● Monitor system performance, availability, and security
● Implement logging, monitoring, and alerting solutions
● Collaborate with development teams to streamline release processes
● Troubleshoot production issues and ensure high availability
● Implement containerization and orchestration solutions such as Docker and Kubernetes
● Enforce DevOps best practices across the engineering lifecycle
● Ensure security compliance and data protection standards are maintained
Requirements:
● 4 to 7 years of experience in DevOps or Site Reliability Engineering
● Strong experience with cloud platforms such as AWS, Azure, or GCP; relevant certifications will be a great advantage
● Hands-on experience with CI/CD tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps
● Experience working in microservices architecture
● Exposure to DevSecOps practices
● Experience in cost optimization and performance tuning in cloud environments
● Experience with Infrastructure as Code tools such as Terraform, CloudFormation, or ARM
● Strong knowledge of containerization using Docker
● Experience with Kubernetes in production environments
● Good understanding of Linux systems and shell scripting
● Experience with monitoring tools such as Prometheus, Grafana, ELK, or Datadog (a minimal exporter sketch follows this list)
● Strong troubleshooting and debugging skills
● Understanding of networking concepts and security best practices
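For illustration, here is a minimal sketch of the kind of monitoring glue this role involves: a tiny Python exporter that publishes a custom metric for Prometheus to scrape, using the open-source prometheus_client library. The metric name, port, and simulated reading are illustrative assumptions, not details from this posting.

import random
import time

from prometheus_client import Gauge, start_http_server

# Hypothetical gauge; a real exporter would read an actual system value.
QUEUE_DEPTH = Gauge("app_queue_depth", "Current depth of the work queue")

if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
    while True:
        QUEUE_DEPTH.set(random.randint(0, 100))  # stand-in for a real reading
        time.sleep(5)

A Prometheus server would then scrape this endpoint, and Grafana would chart the resulting series.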
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
Required Skills:
• Strong hands-on experience with AWS services.
• Expertise in Terraform and IaC principles.
• Experience building CI/CD pipelines and working with Git.
• Proficiency with Docker and Kubernetes.
• Solid understanding of Linux administration, networking fundamentals, and IAM.
• Familiarity with monitoring and observability tools (CloudWatch, Prometheus, Grafana, ELK, Datadog).
• Knowledge of security and compliance tools (Trivy, SonarQube, Checkov, Snyk).
• Scripting experience in Bash, Python, or PowerShell.
• Exposure to GCP, Azure, or multi-cloud architectures is a plus.
About the role:
We are looking for a Staff Site Reliability Engineer who can operate at a staff level across multiple teams and clients. If you care about designing reliable platforms, influencing system architecture, and raising reliability standards across teams, you’ll enjoy working at One2N.
At One2N, you will work with our startups and enterprise clients, solving One-to-N scale problems where the proof of concept is already established and the focus is on scalability, maintainability, and long-term reliability. In this role, you will drive reliability, observability, and infrastructure architecture across systems, influencing design decisions, defining best practices, and guiding teams to build resilient, production-grade systems.
Key responsibilities:
- Own and drive reliability and infrastructure strategy across multiple products or client engagements
- Design and evolve platform engineering and self-serve infrastructure patterns used by product engineering teams
- Lead architecture discussions around observability, scalability, availability, and cost efficiency.
- Define and standardize monitoring, alerting, SLOs/SLIs, and incident management practices (the error-budget arithmetic behind SLOs is sketched after this list).
- Build and review production-grade CI/CD and IaC systems used across teams
- Act as an escalation point for complex production issues and incident retrospectives.
- Partner closely with engineering leads, product teams, and clients to influence system design decisions early.
- Mentor junior engineers through design reviews, technical guidance, and best practices.
- Improve Developer Experience (DX) by reducing cognitive load, toil, and operational friction.
- Help teams mature their on-call processes, reliability culture, and operational ownership.
- Stay ahead of trends in cloud-native infrastructure, observability, and platform engineering, and bring relevant ideas into practice
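As a concrete illustration of the SLO work mentioned above, the arithmetic behind an error budget is simple; the sketch below assumes a hypothetical 99.9% availability target over a 30-day window, with the failed-minutes figure standing in for a real SLI from the monitoring stack.

# Hypothetical SLO: 99.9% availability over a 30-day window.
SLO_TARGET = 0.999
WINDOW_MINUTES = 30 * 24 * 60

def error_budget_remaining(failed_minutes: float) -> float:
    """Fraction of the window's error budget still unspent."""
    budget_minutes = (1 - SLO_TARGET) * WINDOW_MINUTES  # 43.2 minutes
    return 1 - failed_minutes / budget_minutes

print(f"{error_budget_remaining(10):.1%} of the budget left")  # 76.9% of the budget left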
About you:
- 9+ years of experience in SRE, DevOps, or software engineering roles
- Strong experience designing and operating Kubernetes-based systems on AWS at scale
- Deep hands-on expertise in observability and telemetry, including tools like OpenTelemetry, Datadog, Grafana, Prometheus, ELK, Honeycomb, or similar.
- Proven experience with infrastructure as code (Terraform, Pulumi) and cloud architecture design.
- Strong understanding of distributed systems, microservices, and containerized workloads.
- Ability to write and review production-quality code (Golang, Python, Java, or similar)
- Solid Linux fundamentals and experience debugging complex system-level issues
- Experience driving cross-team technical initiatives.
- Excellent analytical and problem-solving skills, keen attention to detail, and a passion for continuous improvement.
- Strong written, communication, and collaboration skills, with the ability to work effectively in a fast-paced, agile environment.
Nice to have:
- Experience working in consulting or multi-client environments.
- Exposure to cost optimization or large-scale AWS account management
- Experience building internal platforms or shared infrastructure used by multiple teams.
- Prior experience influencing or defining engineering standards across organizations.
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub's 100 MB size limit (a scanning sketch follows this list).
- Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
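As a rough illustration of the large-file step above, a helper like the following could scan a converted working copy for files over GitHub's 100 MB push limit before deciding what to track with Git LFS. The repository path is an illustrative assumption.

import os

LIMIT = 100 * 1024 * 1024  # GitHub rejects individual files larger than 100 MB

def find_large_files(root: str):
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d != ".git"]  # skip Git metadata
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path) and os.path.getsize(path) > LIMIT:
                yield path

for path in find_large_files("./migrated-repo"):  # hypothetical checkout
    print(path)  # candidates for: git lfs track "<pattern>"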
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
Role Overview:
Challenge convention and work on cutting-edge technology that is transforming the way our customers manage their physical, virtual, and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise-scale solution for providing visibility into performance, health, and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with knowledge in Systems Management and/or Systems Monitoring Software and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for design and development of software solutions for the Virtana Platform
- Partner and work closely with team leads, architects and engineering managers to design and implement new integrations and solutions for the Virtana Platform.
- Communicate effectively with people having differing levels of technical knowledge.
- Work closely with Quality Assurance and DevOps teams assisting with functional and system testing design and deployment
- Provide customers with complex application support, problem diagnosis and problem resolution
Required Qualifications:
- Minimum of 4 years of experience in a web-application-centric, client-server application development environment focused on Systems Management, Systems Monitoring, and Performance Management software.
- Able to understand integrated infrastructure platforms, with experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- Minimum of 4 years of development experience with a high-level language such as Python, Java, or Go.
- Bachelor's (B.E, B.Tech) or Master's degree (M.E, M.Tech, MCA) in Computer Science, Computer Engineering, or equivalent
- 2 years of development experience in a public cloud environment (Google Cloud and/or AWS) using Kubernetes and related services
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a strong technical engineer who can design and code with strong communication skills
- Firsthand development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
- Ability to use a variety of debugging tools, simulators and test harnesses is a plus
About Virtana:
Virtana delivers the industry's broadest and deepest Observability Platform, which allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana's software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.
Job Location: Kharadi, Pune
Job Type: Full-Time
About Us:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have 10 years of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the "givers gain" philosophy and strive to provide value in order to seek value. We are committed to delivering top-notch solutions to our clients and are looking for a talented Web UI Developer to join our dynamic team.
Qualifications:
- Strong Experience in JavaScript and React
- Experience in building multi-tier SaaS applications with exposure to micro-services, caching, pub-sub, and messaging technologies
- Experience with design patterns
- Familiarity with UI component libraries (such as Material-UI or Bootstrap) and RESTful APIs
- Experience with web frontend technologies such as HTML5, CSS3, LESS, Bootstrap
- A strong foundation in computer science, with competencies in data structures, algorithms, and software design
- Bachelor's / Master's Degree in CS
- Experience with Git is mandatory
- Exposure to AWS, Docker, and CI/CD systems like Jenkins is a plus
Role: DevOps Engineer
Experience: 7+ Years
Location: Pune / Trivandrum
Work Mode: Hybrid
Key Responsibilities:
- Drive CI/CD pipelines for microservices and cloud architectures
- Design and operate cloud-native platforms (AWS/Azure)
- Manage Kubernetes/OpenShift clusters and containerized applications
- Develop automated pipelines and infrastructure scripts
- Collaborate with cross-functional teams on DevOps best practices
- Mentor development teams on continuous delivery and reliability
- Handle incident management, troubleshooting, and root cause analysis
Mandatory Skills:
- 7+ years in DevOps/SRE roles
- Strong experience with AWS or Azure
- Hands-on with Docker, Kubernetes, and/or OpenShift
- Proficiency in Jenkins, Git, Maven, JIRA
- Strong scripting skills (Shell, Python, Perl, Ruby, JavaScript)
- Solid networking knowledge and troubleshooting skills
- Excellent communication and collaboration abilities
Preferred Skills:
- Experience with Helm, monitoring tools (Splunk, Grafana, New Relic, Datadog)
- Knowledge of Microservices and SOA architectures
- Familiarity with database technologies
Job Title : Java Backend Developer
Experience : 3 – 8 Years
Location : Pune (Onsite; Pune candidates only)
Notice Period : Immediate to 15 Days (or candidates serving notice whose last working day is near)
About the Role :
We are seeking an experienced Java Backend Developer with strong hands-on skills in backend microservices development, API design, cloud platforms, observability, and CI/CD.
The ideal candidate will contribute to building scalable, secure, and reliable applications while working closely with cross-functional teams.
Mandatory Skills : Java 8 / Java 17, Spring Boot 3.x, REST APIs, Hibernate / JPA, MySQL, MongoDB, Prometheus / Grafana / Spring Actuators, AWS, Docker, Jenkins / GitHub Actions, GitHub, Windows 7 / Linux.
Key Responsibilities :
- Design, develop, and maintain backend microservices and REST APIs
- Implement data persistence using relational and NoSQL databases
- Ensure performance, scalability, and security of backend systems
- Integrate observability and monitoring tools for production environments
- Work within CI/CD pipelines and containerized deployments
- Collaborate with DevOps, QA, and product teams for feature delivery
- Troubleshoot, optimize, and improve existing modules and services
Mandatory Skills :
- Languages & Frameworks : Java 8, Java 17, Spring Boot 3.x, REST APIs, Hibernate, JPA
- Databases : MySQL, MongoDB
- Observability : Prometheus, Grafana, Spring Actuators
- Cloud Technologies : AWS
- Containerization Tools : Docker
- CI/CD Tools : Jenkins, GitHub Actions
- Version Control : GitHub
- Operating Systems : Windows 7, Linux
Nice to Have :
- Strong analytical and debugging abilities
- Experience working in Agile/Scrum environments
- Good communication and collaborative skills
What You’ll Do:
As a Sr. Data Scientist, you will work closely across DeepIntent Data Science teams located in New York, India, and Bosnia. The role will focus on building predictive models and implementing data-driven solutions to maximize ad effectiveness. You will also lead efforts in generating analyses and insights related to the measurement of campaign outcomes, Rx, and the patient journey, and support the evolution of the DeepIntent product suite. Activities in this position include developing and deploying models in production, reading campaign results, analyzing medical claims, clinical, demographic, and clickstream data, performing analysis and creating actionable insights, and summarizing and presenting results and recommended actions to internal stakeholders and external clients, as needed.
- Explore ways to create better predictive models.
- Analyze medical claims, clinical, demographic and clickstream data to produce and present actionable insights.
- Explore ways of using inference, statistical, and machine learning techniques to improve the performance of existing algorithms and decision heuristics.
- Design and deploy new iterations of production-level code.
- Contribute posts to our upcoming technical blog.
Who You Are:
- Bachelor’s degree in a STEM field, such as Statistics, Mathematics, Engineering, Biostatistics, Econometrics, Economics, Finance, or Data Science.
- 5+ years of working experience as a Data Scientist or Researcher in digital marketing, consumer advertisement, telecom, or other areas requiring customer-level predictive analytics.
- Advanced proficiency in performing statistical analysis in Python, including relevant libraries, is required (a minimal modeling sketch follows this list).
- Experience working with data processing, transformation and building model pipelines using tools such as Spark, Airflow, and Docker.
- You have an understanding of the ad-tech ecosystem, digital marketing and advertising data and campaigns, or familiarity with the US healthcare patient and provider systems (e.g. medical claims, medications).
- You have varied and hands-on predictive machine learning experience (deep learning, boosting algorithms, inference…).
- You are interested in translating complex quantitative results into meaningful findings and interpretable deliverables, and communicating with less technical audiences orally and in writing.
- You can write production-level code and work with Git repositories.
- Active Kaggle participant.
- Working experience with SQL.
- Familiar with medical and healthcare data (medical claims, Rx, preferred).
- Conversant with cloud technologies such as AWS or Google Cloud.
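For flavor, a minimal predictive-modeling sketch in Python with scikit-learn is shown below; it trains and scores a classifier on synthetic data, whereas the actual work would use claims, clinical, and clickstream features.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real campaign/claims features.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]
print(f"Holdout AUC: {roc_auc_score(y_test, scores):.3f}")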

Global digital transformation solutions provider.
Job Description
We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.
Key Responsibilities:
- Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
- Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
- Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
- Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
- Ensure compliance with security best practices and organizational policies across GCP environments.
- Document processes, configurations, and architectural decisions to maintain operational transparency.
- Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.
Mandatory Skills:
- Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
- Jenkins – Expertise in Declarative Pipeline creation and optimization.
- CI/CD – Strong understanding of automated build, test, and deployment workflows.
- Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
- Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks (a small Python sketch follows this list).
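To make the scripting expectation concrete, here is a hedged Python sketch of a routine GCP automation task: uploading a build artifact to Cloud Storage with the official google-cloud-storage client. The bucket and object names are assumptions for illustration; credentials are taken from the environment (e.g. GOOGLE_APPLICATION_CREDENTIALS).

from google.cloud import storage

def upload_artifact(bucket_name: str, source: str, dest: str) -> None:
    # The client picks up application-default credentials from the environment.
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(dest)
    blob.upload_from_filename(source)
    print(f"Uploaded {source} to gs://{bucket_name}/{dest}")

# Hypothetical usage in a Jenkins post-build step:
# upload_artifact("my-ci-artifacts", "app.tar.gz", "builds/app-1.0.tar.gz")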
Preferred Skills:
- Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
- Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
- Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
- GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.
Skills
GCP, Jenkins, CI/CD, AWS
******
Notice period - 0 to 15 days only
Location – Pune, Trivandrum, Kochi, Chennai
Job Description – Full Stack Developer (React + Node.js)
Experience: 5–8 Years
Location: Pune
Work Mode: WFO
Employment Type: Full-time
About the Role
We are looking for an experienced Full Stack Developer with strong hands-on expertise in React and Node.js to join our engineering team. The ideal candidate should have solid experience building scalable applications, working with production systems, and collaborating in high-performance tech environments.
Key Responsibilities
- Design, develop, and maintain scalable full-stack applications using React and Node.js.
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Write clean, maintainable, and efficient code following OOP/FP and SOLID principles.
- Work with relational databases such as PostgreSQL or MySQL.
- Deploy and manage applications in cloud environments (preferably GCP or AWS).
- Optimize application performance, troubleshoot issues, and ensure high availability in production systems.
- Utilize containerization tools like Docker for efficient development and deployment workflows.
- Integrate third-party services and APIs, including AI APIs and tools.
- Contribute to improving development processes, documentation, and best practices.
Required Skills
- Strong experience with React.js (frontend).
- Solid hands-on experience with Node.js (backend).
- Good understanding of relational databases: PostgreSQL / MySQL.
- Experience working in production environments and debugging live systems.
- Strong understanding of OOP or Functional Programming, and clean coding standards.
- Knowledge of Docker or other containerization tools.
- Experience with cloud platforms (GCP or AWS).
- Excellent written and verbal communication skills.
Good to Have
- Experience with Golang or Elixir.
- Familiarity with Kubernetes, RabbitMQ, Redis, etc.
- Contributions to open-source projects.
- Previous experience working with AI APIs or machine learning tools.
Senior Software Engineer
Challenge convention and work on cutting-edge technology that is transforming the way our customers manage their physical, virtual, and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise-scale solution for providing visibility into performance, health, and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform
- Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform.
- Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.
- Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- Minimum of 7 years of progressive experience with back-end development in a client-server application development environment focused on Systems Management, Systems Monitoring, and Performance Management Software.
- Deep experience in public cloud environments (Google Cloud and/or AWS) using Kubernetes and other distributed managed services like Kafka
- Experience with CI/CD and cloud-based software development and delivery
- Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one of them is required.
- Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills
- Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
About Virtana: Virtana delivers the industry's broadest and deepest Observability Platform, which allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.
Core Responsibilities:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
- Model Development: Build algorithms and architectures ranging from traditional statistical methods to deep learning, including LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
Skills:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering; proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting. Other tech touch points are ScyllaDB (BigTable-like), OpenSearch, and the Neo4j graph database.
- Model Deployment and Monitoring: MLOps experience in deploying ML models to production environments.
- Knowledge of model monitoring and performance evaluation.
Required experience:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline with the ability to analyze gaps and recommend/implement improvements (a small inspection sketch follows this list)
- AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
- AWS data: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
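As one concrete (and deliberately small) example of working against SageMaker, the boto3 sketch below lists recent training jobs, the sort of call a pipeline-audit script might start from. The region is an illustrative assumption; credentials come from the environment.

import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")  # region is assumed
resp = sm.list_training_jobs(MaxResults=10, SortBy="CreationTime", SortOrder="Descending")

for job in resp["TrainingJobSummaries"]:
    print(job["TrainingJobName"], job["TrainingJobStatus"])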
Skills: AWS, AWS Cloud, Amazon Redshift, EKS
Must-Haves
Amazon SageMaker, AWS Cloud Infrastructure (S3, EC2, Lambda), Docker and Kubernetes (EKS, ECS), SQL, AWS data (Redshift, Glue)
Skills : Machine Learning, MLOps, AWS Cloud, Redshift OR Glue, Kubernetes, SageMaker
******
Notice period - 0 to 15 days only
Location : Pune & Hyderabad only
Job Description
Experience: 5 - 9 years
Location: Bangalore/Pune/Hyderabad
Work Mode: Hybrid (3 days WFO)
Senior Cloud Infrastructure Engineer for Data Platform
The ideal candidate will play a critical role in designing, implementing, and maintaining cloud infrastructure and CI/CD pipelines to support scalable, secure, and efficient data and analytics solutions. This role requires a strong understanding of cloud-native technologies, DevOps best practices, and hands-on experience with Azure and Databricks.
Key Responsibilities:
Cloud Infrastructure Design & Management
- Architect, deploy, and manage scalable and secure cloud infrastructure on Microsoft Azure.
- Implement best practices for Azure Resource Management, including resource groups, virtual networks, and storage accounts.
- Optimize cloud costs and ensure high availability and disaster recovery for critical systems.
Databricks Platform Management
- Set up, configure, and maintain Databricks workspaces for data engineering, machine learning, and analytics workloads.
- Automate cluster management, job scheduling, and monitoring within Databricks (a small automation sketch follows this sub-section).
- Collaborate with data teams to optimize Databricks performance and ensure seamless integration with Azure services.
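A hedged sketch of that automation, assuming a Databricks workspace URL and personal access token in the environment, might start by listing configured jobs through the Jobs API 2.1:

import os

import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-<id>.azuredatabricks.net (assumed)
token = os.environ["DATABRICKS_TOKEN"]  # personal access token (assumed)

resp = requests.get(
    f"{host}/api/2.1/jobs/list",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()
for job in resp.json().get("jobs", []):
    print(job["job_id"], job["settings"]["name"])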
CI/CD Pipeline Development
- Design and implement CI/CD pipelines for deploying infrastructure, applications, and data workflows using tools like Azure DevOps, GitHub Actions, or similar.
- Automate testing, deployment, and monitoring processes to ensure rapid and reliable delivery of updates.
Monitoring & Incident Management
- Implement monitoring and alerting solutions using tools like Dynatrace, Azure Monitor, Log Analytics, and Databricks metrics.
- Troubleshoot and resolve infrastructure and application issues, ensuring minimal downtime.
Security & Compliance
- Enforce security best practices, including identity and access management (IAM), encryption, and network security.
- Ensure compliance with organizational and regulatory standards for data protection and cloud operations.
Collaboration & Documentation
- Work closely with cross-functional teams, including data engineers, software developers, and business stakeholders, to align infrastructure with business needs.
- Maintain comprehensive documentation for infrastructure, processes, and configurations.
Required Qualifications
Education: Bachelor’s degree in Computer Science, Engineering, or a related field.
Must Have Experience:
- 6+ years of experience in DevOps or Cloud Engineering roles.
- Proven expertise in Microsoft Azure services, including Azure Data Lake, Azure Databricks, Azure Data Factory (ADF), Azure Functions, Azure Kubernetes Service (AKS), and Azure Active Directory.
- Hands-on experience with Databricks for data engineering and analytics.
Technical Skills:
- Proficiency in Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Bicep.
- Strong scripting skills in Python or Bash.
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Familiarity with version control systems (e.g., Git) and CI/CD tools (e.g., Azure DevOps, GitHub Actions).
Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.
Job Description: Python Engineer
Role Summary
We are looking for a talented Python Engineer to design, develop, and maintain high-quality backend applications and automation solutions. The ideal candidate should have strong programming skills, familiarity with modern development practices, and the ability to work in a fast-paced, collaborative environment.
Key Responsibilities:
Python Development & Automation
- Design, develop, and maintain Python scripts, tools, and automation frameworks.
- Build automation for operational tasks such as deployment, monitoring, system checks, and maintenance.
- Write clean, modular, and well-documented Python code following best practices.
- Develop APIs, CLI tools, or microservices when required.
Linux Systems Engineering
- Manage, configure, and troubleshoot Linux environments (RHEL, CentOS, Ubuntu).
- Perform system performance tuning, log analysis, and root-cause diagnostics (a log-analysis sketch follows this sub-section).
- Work with system services, processes, networking, file systems, and security controls.
- Implement shell scripting (bash) alongside Python for system-level automation.
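As a small illustration of the Python-plus-Linux work above, the sketch below counts ERROR lines per logger in a syslog-style file; the path and line format are assumptions and would be adapted to the real log schema.

import collections
import re

PATTERN = re.compile(r"ERROR\s+(?P<logger>[\w.]+)")  # assumed log format

def error_counts(path: str) -> collections.Counter:
    counts: collections.Counter = collections.Counter()
    with open(path, errors="replace") as fh:
        for line in fh:
            match = PATTERN.search(line)
            if match:
                counts[match.group("logger")] += 1
    return counts

for logger, n in error_counts("/var/log/syslog").most_common(5):  # hypothetical path
    print(f"{logger}: {n}")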
CI/CD & Infrastructure Support
- Support integration of Python automation into CI/CD pipelines (Jenkins).
- Participate in build and release processes for infrastructure components.
- Ensure automation aligns with established infrastructure standards and governance.
- Use Bash scripting together with Python to improve automation efficiency.
Cloud & DevOps Collaboration (if applicable)
- Collaborate with Cloud/DevOps engineers on automation for AWS or other cloud platforms.
- Integrate Python tools with configuration management tools such as Chef or Ansible, or with Terraform modules.
- Contribute to containerization efforts (Docker, Kubernetes) leveraging Python automation.
MUST-HAVES:
- Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
- Notice period - 0 to 15 days only
- Hybrid work mode - 3 days in office, 2 days at home
SKILLS: AWS, AWS CLOUD, AMAZON REDSHIFT, EKS
ADDITIONAL GUIDELINES:
- Interview process: 2 technical rounds + 1 client round
- 3 days in office, Hybrid model.
CORE RESPONSIBILITIES:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
- Model Development: Build algorithms and architectures ranging from traditional statistical methods to deep learning, including LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
SKILLS:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering; proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting. Other tech touch points are ScyllaDB (BigTable-like), OpenSearch, and the Neo4j graph database.
- Model Deployment and Monitoring: MLOps experience in deploying ML models to production environments.
- Knowledge of model monitoring and performance evaluation.
REQUIRED EXPERIENCE:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline with the ability to analyze gaps and recommend/implement improvements
- AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
- AWS data: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
Required Skills & Qualifications:
✔ Experience: 4+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
Role: DevOps Engineer
Experience: 2–3+ years
Location: Pune
Work Mode: Hybrid (3 days Work from office)
Mandatory Skills:
- Strong hands-on experience with CI/CD tools like Jenkins, GitHub Actions, or AWS CodePipeline
- Proficiency in scripting languages (Bash, Python, PowerShell)
- Hands-on experience with containerization (Docker) and container management (a short Docker SDK sketch follows this list)
- Proven experience managing infrastructure (On-premise or AWS/VMware)
- Experience with version control systems (Git/Bitbucket/GitHub)
- Familiarity with monitoring and logging tools for system performance tracking
- Knowledge of security best practices and compliance standards
- Bachelor's degree in Computer Science, Engineering, or related field
- Willingness to support production issues during odd hours when required
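For a concrete taste of container management from Python, the sketch below uses the Docker SDK for Python (docker-py) to run a throwaway container and list running ones; the image tag is an assumption, and the Docker daemon connection is taken from the environment.

import docker

client = docker.from_env()

# Run a short-lived container and capture its output.
output = client.containers.run("alpine:3.20", ["echo", "hello from docker"], remove=True)
print(output.decode().strip())

# List currently running containers.
for container in client.containers.list():
    print(container.short_id, container.image.tags, container.status)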
Preferred Qualifications:
- Certifications in AWS, Docker, or VMware
- Experience with configuration management tools like Ansible
- Exposure to Agile and DevOps methodologies
- Hands-on experience with Virtual Machines and Container orchestration
Job Details
- Job Title: ML Engineer II - AWS, AWS Cloud
- Industry: Technology
- Domain - Information technology (IT)
- Experience Required: 6-12 years
- Employment Type: Full Time
- Job Location: Pune
- CTC Range: Best in Industry
Job Description:
Core Responsibilities:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
- Model Development: Build algorithms and architectures ranging from traditional statistical methods to deep learning, including LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
Skills:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering; proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting. Other tech touch points are ScyllaDB (BigTable-like), OpenSearch, and the Neo4j graph database.
- Model Deployment and Monitoring: MLOps experience in deploying ML models to production environments.
- Knowledge of model monitoring and performance evaluation.
Required experience:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline with the ability to analyze gaps and recommend/implement improvements
- AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
- AWS data: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
Skills: AWS, AWS Cloud, Amazon Redshift, EKS
Must-Haves
AWS, AWS Cloud, Amazon Redshift, EKS
NP: Immediate – 30 Days
Position: Tech Lead
Experience: 8-10 years
Job Location: Pune
We are seeking a highly skilled Tech Lead with strong expertise in Java, microservices architecture, and cloud-native application development. The ideal candidate will bring hands-on leadership experience in designing scalable solutions, guiding development teams, and collaborating with DevOps engineers on OpenShift (OCP) platforms. This role requires a blend of technical leadership, solution design, and delivery ownership.
Key Responsibilities
Lead the design and development of Java / Spring Boot based microservices in a cloud-native environment.
Provide technical leadership to a team of developers, ensuring adherence to coding, security, and architectural best practices.
Collaborate with architects and DevOps engineers to deploy and manage microservices on Red Hat OpenShift (OCP).
Oversee end-to-end delivery including requirement analysis, design, development, code review, testing, and deployment.
Define and implement API specifications, integration patterns, and microservices orchestration.
Work closely with DevOps teams to integrate CI/CD pipelines, containerized deployments, Helm, and GitOps workflows.
Ensure application performance, scalability, and reliability with proactive observability practices (Grafana, Prometheus, etc.).
Required Skills & Qualifications
8-10 years of proven experience in Java application development, with at least 4 years in microservices architecture.
Strong expertise in Spring Boot, REST APIs, JPA/Hibernate, and messaging frameworks (Kafka, RabbitMQ, etc.).
Hands-on experience with containerization (Docker) and orchestration (OpenShift/Kubernetes).
Familiarity with OCP DevOps practices including CI/CD (ArgoCD, Tekton, Jenkins), Helm, and YAML deployments.
Good understanding of observability stacks (Grafana, Prometheus, Loki, Alertmanager) and logging practices.
Solid knowledge of cloud-native design principles, scalability, and fault tolerance.
Exposure to security best practices (OAuth, RBAC, secrets management via Vault or similar).
Job Position: Lead II - Software Engineering
Domain: Information technology (IT)
Location: India - Thiruvananthapuram
Salary: Best in Industry
Job Positions: 1
Experience: 8 - 12 Years
Skills: .NET, SQL Azure, REST API, Vue.js
Notice Period: Immediate – 30 Days
Job Summary:
We are looking for a highly skilled Senior .NET Developer with a minimum of 7 years of experience across the full software development lifecycle, including post-live support. The ideal candidate will have a strong background in .NET backend API development, Agile methodologies, and Cloud infrastructure (preferably Azure). You will play a key role in solution design, development, DevOps pipeline enhancement, and mentoring junior engineers.
Key Responsibilities:
- Design, develop, and maintain scalable and secure .NET backend APIs.
- Collaborate with product owners and stakeholders to understand requirements and translate them into technical solutions.
- Lead and contribute to Agile software delivery processes (Scrum, Kanban).
- Develop and improve CI/CD pipelines and support release cadence targets, using Infrastructure as Code tools (e.g., Terraform).
- Provide post-live support, troubleshooting, and issue resolution as part of full lifecycle responsibilities.
- Implement unit and integration testing to ensure code quality and system stability.
- Work closely with DevOps and cloud engineering teams to manage deployments on Azure (Web Apps, Container Apps, Functions, SQL).
- Contribute to front-end components when necessary, leveraging HTML, CSS, and JavaScript UI frameworks.
- Mentor and coach engineers within a co-located or distributed team environment.
- Maintain best practices in code versioning, testing, and documentation.
Mandatory Skills:
- 7+ years of .NET development experience, including API design and development
- Strong experience with Azure Cloud services, including:
- Web/Container Apps
- Azure Functions
- Azure SQL Server
- Solid understanding of Agile development methodologies (Scrum/Kanban)
- Experience in CI/CD pipeline design and implementation
- Proficient in Infrastructure as Code (IaC) – preferably Terraform
- Strong knowledge of RESTful services and JSON-based APIs
- Experience with unit and integration testing techniques
- Source control using Git
- Strong understanding of HTML, CSS, and cross-browser compatibility
Good-to-Have Skills:
- Experience with Kubernetes and Docker
- Knowledge of JavaScript UI frameworks, ideally Vue.js
- Familiarity with JIRA and Agile project tracking tools
- Exposure to Database as a Service (DBaaS) and Platform as a Service (PaaS) concepts
- Experience mentoring or coaching junior developers
- Strong problem-solving and communication skills
Role: Data Scientist (Python + R Expertise)
Exp: 8-12 Years
CTC: up to 30 LPA
Required Skills & Qualifications:
- 8–12 years of hands-on experience as a Data Scientist or in a similar analytical role.
- Strong expertise in Python and R for data analysis, modeling, and visualization.
- Proficiency in machine learning frameworks (scikit-learn, TensorFlow, PyTorch, caret, etc.).
- Strong understanding of statistical modeling, hypothesis testing, regression, and classification techniques.
- Experience with SQL and working with large-scale structured and unstructured data.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and deployment practices (Docker, MLflow).
- Excellent analytical, problem-solving, and communication skills.
Preferred Skills:
- Experience with NLP, time series forecasting, or deep learning projects.
- Exposure to data visualization tools (Tableau, Power BI, or R Shiny).
- Experience working in product or data-driven organizations.
- Knowledge of MLOps and model lifecycle management is a plus.
If interested, kindly share your updated resume on 82008 31681.
Key Responsibilities
Test Architecture & Design
- Architect test frameworks and infrastructure to validate microservices and distributed systems in multi-cluster, hybrid-cloud environments.
- Design complex test scenarios simulating production-like workloads, scaling, failure injection, and recovery.
- Ensure reliability, scalability, and maintainability of test systems.
Automation & Scalability
- Drive test automation integrated with CI/CD pipelines (e.g., Jenkins, GitHub Actions).
- Leverage Kubernetes APIs, Helm, and service meshes (Istio/Linkerd) for automation coverage of health, failover, and network resilience.
- Implement Infrastructure-as-Code (IaC) practices for test infrastructure to ensure repeatability and extensibility.
Technical Expertise
- Deep knowledge of Kubernetes internals, cluster lifecycle management, Helm, service meshes, and network policies.
- Strong scripting and automation skills with Python, Pytest, and Bash (a minimal Pytest sketch follows this sub-section).
- Hands-on with observability stacks (Prometheus, Grafana, Jaeger) and performance benchmarking tools (e.g., K6).
- Experience with cloud platforms (AWS, Azure, GCP) and containerized CI/CD.
- Solid Linux proficiency: Bash scripting, debugging, networking, PKI management, Docker/containerd, GitOps/Flux, kubectl/Helm, troubleshooting multi-cluster environments.
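To ground the Pytest expectation, here is a minimal, self-contained health-check test; the service URL and response shape are illustrative assumptions that a real suite would take from fixtures or configuration.

import pytest
import requests

BASE_URL = "http://localhost:8080"  # hypothetical service under test

@pytest.fixture
def session():
    with requests.Session() as s:
        yield s

def test_health_endpoint_reports_ok(session):
    resp = session.get(f"{BASE_URL}/healthz", timeout=5)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"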
Required Skills & Qualifications
- 6+ years in QA, Test Automation, or related engineering roles.
- Proven experience in architecting test frameworks for distributed/cloud-native systems.
- Expertise in Kubernetes, Helm, CI/CD, and cloud platforms (AWS/Azure/GCP).
- Strong Linux fundamentals with scripting and system debugging skills.
- Excellent problem-solving, troubleshooting, and technical leadership abilities.
Job Title : Senior QA Automation Architect (Cloud & Kubernetes)
Experience : 6+ Years
Location : India (Multiple Offices)
Shift Timings : 12 PM to 9 PM (Noon Shift)
Working Days : 5 Days WFO (NO Hybrid)
About the Role :
We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.
You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.
Key Responsibilities :
- Architect and maintain test automation frameworks for microservices.
- Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
- Ensure reliability, scalability, and observability of test systems.
- Work closely with DevOps and Cloud teams to streamline automation infrastructure.
Mandatory Skills :
- Kubernetes, Helm, Docker, Linux
- Cloud Platforms : AWS / Azure / GCP
- CI/CD Tools : Jenkins, GitHub Actions
- Scripting : Python, Pytest, Bash
- Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
- IaC Practices : Terraform / Ansible
Good to Have :
- Experience with Service Mesh (Istio/Linkerd).
- Container Security or DevSecOps exposure.
Job Title : React + Node.js Developer (Full Stack)
Experience : 5+ Years
Location : Mumbai or Pune (Final location to be decided post-interview)
Notice Period : Immediate to 15 Days
Interview Rounds : 1 Internal Round + 1 Client Round
Job Summary :
We are looking for a highly skilled Full Stack Developer (React + Node.js) with strong expertise in both frontend and backend development.
The ideal candidate should demonstrate hands-on experience with databases, excellent project understanding, and the ability to deliver scalable, high-performance applications in production environments.
Mandatory Skills :
React.js, Node.js, PostgreSQL/MySQL, JavaScript (ES6+), Docker, AWS/GCP, full-stack development, production system experience, and strong project understanding with hands-on database expertise.
Key Responsibilities :
- Design, develop, and deploy robust full-stack applications using React (frontend) and Node.js (backend).
- Exhibit a deep understanding of database design, optimization, and integration using PostgreSQL or MySQL.
- Translate project requirements into efficient, maintainable, and scalable technical solutions.
- Build clean, modular, and reusable components following SOLID principles and industry best practices.
- Manage backend services, APIs, and data-driven functionalities for large-scale applications.
- Work closely with product and engineering teams to ensure smooth end-to-end project delivery.
- Use Docker and cloud platforms (AWS/GCP) for containerization, deployment, and scaling of services.
- Participate in design discussions, code reviews, and troubleshooting production issues.
Required Skills :
- 5+ Years of hands-on experience in full-stack development using React and Node.js.
- Strong understanding and hands-on expertise with relational databases (PostgreSQL/MySQL).
- Solid grasp of JavaScript (ES6+), and proficiency in Object-Oriented Programming (OOP) or Functional Programming (FP).
- Proven experience working with production-grade systems and scalable architectures.
- Proficiency with Docker, API development, and cloud services (preferably AWS or GCP).
- Excellent project understanding, problem-solving ability, and strong communication skills (verbal and written).
Good to Have :
- Experience in Golang or Elixir for backend development.
- Knowledge of Kubernetes, Redis, RabbitMQ, or similar distributed tools.
- Exposure to AI APIs and tools.
- Contributions to open-source projects.
About the Role
We’re looking for a passionate Fullstack Product Engineer with a strong JavaScript foundation to work on a high-impact, scalable product. You’ll collaborate closely with product and engineering teams to build intuitive UIs and performant backends using modern technologies.
Responsibilities
- Build and maintain scalable features across the frontend and backend.
- Work with tech stacks like Node.js, React.js, Vue.js, and others.
- Contribute to system design, architecture, and code quality enforcement.
- Follow modern engineering practices including TDD, CI/CD, and live coding evaluations.
- Collaborate in code reviews, performance optimizations, and product iterations.
Required Skills
- 4–6 years of hands-on fullstack development experience.
- Strong command over JavaScript, Node.js, and React.js.
- Solid understanding of REST APIs and/or GraphQL.
- Good grasp of OOP principles, TDD, and writing clean, maintainable code.
- Experience with CI/CD tools like GitHub Actions, GitLab CI, Jenkins, etc.
- Familiarity with HTML, CSS, and frontend performance optimization.
Good to Have
- Exposure to Docker, AWS, Kubernetes, or Terraform.
- Experience in other backend languages or frameworks.
- Experience with microservices and scalable system architectures.
Job Description: Python Developer
Location: Pune
Employment Type: Full-Time
Experience: 0.6-1+ years
Role Overview
We are looking for a skilled Python Developer with 0.6-1+ years of experience to join our team. The ideal candidate should have hands-on experience in Python, REST APIs, Flask, and databases. You will be responsible for designing, developing, and maintaining scalable backend services.
Key Responsibilities
- Develop, test, and maintain high-quality Python applications.
- Design and build RESTful APIs using Flask (a minimal endpoint sketch follows this list).
- Integrate APIs with front-end and third-party services.
- Work with relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Redis).
- Optimize performance and troubleshoot issues in backend applications.
- Collaborate with cross-functional teams to define and implement new features.
- Follow best practices for code quality, security, and performance optimization.
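As a rough illustration of the Flask-based REST API work described in the responsibilities above, here is a minimal sketch; the routes and the in-memory store are hypothetical stand-ins, not details from this posting:

    # Minimal Flask REST API sketch: a health check plus one resource endpoint.
    # The /api/users route and the dict-backed store are illustrative only.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    users = {}  # stand-in for a PostgreSQL/MySQL table

    @app.route("/api/health")
    def health():
        return jsonify(status="ok")

    @app.route("/api/users", methods=["POST"])
    def create_user():
        payload = request.get_json()
        user_id = len(users) + 1
        users[user_id] = payload
        return jsonify(id=user_id), 201

    if __name__ == "__main__":
        app.run(debug=True)

In a real service the dictionary would be replaced by database access, and the app would run behind a production WSGI server rather than the built-in development server.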
Required Skills
- Strong proficiency in Python (6 months to 1+ years).
- Experience with Flask (or FastAPI/Django).
- Hands-on experience with REST API development.
- Proficiency in working with databases (SQL & NoSQL).
- Familiarity with Git, Docker, and CI/CD pipelines is a plus.
Preferred Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience working in Agile/Scrum environments.
- Ability to write clean, scalable, and well-documented code.
Job Description
We are seeking a highly skilled DevOps / Kubernetes Engineer. The ideal candidate will have strong expertise in container orchestration, infrastructure as code, and GitOps workflows, with hands-on experience in Azure cloud environments. You will be responsible for designing, deploying, and managing modern cloud-native infrastructure and applications at scale.
Key Responsibilities:
· Manage and operate Kubernetes clusters (AKS / K3s) for large-scale applications (see the Python sketch after this list).
· Implement infrastructure as code using Terraform or OpenTofu for scalable, reliable, and secure infrastructure provisioning.
· Deploy and manage applications using Helm and ArgoCD with GitOps best practices.
· Work with Podman and Docker as container runtimes for development and production environments.
· Collaborate with cross-functional teams to ensure smooth deployment pipelines and CI/CD integrations.
· Optimize infrastructure for cost, performance, and reliability within Azure cloud.
· Troubleshoot, monitor, and maintain system health, scalability, and performance.
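As a flavor of what day-to-day scripting against such clusters can look like, here is a minimal sketch using the official kubernetes Python client; the default namespace and local kubeconfig are assumptions, not details from this role:

    # Sketch: list pods and flag any that are not Running or Succeeded.
    # Assumes a kubeconfig is available locally (e.g., for an AKS/K3s cluster).
    from kubernetes import client, config

    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()

    for pod in v1.list_namespaced_pod(namespace="default").items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            print(f"ATTENTION: {pod.metadata.name} is in phase {phase}")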
Required Skills & Experience:
· Strong hands-on experience with Kubernetes (AKS / K3s) cluster orchestration.
· Proficiency in Terraform or OpenTofu for infrastructure as code.
· Experience with Helm and ArgoCD for application deployment and GitOps.
· Solid understanding of Docker / Podman container runtimes.
· Cloud expertise in Azure with experience deploying and scaling workloads.
· Familiarity with CI/CD pipelines, monitoring, and logging frameworks.
· Knowledge of best practices around cloud security, scalability, and high availability.
Preferred Qualifications:
· Contributions to open-source projects under Apache 2.0 / MPL 2.0 licenses.
· Experience working in global distributed teams across CST/PST time zones.
· Strong problem-solving skills and ability to work independently in a fast-paced environment.
Job Summary:
We are seeking a highly skilled and proactive DevOps Engineer with 4+ years of experience to join our dynamic team. This role requires strong technical expertise across cloud infrastructure, CI/CD pipelines, container orchestration, and infrastructure as code (IaC). The ideal candidate should also have direct client-facing experience and a proactive approach to managing both internal and external stakeholders.
Key Responsibilities:
- Collaborate with cross-functional teams and external clients to understand infrastructure requirements and implement DevOps best practices.
- Design, build, and maintain scalable cloud infrastructure on AWS (EC2, S3, RDS, ECS, etc.).
- Develop and manage infrastructure using Terraform or CloudFormation.
- Manage and orchestrate containers using Docker and Kubernetes (EKS).
- Implement and maintain CI/CD pipelines using Jenkins or GitHub Actions.
- Write robust automation scripts using Python and Shell scripting (see the sketch after this list).
- Monitor system performance and availability, and ensure high uptime and reliability.
- Execute and optimize SQL queries for MSSQL and PostgreSQL databases.
- Maintain clear documentation and provide technical support to stakeholders and clients.
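As a minimal sketch of the Python automation this role calls for, here is a boto3 example that lists running EC2 instances; the region and the Name-tag convention are assumptions:

    # Sketch: report running EC2 instances and their Name tags with boto3.
    import boto3

    ec2 = boto3.client("ec2", region_name="ap-south-1")  # region is an assumption
    resp = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for reservation in resp["Reservations"]:
        for inst in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            print(inst["InstanceId"], tags.get("Name", "<unnamed>"))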
Required Skills:
- Minimum 4+ years of experience in a DevOps or related role.
- Proven experience in client-facing engagements and communication.
- Strong knowledge of AWS services – EC2, S3, RDS, ECS, etc.
- Proficiency in Infrastructure as Code using Terraform or CloudFormation.
- Hands-on experience with Docker and Kubernetes (EKS).
- Strong experience in setting up and maintaining CI/CD pipelines with Jenkins or GitHub.
- Solid understanding of SQL and working experience with MSSQL and PostgreSQL.
- Proficient in Python and Shell scripting.
Preferred Qualifications:
- AWS Certifications (e.g., AWS Certified DevOps Engineer) are a plus.
- Experience working in Agile/Scrum environments.
- Strong problem-solving and analytical skills.
🚀 We’re Hiring: Senior Python Backend Developer 🚀
📍 Location: Baner, Pune (Work from Office)
💰 Compensation: ₹6 LPA
🕑 Experience Required: Minimum 2 years as a Python Backend Developer
About Us
Foto Owl AI is a fast-growing product-based company headquartered in Baner, Pune.
We specialize in:
⚡ Hyper-personalized fan engagement
🤖 AI-powered real-time photo sharing
📸 Advanced media asset management
What You’ll Do
As a Senior Python Backend Developer, you’ll play a key role in designing, building, and deploying scalable backend systems that power our cutting-edge platforms.
Architect and develop complex, secure, and scalable backend services
Build and maintain APIs & data pipelines for web, mobile, and AI-driven platforms
Optimize SQL & NoSQL databases for high performance
Manage AWS infrastructure (EC2, S3, RDS, Lambda, CloudWatch, etc.)
Implement observability, monitoring, and security best practices
Collaborate cross-functionally with product & AI teams
Mentor junior developers and conduct code reviews
Troubleshoot and resolve production issues with efficiency
What We’re Looking For
✅ Strong expertise in Python backend development
✅ Solid knowledge of Data Structures & Algorithms
✅ Hands-on experience with SQL (PostgreSQL/MySQL) and NoSQL (MongoDB, Redis, etc.)
✅ Proficiency in RESTful APIs & Microservice design
✅ Knowledge of Docker, Kubernetes, and cloud-native systems
✅ Experience managing AWS-based deployments
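As one hedged example of an AWS-backed primitive a real-time photo-sharing platform might lean on, here is a boto3 sketch that issues a time-limited S3 download link; the bucket and object key are hypothetical:

    # Sketch: generate a presigned URL so a fan can fetch a photo for 15 minutes.
    import boto3

    s3 = boto3.client("s3")
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "event-photos", "Key": "match-42/shot-001.jpg"},  # hypothetical names
        ExpiresIn=900,  # seconds the link stays valid
    )
    print(url)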
Why Join Us?
At Foto Owl AI, you’ll be part of a passionate team building world-class media tech products used in sports, events, and fan engagement platforms. If you love scalable backend systems, real-time challenges, and AI-driven products, this is the place for you.

A domestic client, 15 years in business, in the logistics-tech industry
Responsibilities:
- Work with product owners, managers, and customers to explore requirements and translate use-cases into functional requirements.
- Collaborate with cross-functional teams and architects to design, develop, test, and deploy web applications using ASP.NET Core, .NET Core, and C#.
- Build scalable, reliable, clean code and unit tests for .NET applications.
- Help maintain code quality, organization, and automation by performing code reviews, refactoring, and unit testing.
- Develop integration with third-party APIs and external applications to deliver robust and scalable applications.
- Maintain services, enhance, optimize, and upgrade existing systems.
- Contribute to architectural and design discussions and document design decisions.
- Effectively participate in planning meetings, retrospectives, daily stand-ups, and other meetings as part of the software development process.
- Contribute to the continuous improvement of development processes and practices.
- Resolve production issues, participate in production incident analysis by conducting effective troubleshooting and RCA within the SLA.
- Work with Operations teams on product deployment, issue resolution, and support.
- Mentor junior developers and assist in their professional growth. Stay updated with the latest technologies and best practices.
Requirements:
- 5+ years of experience with proficiency in C# language.
- Bachelor's or master's degree in computer science or a related field.
- Good working experience in .NET Framework, .NET Core, ASP.NET Core, and C#.
- Good understanding of OOP and design patterns - SOLID, Integration, REST, Micro-services, and cloud-native designs.
- Understanding of fundamental design principles behind building and scaling distributed applications.
- Knack for writing clean, readable, reusable, and testable C# code.
- Strong knowledge of data structures and collections in C#.
- Good knowledge of front-end development languages, including JavaScript, HTML5 and CSS.
- Experience in designing relational DB schemas and in PL/SQL query performance tuning.
- Experience working in an Agile environment following Scrum/SAFe methodologies.
- Knowledge of CI/CD, DevOps, containers, and automation frameworks.
- Experience in developing and deploying on at least one cloud environment.
- Excellent problem-solving, communication, and collaboration skills.
- Ability to work independently and effectively in a fast-paced environment.
🚀 Hiring: Dot net full stack at Deqode
⭐ Experience: 8+ Years
📍 Location: Bangalore | Mumbai | Pune | Gurgaon | Chennai | Hyderabad
⭐ Work Mode: Hybrid
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
We’re looking for an experienced Dotnet Full Stack Developer with strong hands-on skills in ReactJS, .NET Core, and Azure Cloud Services (Azure Functions, Azure SQL, APIM, etc.).
⭐ Must-Have Skills:
➡️ Design and develop scalable web applications using ReactJS, C#, and .NET Core.
➡️ Azure (Functions, App Services, SQL, APIM, Service Bus)
➡️ Familiarity with DevOps practices, CI/CD pipelines, Docker, and Kubernetes.
➡️ Advanced experience in Entity Framework Core and SQL Server.
➡️ Expertise in RESTful API development and microservices.
Job Summary:
We are seeking a highly skilled and proactive DevOps Engineer with 4+ years of experience to join our dynamic team. This role requires strong technical expertise across cloud infrastructure, CI/CD pipelines, container orchestration, and infrastructure as code (IaC). The ideal candidate should also have direct client-facing experience and a proactive approach to managing both internal and external stakeholders.
Key Responsibilities:
- Collaborate with cross-functional teams and external clients to understand infrastructure requirements and implement DevOps best practices.
- Design, build, and maintain scalable cloud infrastructure on AWS (EC2, S3, RDS, ECS, etc.).
- Develop and manage infrastructure using Terraform or CloudFormation.
- Manage and orchestrate containers using Docker and Kubernetes (EKS).
- Implement and maintain CI/CD pipelines using Jenkins or GitHub Actions.
- Write robust automation scripts using Python and Shell scripting.
- Monitor system performance and availability, and ensure high uptime and reliability.
- Execute and optimize SQL queries for MSSQL and PostgreSQL databases.
- Maintain clear documentation and provide technical support to stakeholders and clients.
Required Skills:
- Minimum 4+ years of experience in a DevOps or related role.
- Proven experience in client-facing engagements and communication.
- Strong knowledge of AWS services – EC2, S3, RDS, ECS, etc.
- Proficiency in Infrastructure as Code using Terraform or CloudFormation.
- Hands-on experience with Docker and Kubernetes (EKS).
- Strong experience in setting up and maintaining CI/CD pipelines with Jenkins or GitHub.
- Solid understanding of SQL and working experience with MSSQL and PostgreSQL.
- Proficient in Python and Shell scripting.
Preferred Qualifications:
- AWS Certifications (e.g., AWS Certified DevOps Engineer) are a plus.
- Experience working in Agile/Scrum environments.
- Strong problem-solving and analytical skills.
Work Mode & Timing:
- Hybrid – Pune-based candidates preferred.
- Working hours: 12:30 PM to 9:30 PM IST to align with client time zones.
Job Title : Senior Python Developer – Product Engineering
Experience : 5 to 8 Years
Location : Pune, India (Hybrid – 3-4 days WFO, 1-2 days WFH)
Employment Type : Full-time
Commitment : Minimum 3 years (with end-of-term bonus)
Openings : 2 positions
- Junior : 3 to 5 Years
- Senior : 5 to 8 Years
Mandatory Skills : Python 3.x, REST APIs, multithreading, Celery, encryption (OpenSSL/cryptography.io), PostgreSQL/Redis, Docker/K8s, secure coding
Nice to Have : Experience with EFSS/DRM/DLP platforms, delta sync, file systems, LDAP/AD/SIEM integrations
🎯 Roles & Responsibilities :
- Design and develop backend services for DRM enforcement, file synchronization, and endpoint telemetry.
- Build scalable Python-based APIs interacting with file systems, agents, and enterprise infra.
- Implement encryption workflows, secure file handling, delta sync, and file versioning.
- Integrate with 3rd-party platforms: LDAP, AD, DLP, CASB, SIEM.
- Collaborate with DevOps to ensure high availability and performance of hybrid deployments.
- Participate in code reviews, architectural discussions, and mentor junior developers.
- Troubleshoot production issues and continuously optimize performance.
✅ Required Skills :
- 5 to 8 years of hands-on experience in Python 3.x development.
- Expertise in REST APIs, Celery, multithreading, and file I/O.
- Proficient in encryption libraries (OpenSSL, cryptography.io) and secure coding (see the sketch after this list).
- Experience with PostgreSQL, Redis, SQLite, and Linux internals.
- Strong command over Docker, Kubernetes, CI/CD, and Git workflows.
- Ability to write clean, testable, and scalable code in production environments.
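A minimal sketch of an encryption workflow using the cryptography library named above; Fernet gives symmetric, authenticated encryption, and key management (KMS, secret stores) is deliberately out of scope here:

    # Sketch: encrypt and decrypt file bytes with Fernet (cryptography.io).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # in production, load the key from a secret store
    f = Fernet(key)

    token = f.encrypt(b"sensitive document bytes")
    assert f.decrypt(token) == b"sensitive document bytes"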
➕ Preferred Skills :
- Background in DRM, EFSS, DLP, or enterprise security platforms.
- Familiarity with file diffing, watermarking, or agent-based tools.
- Knowledge of compliance frameworks (GDPR, DPDP, RBI-CSF) is a plus.
Full Stack Developer (Node.js & React)
Location: Pune, India (Local or Ready to Relocate)
Employment Type: 6–8 Month Contract (Potential Conversion to FTE Based on Performance)
About the Role
We are seeking a highly skilled Full Stack Developer with expertise in Node.js and React to join our dynamic team in Pune. This role involves designing, developing, and deploying scalable web applications. You will collaborate with cross-functional teams to deliver high-impact solutions while adhering to best practices in coding, testing, and security.
Key Responsibilities
- Develop and maintain server-side applications using Node.js (Express/NestJS) and client-side interfaces with React.js (Redux/Hooks).
- Architect RESTful APIs and integrate with databases (SQL/NoSQL) and third-party services.
- Implement responsive UI/UX designs with modern front-end libraries (e.g., Material-UI, Tailwind CSS).
- Write unit/integration tests (Jest, Mocha, React Testing Library) and ensure code quality via CI/CD pipelines.
- Collaborate with product managers, designers, and QA engineers in an Agile environment.
- Troubleshoot performance bottlenecks and optimize applications for scalability.
- Document technical specifications and deployment processes.
Required Skills & Qualifications
- Experience: 3+ years in full-stack development with Node.js and React.
- Backend Proficiency:
- Strong knowledge of Node.js, Express, or NestJS.
- Experience with databases (PostgreSQL, MongoDB, Redis).
- API design (REST/GraphQL) and authentication (JWT/OAuth).
- Frontend Proficiency:
- Expertise in React.js (Functional Components, Hooks, Context API).
- State management (Redux, Zustand) and modern CSS frameworks.
- DevOps & Tools:
- Git, Docker, AWS/Azure, and CI/CD tools (Jenkins/GitHub Actions).
- Testing frameworks (Jest, Cypress, Mocha).
- Soft Skills:
- Problem-solving mindset and ability to work in a fast-paced environment.
- Excellent communication and collaboration skills.
- Location: Based in Pune or willing to relocate immediately.
Preferred Qualifications
- Experience with TypeScript, Next.js, or serverless architectures.
- Knowledge of microservices, message brokers (Kafka/RabbitMQ), or container orchestration (Kubernetes).
- Familiarity with Agile/Scrum methodologies.
- Contributions to open-source projects or a strong GitHub portfolio.
What We Offer
- Competitive Contract Compensation with timely payouts.
- Potential for FTE Conversion: Performance-based path to a full-time role.
- Hybrid Work Model: Flexible in-office (Pune) and remote options.
- Learning Opportunities: Access to cutting-edge tools and mentorship.
- Collaborative Environment: Work with industry experts on innovative projects.
Apply Now!
Ready to make an impact? Send your resume and GitHub/Portfolio links with the subject line:
"Full Stack Developer (Node/React) - Pune".
Local candidates or those relocating to Pune will be prioritized. Applications without portfolios will not be considered.
Equal Opportunity Employer
We celebrate diversity and are committed to creating an inclusive environment for all employees.
Job Title: Java Full Stack Developer
Experience: 6+ Years
Locations: Bangalore, Mumbai, Pune, Gurgaon
Work Mode: Hybrid
Notice Period: Immediate Joiners Preferred / Candidates Who Have Completed Their Notice Period
About the Role
We are looking for a highly skilled and experienced Java Full Stack Developer with a strong command over backend technologies and modern frontend frameworks. The ideal candidate will have deep experience with Java, ReactJS, and DevOps tools like Jenkins, Docker, and basic Kubernetes knowledge. You’ll be contributing to complex software solutions across industries, collaborating with cross-functional teams, and deploying production-grade systems in a cloud-native, CI/CD-driven environment.
Key Responsibilities
- Design and develop scalable web applications using Java (Spring Boot) and ReactJS
- Collaborate with UX/UI designers and backend developers to implement robust, efficient front-end interfaces
- Develop and maintain CI/CD pipelines using Jenkins, ensuring high-quality software delivery
- Containerize applications using Docker and ensure smooth deployment and orchestration using Kubernetes (basic level)
- Write clean, modular, and testable code and participate in code reviews
- Troubleshoot and resolve performance, reliability, and functional issues in production
- Work in Agile teams and participate in daily stand-ups, sprint planning, and retrospective meetings
- Ensure all security, compliance, and performance standards are met in the development lifecycle
Mandatory Skills
- Backend: Java, Spring Boot
- Frontend: ReactJS
- DevOps Tools: Jenkins, Docker
- Containers & Orchestration: Basic knowledge of Kubernetes
- Strong understanding of RESTful services and APIs
- Familiarity with Git and version control workflows
- Good understanding of SDLC, Agile/Scrum methodologies
Minimum requirements
5+ years of industry software engineering experience (internships and co-ops do not count)
Strong coding skills in any programming language (we understand new languages can be learned on the job, so our interview process is language agnostic)
Strong collaboration skills: you can work across workstreams within your team and contribute to your peers’ success
Ability to thrive with a high level of autonomy and responsibility, and an entrepreneurial mindset
Interest in working as a generalist across varying technologies and stacks to solve problems and delight both internal and external users
Preferred Qualifications
Experience with large-scale financial tracking systems
Good understanding and practical knowledge of cloud-based services (e.g., gRPC, GraphQL, Docker/Kubernetes, cloud providers such as AWS, etc.)
Lead DevSecOps Engineer
Location: Pune, India (In-office) | Experience: 3–5 years | Type: Full-time
Apply here → https://lnk.ink/CLqe2
About FlytBase:
FlytBase is a Physical AI platform powering autonomous drones and robots across industrial sites. Our software enables 24/7 operations in critical infrastructure like solar farms, ports, oil refineries, and more.
We're building intelligent autonomy — not just automation — and security is core to that vision.
What You’ll Own
You’ll be leading and building the backbone of our AI-native drone orchestration platform — used by global industrial giants for autonomous operations.
Expect to:
- Design and manage multi-region, multi-cloud infrastructure (AWS, Kubernetes, Terraform, Docker)
- Own infrastructure provisioning through GitOps, Ansible, Helm, and IaC
- Set up observability stacks (Prometheus, Grafana) and write custom alerting rules
- Build for Zero Trust security — logs, secrets, audits, access policies
- Lead incident response, postmortems, and playbooks to reduce MTTR
- Automate and secure CI/CD pipelines with SAST, DAST, image hardening
- Script your way out of toil using Python, Bash, or LLM-based agents (see the sketch after this list)
- Work alongside dev, platform, and product teams to ship secure, scalable systems
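A small example of that scripting mindset: querying Prometheus's HTTP API from Python to inspect an error-rate SLI. The Prometheus URL and the PromQL expression are illustrative assumptions:

    # Sketch: pull a 5xx error-rate series from Prometheus's instant-query API.
    import requests

    PROM = "http://prometheus.internal:9090"  # hypothetical endpoint
    query = 'sum(rate(http_requests_total{status=~"5.."}[5m]))'

    resp = requests.get(f"{PROM}/api/v1/query", params={"query": query}, timeout=10)
    resp.raise_for_status()
    for series in resp.json()["data"]["result"]:
        print(series["metric"], series["value"])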
What We’re Looking For:
You’ve probably done a lot of this already:
- 3–5+ years in DevOps / DevSecOps for high-availability SaaS or product infra
- Hands-on with Kubernetes, Terraform, Docker, and cloud-native tooling
- Strong in Linux internals, OS hardening, and network security
- Built and owned CI/CD pipelines, IaC, and automated releases
- Written scripts (Python/Bash) that saved your team hours
- Familiar with SOC 2, ISO 27001, threat detection, and compliance work
Bonus if you’ve:
- Played with LLMs or AI agents to streamline ops, or built bots that monitor, patch, or auto-deploy.
What It Means to Be a Flyter
- AI-native instincts: You don’t just use AI — you think in it. Your terminal window has a co-pilot.
- Ownership without oversight: You own outcomes, not tasks. No one micromanages you here.
- Joy in complexity: Security + infra + scale = your happy place.
- Radical candor: You give and receive sharp feedback early — and grow faster because of it.
- Loops over lines: we prioritize continuous feedback, iteration, and learning over one-way execution or rigid, linear planning.
- H3: Happy. Healthy. High-Performing. We believe long-term performance stems from an environment where you feel emotionally fulfilled, physically well, and deeply motivated.
- Systems > Heroics: We value well-designed, repeatable systems over last-minute firefighting or one-off effort.
Perks:
▪ Unlimited leave & flexible hours
▪ Top-tier health coverage
▪ Budget for AI tools, courses
▪ International deployments
▪ ESOPs and high-agency team culture
Apply Here- https://lnk.ink/CLqe2
Job Title : Lead Java Developer (Backend)
Experience Required : 8 to 15 Years
Open Positions : 5
Location : Any major metro city (Bengaluru, Pune, Chennai, Kolkata, Hyderabad)
Work Mode : Open to Remote / Hybrid / Onsite
Notice Period : Immediate Joiner/30 Days or Less
About the Role :
- We are looking for experienced Lead Java Developers who bring not only strong backend development skills but also a product-oriented mindset and leadership capability.
- This is an opportunity to be part of high-impact digital transformation initiatives that go beyond writing code—you’ll help shape future-ready platforms and drive meaningful change.
- This role is embedded within a forward-thinking digital engineering team that thrives on co-innovation, lean delivery, and end-to-end ownership of platforms and products.
Key Responsibilities :
- Design, develop, and implement scalable backend systems using Java and Spring Boot.
- Collaborate with product managers, designers, and engineers to build intuitive and reliable digital products.
- Advocate and implement engineering best practices : SOLID principles, OOP, clean code, CI/CD, TDD/BDD.
- Lead Agile-based development cycles with a focus on speed, quality, and customer outcomes.
- Guide and mentor team members, fostering technical excellence and ownership.
- Utilize cloud platforms and DevOps tools to ensure performance and reliability of applications.
What We’re Looking For :
- Proven experience in Java backend development (Spring Boot, Microservices).
- 8+ Years of hands-on engineering experience with at least 2+ years in a Lead role.
- Familiarity with cloud platforms such as AWS, Azure, or GCP.
- Good understanding of containerization and orchestration tools like Docker and Kubernetes.
- Exposure to DevOps and Infrastructure as Code practices.
- Strong problem-solving skills and the ability to design solutions from first principles.
- Prior experience in product-based or startup environments is a big plus.
Ideal Candidate Profile :
- A tech enthusiast with a passion for clean code and scalable architecture.
- Someone who thrives in collaborative, transparent, and feedback-driven environments.
- A leader who takes ownership beyond individual deliverables to drive overall team and project success.
Interview Process
- Initial Technical Screening (via platform partner)
- Technical Interview with Engineering Team
- Client-facing Final Round
Additional Info :
- Targeting profiles from product/startup backgrounds.
- Strong preference for candidates with under 1 month of notice period.
- Interviews will be fast-tracked for qualified profiles.
📍 Position : Java Architect
📅 Experience : 10 to 15 Years
🧑💼 Open Positions : 3+
📍 Work Location : Bangalore, Pune, Chennai
💼 Work Mode : Hybrid
📅 Notice Period : Immediate joiners preferred; up to 1 month maximum
🔧 Core Responsibilities :
- Lead architecture design and development for scalable enterprise-level applications.
- Own and manage all aspects of technical development and delivery.
- Define and enforce best coding practices, architectural guidelines, and development standards.
- Plan and estimate the end-to-end technical scope of projects.
- Conduct code reviews, ensure CI/CD, and implement TDD/BDD methodologies.
- Mentor and lead individual contributors and small development teams.
- Collaborate with cross-functional teams, including DevOps, Product, and QA.
- Engage in high-level and low-level design (HLD/LLD), solutioning, and cloud-native transformations.
🛠️ Required Technical Skills :
- Strong hands-on expertise in Java, Spring Boot, Microservices architecture
- Experience with Kafka or similar messaging/event streaming platforms
- Proficiency in cloud platforms – AWS and Azure (must-have)
- Exposure to frontend technologies (nice-to-have)
- Solid understanding of HLD, system architecture, and design patterns
- Good grasp of DevOps concepts, Docker, Kubernetes, and Infrastructure as Code (IaC)
- Agile/Lean development, Pair Programming, and Continuous Integration practices
- Polyglot mindset is a plus (Scala, Golang, Python, etc.)
🚀 Ideal Candidate Profile :
- Currently working in a product-based environment
- Already functioning as an Architect or Principal Engineer
- Proven track record as an Individual Contributor (IC)
- Strong engineering fundamentals with a passion for scalable software systems
- No compromise on code quality, craftsmanship, and best practices
🧪 Interview Process :
- Round 1: Technical pairing round
- Rounds 2 & 3: Technical rounds with panel (code pairing + architecture)
- Final Round: HR and offer discussion
🚀 Hiring: Azure DevOps Engineer – Immediate Joiners Only! 🚀
📍 Location: Pune (Hybrid)
💼 Experience: 5+ Years
🕒 Mode of Work: Hybrid
Are you a proactive and skilled Azure DevOps Engineer looking for your next challenge? We are hiring immediate joiners to join our dynamic team! If you are passionate about CI/CD, cloud automation, and SRE best practices, we want to hear from you.
🔹 Key Skills Required:
✅ Cloud Expertise: Proficiency in any cloud (Azure preferred)
✅ CI/CD Pipelines: Hands-on experience in designing and managing pipelines
✅ Containers & IaC: Strong knowledge of Docker, Terraform, Kubernetes
✅ Incident Management: Quick issue resolution and RCA
✅ SRE & Observability: Experience with SLI/SLO/SLA, monitoring, tracing, logging (see the sketch after this list)
✅ Programming: Proficiency in Python, Golang
✅ Performance Optimization: Identifying and resolving system bottlenecks
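For context on the SLI/SLO point above, a minimal sketch of error-budget burn-rate arithmetic; the SLO target and traffic numbers are made up for illustration:

    # Sketch: with a 99.9% availability SLO, the error budget is 0.1%.
    # Burn rate = observed error ratio / error budget.
    slo = 0.999
    error_budget = 1 - slo                            # 0.001

    total_requests, failed_requests = 120_000, 540    # illustrative counts
    error_ratio = failed_requests / total_requests    # 0.0045
    burn_rate = error_ratio / error_budget            # 4.5

    print(f"burning the error budget at {burn_rate:.1f}x")

A sustained burn rate well above 1 means the SLO will be exhausted early, which is what multi-window burn-rate alerts are built around.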
JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization with the mission to democratize mixed reality for India and the world. We make products at the intersection of hardware, software, content, and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR, and AI, with notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise space.
Mon-Fri role, in office, with excellent perks and benefits!
Position Overview
We are seeking a Software Architect to lead the design and development of high-performance robotics and AI software stacks utilizing NVIDIA technologies. This role will focus on defining scalable, modular, and efficient architectures for robot perception, planning, simulation, and embedded AI applications. You will collaborate with cross-functional teams to build next-generation autonomous systems.
Key Responsibilities:
1. System Architecture & Design
● Define scalable software architectures for robotics perception, navigation, and AI-driven decision-making.
● Design modular and reusable frameworks that leverage NVIDIA’s Jetson, Isaac ROS, Omniverse, and CUDA ecosystems.
● Establish best practices for real-time computing, GPU acceleration, and edge AI inference.
2. Perception & AI Integration
● Architect sensor fusion pipelines using LIDAR, cameras, IMUs, and radar with DeepStream, TensorRT, and ROS2.
● Optimize computer vision, SLAM, and deep learning models for edge deployment on Jetson Orin and Xavier.
● Ensure efficient GPU-accelerated AI inference for real-time robotics applications.
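As a toy illustration of the inference step above (framework-level only; TensorRT engine building and Jetson specifics are beyond a sketch), here is a PyTorch snippet that runs a placeholder model in inference mode, on GPU when one is present:

    # Sketch: run a stand-in model in inference mode, preferring CUDA if available.
    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torch.nn.Linear(8, 2).to(device).eval()  # stand-in for a real perception net

    with torch.inference_mode():
        out = model(torch.randn(1, 8, device=device))
    print(out.shape)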
3. Embedded & Real-Time Systems
● Design high-performance embedded software stacks for real-time robotic control and autonomy.
● Utilize NVIDIA CUDA, cuDNN, and TensorRT to accelerate AI model execution on Jetson platforms.
● Develop robust middleware frameworks to support real-time robotics applications in ROS2 and Isaac SDK.
4. Robotics Simulation & Digital Twins
● Define architectures for robotic simulation environments using NVIDIA Isaac Sim & Omniverse.
● Leverage synthetic data generation (Omniverse Replicator) for training AI models.
● Optimize sim-to-real transfer learning for AI-driven robotic behaviors.
5. Navigation & Motion Planning
● Architect GPU-accelerated motion planning and SLAM pipelines for autonomous robots.
● Optimize path planning, localization, and multi-agent coordination using Isaac ROS Navigation.
● Implement reinforcement learning-based policies using Isaac Gym.
6. Performance Optimization & Scalability
● Ensure low-latency AI inference and real-time execution of robotics applications.
● Optimize CUDA kernels and parallel processing pipelines for NVIDIA hardware.
● Develop benchmarking and profiling tools to measure software performance on edge AI devices.
Required Qualifications:
● Master’s or Ph.D. in Computer Science, Robotics, AI, or Embedded Systems.
● Extensive experience (7+ years) in software development, with at least 3-5 years focused on architecture and system design, especially for robotics or embedded systems.
● Expertise in CUDA, TensorRT, DeepStream, PyTorch, TensorFlow, and ROS2.
● Experience in NVIDIA Jetson platforms, Isaac SDK, and GPU-accelerated AI.
● Proficiency in programming languages such as C++, Python, or similar, with deep understanding of low-level and high-level design principles.
● Strong background in robotic perception, planning, and real-time control.
● Experience with cloud-edge AI deployment and scalable architectures.
Preferred Qualifications
● Hands-on experience with NVIDIA DRIVE, NVIDIA Omniverse, and Isaac Gym
● Knowledge of robot kinematics, control systems, and reinforcement learning
● Expertise in distributed computing, containerization (Docker), and cloud robotics
● Familiarity with automotive, industrial automation, or warehouse robotics
● Experience designing architectures for autonomous systems or multi-robot systems.
● Familiarity with cloud-based solutions, edge computing, or distributed computing for robotics
● Experience with microservices or service-oriented architecture (SOA)
● Knowledge of machine learning and AI integration within robotic systems
● Knowledge of testing on edge devices with HIL and simulations (Isaac Sim, Gazebo, V-REP etc.)
JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization with the mission to democratize mixed reality for India and the world. We make products at the intersection of hardware, software, content, and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR, and AI, with notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise space.
Mon-Fri, in-office role with excellent perks and benefits!
Key Responsibilities:
1. Design, develop, and maintain backend services and APIs using Node.js, Python, or Java.
2. Build and implement scalable and robust microservices and integrate API gateways.
3. Develop and optimize NoSQL database structures and queries (e.g., MongoDB, DynamoDB).
4. Implement real-time data pipelines using Kafka (see the sketch after this list).
5. Collaborate with front-end developers to ensure seamless integration of backend services.
6. Write clean, reusable, and efficient code following best practices, including design patterns.
7. Troubleshoot, debug, and enhance existing systems for improved performance.
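A minimal sketch of the Kafka pipeline responsibility above, using the confluent-kafka Python client; the broker address and topic name are placeholders:

    # Sketch: produce one event and consume it back.
    from confluent_kafka import Producer, Consumer

    producer = Producer({"bootstrap.servers": "localhost:9092"})  # placeholder broker
    producer.produce("events", value=b'{"type": "page_view"}')
    producer.flush()

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "backend-service",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["events"])
    msg = consumer.poll(5.0)
    if msg is not None and msg.error() is None:
        print(msg.value())
    consumer.close()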
Mandatory Skills:
1. Proficiency in at least one backend technology: Node.js, Python, or Java.
2. Strong experience in:
i. Microservices architecture,
ii. API gateways,
iii. NoSQL databases (e.g., MongoDB, DynamoDB),
iv. Kafka
v. Data structures (e.g., arrays, linked lists, trees).
3. Frameworks:
i. If Java: Spring Framework for backend development.
ii. If Python: FastAPI/Django frameworks for AI applications.
iii. If Node: Express.js for Node.js development.
Good to Have Skills:
1. Experience with Kubernetes for container orchestration.
2. Familiarity with in-memory databases like Redis or Memcached.
3. Frontend skills: Basic knowledge of HTML, CSS, JavaScript, or frameworks like React.js.
Level of skills and experience:
5 years of hands-on experience using Python, Spark, and SQL (see the sketch after this list).
Experienced in AWS Cloud usage and management.
Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
Experience using various ML models and frameworks such as XGBoost, Lightgbm, Torch.
Experience with orchestrators such as Airflow and Kubeflow.
Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
Fundamental understanding of Parquet, Delta Lake and other data file formats.
Proficiency on an IaC tool such as Terraform, CDK or CloudFormation.
Strong written and verbal English communication skills; proficient in communicating with non-technical stakeholders.
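A minimal PySpark sketch of the Python/Spark/SQL combination listed above; the Parquet path is a placeholder:

    # Sketch: read a Parquet dataset and query it with Spark SQL.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("example").getOrCreate()
    df = spark.read.parquet("s3://bucket/path/to/table")  # placeholder path
    df.createOrReplaceTempView("events")
    spark.sql("SELECT date, COUNT(*) AS n FROM events GROUP BY date").show()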
Job Description
Phonologies is seeking a Senior Data Engineer to lead data engineering efforts for developing and deploying generative AI and large language models (LLMs). The ideal candidate will excel in building data pipelines, fine-tuning models, and optimizing infrastructure to support scalable AI systems for enterprise applications.
Role & Responsibilities
- Data Pipeline Management: Design and manage pipelines for AI model training, ensuring efficient data ingestion, storage, and transformation for real-time deployment.
- LLM Fine-Tuning & Model Lifecycle: Fine-tune LLMs on domain-specific data, and oversee the model lifecycle using tools like MLFlow and Weights & Biases (see the sketch after this list).
- Scalable Infrastructure: Optimize infrastructure for large-scale data processing and real-time LLM performance, leveraging containerization and orchestration in hybrid/cloud environments.
- Data Management: Ensure data quality, security, and compliance, with workflows for handling sensitive and proprietary datasets.
- Continuous Improvement & MLOps: Apply MLOps/LLMOps practices for automation, versioning, and lifecycle management, while refining tools and processes for scalability and performance.
- Collaboration: Work with data scientists, engineers, and product teams to integrate AI solutions and communicate technical capabilities to business stakeholders.
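A minimal sketch of model-lifecycle tracking with MLflow, one of the tools named above; the experiment, parameter, and metric names are illustrative:

    # Sketch: record a fine-tuning run's parameters and an evaluation metric.
    import mlflow

    mlflow.set_experiment("llm-finetune")  # hypothetical experiment name
    with mlflow.start_run():
        mlflow.log_param("base_model", "some-7b-model")
        mlflow.log_param("learning_rate", 2e-5)
        mlflow.log_metric("eval_loss", 1.23)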
Preferred Candidate Profile
- Experience: 5+ years in data engineering, focusing on AI/ML infrastructure, LLM fine-tuning, and deployment.
- Technical Skills: Advanced proficiency in Python, SQL, and distributed data tools.
- Model Management: Hands-on experience with MLFlow, Weights & Biases, and model lifecycle management.
- AI & NLP Expertise: Familiarity with LLMs (e.g., GPT, BERT) and NLP frameworks like Hugging Face Transformers.
- Cloud & Infrastructure: Strong skills with AWS, Azure, Google Cloud, Docker, and Kubernetes.
- MLOps/LLMOps: Expertise in versioning, CI/CD, and automating AI workflows.
- Collaboration & Communication: Proven ability to work with cross-functional teams and explain technical concepts to non-technical stakeholders.
- Education: Degree in Computer Science, Data Engineering, or related field.
Perks and Benefits
- Competitive Compensation: INR 20L to 30L per year.
- Innovative Work Environment for Personal Growth: Work with cutting-edge AI and data engineering tools in a collaborative setting, for continuous learning in data engineering and AI.
Availability: Full time
Location: Pune, India
Experience: 4-6 years
Tvarit Solutions Private Limited (a wholly owned subsidiary of TVARIT GmbH, Germany). TVARIT provides software to reduce manufacturing waste like scrap, energy, and machine downtime using its patented technology. With its software products and a highly competent team from renowned universities, TVARIT has gained customer trust across 4 continents within a short span of 5 years. TVARIT was ranked among the top 8 of 490 AI companies by the European Data Incubator, apart from many more awards from the German government and industrial organizations, making TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a passionate Full Stack Developer Level 2 to join our technology team in Pune. You will be responsible for handling operations, design, development, testing, leading the software development team and working toward infrastructure development that will support the company’s solutions. You will get an opportunity to work closely on projects that will involve the automation of the manufacturing process.
Key responsibilities
- Creating Plugins for Open-Source framework Grafana using React & Golang.
- Develop pixel-perfect implementation of the front end using React.
- Design efficient DB interaction to optimize performance.
- Interact with and build Python APIs.
- Collaborate across teams and lead/train the junior developers.
- Design and maintain functional requirement documents and guides.
- Get feedback from, and build solutions for, users and customers.
Must have worked on these technologies:
- 2+ years of experience working with React-Typescript on a production level
- Experience with API creation using node.js or Python
- GitHub or any other version control system
- Have worked with a Linux/Unix-based operating system (Ubuntu, Debian, macOS, etc.)
Good to have experience:
- Python-based backend technologies, relational and non-relational databases, and Python web frameworks (Django or Flask)
- Experience with the Go programming language
- Experience working with Grafana, or on any other micro frontend architecture framework
- Experience with Docker
- Leading a team for at least a year
Benefits and perks:
- Culture of innovation, creativity, learning, and even failure, we believe in bringing out the best in you.
- Progressive leave policy for effective work-life balance.
- Get mentored by highly qualified internal resource groups and opportunities to avail industry-driven mentorship programs.
- Multicultural peer groups and supportive workplace policies.
- Work from beaches, hills, mountains, and many more with the yearly workcation program; we believe in mixing elements of vacation and work.
What's it like to work for a startup?
Working for TVARIT (deep-tech German IT Startup) can offer you a unique blend of innovation, collaboration, and growth opportunities. But it's essential to approach it with a willingness to adapt and thrive in a dynamic environment.
If this position sparked your interest, do apply today!
By submitting my documents for the recruitment process, I agree that my data will be processed for the purpose of the recruitment process and stored for an additional 6 months after the process is completed. Without your consent, we unfortunately cannot consider your documents for the recruitment process. You can revoke your consent at any time. Further information on how we process your data can be found in our privacy policy at the following link: https://tvarit.com/privacy-policy/
Role & Responsibilities
- The DevOps Engineer will work on the implementation and management of DevOps tools and technologies.
- Create and support advanced pipelines using Gitlab.
- Create and support advanced container and serverless environments.
- Deploy cloud infrastructure using Terraform and CloudFormation templates.
- Implement deployments to OpenShift Container Platform, Amazon ECS and EKS
- Troubleshoot containerized builds and deployments
- Implement processes and automations for migrating between OpenShift, AKS and EKS
- Implement CI/CD automations.
Required Skillsets
- 3-5 years of software engineering experience with cloud-based architectures.
- Deep understanding of Kubernetes and its architecture.
- Mastery of cloud security engineering tools, techniques, and procedures.
- Experience with AWS services such as Amazon S3, EKS, ECS, DynamoDB, AWS Lambda, API Gateway, etc.
- Experience with designing and supporting infrastructure via Infrastructure-as-Code in AWS, via CDK, CloudFormation Templates, Terraform or other toolset.
- Experienced with tools like Jenkins, GitHub, Puppet, or similar toolsets.
- Experienced with monitoring tools like CloudWatch, New Relic, Grafana, Splunk, etc.
- Excellence in verbal and written communication, and in working collaboratively with a variety of colleagues and clients in a remote development environment.
- Proven track record in cloud computing systems and enterprise architecture and security
Who are we looking for?
We are looking for a Senior Data Scientist who will design and develop data-driven solutions using state-of-the-art methods. You should be someone with strong and proven experience in working on data-driven solutions. If you're enthusiastic about transforming business requirements into insightful data-driven solutions, you are welcome to join our fast-growing team to unlock your best potential.
Job Summary
- Supporting company mission by understanding complex business problems through data-driven solutions.
- Designing and developing machine learning pipelines in Python and deploying them in AWS/GCP, ... (see the sketch after this list)
- Developing end-to-end ML production-ready solutions and visualizations.
- Analyse large sets of time-series industrial data from various sources, such as production systems, sensors, and databases to draw actionable insights and present them via custom dashboards.
- Communicating complex technical concepts and findings to non-technical stakeholders of the projects
- Implementing the prototypes using suitable statistical tools and artificial intelligence algorithms.
- Preparing high-quality research papers and participating in conferences to present and report experimental results and research findings.
- Carrying out research in collaboration with internal and external teams, and facilitating reviews of ML systems for innovative ideas to prototype new models.
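A minimal scikit-learn sketch of the pipeline work summarized above, using random placeholder data in place of real sensor features:

    # Sketch: scaling plus a regressor, wrapped in a single fit/predict pipeline.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    X = np.random.rand(200, 8)  # stand-in for time-series sensor features
    y = np.random.rand(200)     # stand-in for a production target

    pipe = Pipeline([
        ("scale", StandardScaler()),
        ("model", RandomForestRegressor(n_estimators=100, random_state=0)),
    ])
    pipe.fit(X, y)
    print(pipe.predict(X[:3]))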
Qualification and experience
- B.Tech/Master's/Ph.D. in computer science, electrical engineering, mathematics, data science, or related fields.
- 5+ years of professional experience in the field of machine learning, and data science.
- Experience with large-scale Time-series data-based production code development is a plus.
Skills and competencies
- Familiarity with Docker, and ML Libraries like PyTorch, sklearn, pandas, SQL, and Git is a must.
- Ability to work on multiple projects. Must have strong design and implementation skills.
- Ability to conduct research based on complex business problems.
- Strong presentation skills and the ability to collaborate in a multi-disciplinary team.
- Must have programming experience in Python.
- Excellent English communication skills, both written and verbal.
Benefits and Perks
- Culture of innovation, creativity, learning, and even failure, we believe in bringing out the best in you.
- Progressive leave policy for effective work-life balance.
- Get mentored by highly qualified internal resource groups and opportunity to avail industry-driven mentorship program, as we believe in empowering people.
- Multicultural peer groups and supportive workplace policies.
- Work from beaches, hills, mountains, and many more with the yearly workcation program; we believe in mixing elements of vacation and work.
Hiring Process
- Call with Talent Acquisition Team: After application screening, a first-level screening with the talent acquisition team to understand the candidate's goals and alignment with the job requirements.
- First Round: Technical round 1 to gauge your domain knowledge and functional expertise.
- Second Round: In-depth technical round and discussion about the departmental goals, your role, and expectations.
- Final HR Round: Culture fit round and compensation discussions.
- Offer: Congratulations you made it!
If this position sparked your interest, apply now to initiate the screening process.
· IMMEDIATE JOINER
Professional experience of 5+ years in Confluent Kafka administration
· Demonstrated experience in design and development.
· Must have proven knowledge and practical application of – Confluent Kafka (Producers/ Consumers / Kafka Connectors / Kafka Stream/ksqlDB/Schema Registry)
· Experience in performance optimization of consumers and producers.
· Good experience debugging issues related to offsets, consumer lag, and partitions.
· Experience with Administrative tasks on Confluent Kafka.
· Kafka admin experience including, but not limited to: setting up new Kafka clusters, creating topics, granting permissions, resetting offsets, purging data, setting up connectors and replicator tasks, troubleshooting issues, monitoring cluster health and performance, and backup and recovery (see the sketch at the end of this posting).
· Experience in implementing security measures for Kafka clusters, including access controls and encryption, to protect sensitive data.
· Experience with Kafka cluster install/upgrade techniques.
· Good experience writing unit tests using JUnit and Mockito.
· Experience working on client-facing projects.
· Exposure to any cloud environment such as Azure is an added advantage.
· Experience in developing or working on REST Microservices
Experience in Java and Spring Boot is a plus.
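A minimal sketch of one routine admin task named above, creating a topic programmatically with the confluent-kafka Python client; the broker address, topic name, and sizing are assumptions:

    # Sketch: create a topic and wait for the broker to confirm it.
    from confluent_kafka.admin import AdminClient, NewTopic

    admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # placeholder broker
    futures = admin.create_topics(
        [NewTopic("orders", num_partitions=6, replication_factor=3)]
    )
    for topic, future in futures.items():
        future.result()  # raises if topic creation failed
        print(f"created {topic}")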