50+ Google Cloud Platform (GCP) Jobs in Pune | Google Cloud Platform (GCP) Job openings in Pune
Apply to 50+ Google Cloud Platform (GCP) Jobs in Pune on CutShort.io. Explore the latest Google Cloud Platform (GCP) Job opportunities across top companies like Google, Amazon & Adobe.
Responsibilities
- Act as a liaison between business and technical teams to bridge gaps and support successful project delivery.
- Maintain high-quality metadata and data artifacts that are accurate, complete, consistent, unambiguous, reliable, accessible, traceable, and valid.
- Create and deliver high-quality data models while adhering to defined data governance practices and standards.
- Translate high-level functional or business data requirements into technical solutions, including database design and data mapping.
- Participate in requirement-gathering activities, elicitation, gap analysis, data analysis, effort estimation, and review processes.
Qualifications
- 8–12 years of strong data analysis and/or data modeling experience.
- Strong individual contributor with solid understanding of SDLC and Agile methodologies.
- Comprehensive expertise in conceptual, logical, and physical data modeling.
Skills
- Strong financial domain knowledge and data analysis capabilities.
- Excellent communication and stakeholder management skills.
- Ability to work effectively in a fast-paced and continuously evolving environment.
- Problem-solving mindset with a solution-oriented approach.
- Team player with a self-starter attitude and strong sense of ownership.
- Proficiency in SQL, MS Office tools, GCP BigQuery, Erwin, and Visual Paradigm (preferred).
As a Google Cloud Infrastructure / DevOps Engineer, you will design, implement, and maintain cloud infrastructure while enabling efficient development operations. This role bridges development and operations, with a strong focus on automation, scalability, reliability, and collaboration. You will work closely with cross-functional teams to optimize systems and enhance CI/CD pipelines.
Key Responsibilities:
Cloud Infrastructure Management
- Manage and monitor Google Cloud Platform (GCP) services and components.
- Ensure high availability, scalability, and security of cloud resources.
CI/CD Pipeline Implementation
- Design and implement automated pipelines for application releases.
- Build and maintain CI/CD workflows.
- Collaborate with developers to streamline deployment processes.
- Automate testing, deployment, and rollback procedures.
Infrastructure as Code (IaC)
- Use Terraform (or similar tools) to define and manage infrastructure.
- Maintain version-controlled infrastructure code.
- Ensure environment consistency across dev, staging, and production.
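The consistency goal above can be sketched language-agnostically: one shared base definition plus minimal per-environment overrides. This is an illustrative sketch only (real projects would express it with Terraform variables and workspaces); all names and values are hypothetical.

```python
# Illustrative IaC-consistency sketch: a single source of truth plus
# small per-environment overrides keeps dev/staging/prod aligned.
BASE = {"machine_type": "e2-standard-4", "min_nodes": 1, "max_nodes": 3}

OVERRIDES = {
    "dev":     {"max_nodes": 1},
    "staging": {},
    "prod":    {"min_nodes": 3, "max_nodes": 10},
}

def render(env: str) -> dict:
    """Merge the shared base with the environment's overrides."""
    if env not in OVERRIDES:
        raise KeyError(f"unknown environment: {env}")
    return {**BASE, **OVERRIDES[env]}

for env in OVERRIDES:
    print(env, render(env))
```

Keeping overrides small makes drift between environments easy to spot in review.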
Monitoring & Troubleshooting
- Monitor system performance, resource usage, and application health.
- Troubleshoot cloud infrastructure and deployment pipeline issues.
- Implement proactive monitoring and alerting.
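At its core, proactive alerting is comparing sampled metrics against agreed thresholds. A minimal sketch follows; the metric names and limits are illustrative, not tied to any particular monitoring product.

```python
# Minimal proactive-alerting sketch: emit an alert for every metric in a
# sample that breaches its threshold.
THRESHOLDS = {"cpu_percent": 85.0, "error_rate": 0.05, "p99_latency_ms": 500.0}

def evaluate(sample: dict) -> list[str]:
    """Return one alert string per metric that breaches its threshold."""
    alerts = []
    for metric, limit in THRESHOLDS.items():
        value = sample.get(metric)
        if value is not None and value > limit:
            alerts.append(f"ALERT {metric}={value} > {limit}")
    return alerts

print(evaluate({"cpu_percent": 92.0, "error_rate": 0.01}))
```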
Security & Compliance
- Apply cloud security best practices.
- Ensure compliance with industry standards and internal policies.
- Collaborate with security teams to address vulnerabilities.
Collaboration & Documentation
- Work closely with development, operations, and QA teams.
- Document architecture, processes, and configurations.
- Share knowledge and best practices with the team.
Qualifications:
Education
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
Experience
- Minimum 3 years of industry experience.
- At least 1 year designing and managing production systems on GCP.
- Familiarity with GCP services (Compute Engine, GKE, Cloud Storage, etc.).
- Exposure to Docker, Kubernetes, and microservices architecture.
Skills
- Proficiency in Python or Bash for automation.
- Strong understanding of DevOps principles.
- Knowledge of Jenkins or other CI/CD tools.
- Experience with GKE for container orchestration.
- Familiarity with event streaming platforms (Kafka, Google Cloud Pub/Sub).
About the Company: Bits In Glass – India
Industry Leader
- Established for 20+ years with global operations in the US, Canada, UK, and India.
- In 2021, Bits In Glass joined hands with Crochet Technologies, strengthening global delivery capabilities.
- Offices in Pune, Hyderabad, and Chandigarh.
- Specialized Pega Partner since 2017, ranked among the top 30 Pega partners globally.
- Long-standing sponsor of the annual PegaWorld event.
- Elite Appian partner since 2008 with deep industry expertise.
- Dedicated global Pega Center of Excellence (CoE) supporting customers and development teams worldwide.
Employee Benefits
- Career Growth: Clear pathways for advancement and professional development.
- Challenging Projects: Work on innovative, high-impact global projects.
- Global Exposure: Collaborate with international teams and clients.
- Flexible Work Arrangements: Supporting work-life balance.
- Comprehensive Benefits: Competitive compensation, health insurance, paid time off.
- Learning Opportunities: Upskill on AI-enabled Pega solutions, data engineering, integrations, cloud migration, and more.
Company Culture
- Collaborative Environment: Strong focus on teamwork, innovation, and knowledge sharing.
- Inclusive Workplace: Diverse and respectful workplace culture.
- Continuous Learning: Encourages certifications, learning programs, and internal knowledge sessions.
Core Values
- Integrity: Ethical practices and transparency.
- Excellence: Commitment to high-quality work.
- Client-Centric Approach: Delivering solutions tailored to client needs.
JD for Cloud engineer
Job Summary:
We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancing, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.
You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
- Architect, deploy, and maintain GCP cloud resources using Terraform or other automation tools.
- Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
- Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
- Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
2. Kubernetes & Container Orchestration
- Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
- Work with Helm charts for microservices deployments.
- Automate scaling, rolling updates, and zero-downtime deployments.
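Zero-downtime rolling updates hinge on replacing instances in bounded batches so serving capacity never drops too far. The arithmetic can be sketched as below (this mirrors the idea behind Kubernetes' maxUnavailable setting; the pod names are illustrative).

```python
# Sketch of rolling-update batching: replace pods in groups of at most
# max_unavailable so capacity never falls below len(pods) - max_unavailable.
def rollout_batches(pods: list[str], max_unavailable: int) -> list[list[str]]:
    if max_unavailable < 1:
        raise ValueError("max_unavailable must be >= 1")
    return [pods[i:i + max_unavailable] for i in range(0, len(pods), max_unavailable)]

pods = [f"web-{i}" for i in range(5)]
for batch in rollout_batches(pods, max_unavailable=2):
    # In a real rollout each batch would be drained, replaced, and
    # health-checked before the next batch begins.
    print("replacing", batch)
```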
3. Serverless & Compute Services
- Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
- Optimize containerized applications running on Cloud Run for cost efficiency and performance.
4. CI/CD & DevOps Automation
- Design, implement, and manage CI/CD pipelines using Azure DevOps.
- Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.
- Integrate security and compliance checks into the DevOps workflow (DevSecOps).
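The DevSecOps gate described above can be sketched as a small policy check that fails the pipeline when a resource definition violates a rule. The resource shape and rules here are hypothetical stand-ins; real pipelines would typically delegate this to a policy engine.

```python
# DevSecOps sketch: fail fast when a resource definition violates a
# simple security policy. Resource fields and rules are illustrative.
def policy_violations(resource: dict) -> list[str]:
    violations = []
    if resource.get("public_access"):
        violations.append("public access is not allowed")
    if "0.0.0.0/0" in resource.get("allowed_ranges", []):
        violations.append("firewall open to the internet")
    return violations

bucket = {"name": "demo-bucket", "public_access": True, "allowed_ranges": []}
print(policy_violations(bucket))  # a non-empty list would fail the pipeline step
```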
Required Skills & Qualifications:
✔ Experience: 8+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.
Role Overview
We are seeking a skilled Java Developer with a strong background in building scalable, high-quality, and high-performance digital applications on the Java technology stack. This role is critical for developing microservice architectures and managing data with distributed databases and GraphQL interfaces.
Skills:
Java, GCP, NoSQL, Docker, containerization
Primary Responsibilities:
- Design and develop scalable services/microservices using Java/Node and MVC architecture, ensuring clean, performant, and maintainable code.
- Implement GraphQL APIs to enhance the functionality and performance of applications.
- Work with Cassandra and other distributed database systems to design robust, scalable database schemas that support business processes.
- Design and develop functionality/applications for given requirements, focusing on functional, non-functional, and maintenance needs.
- Collaborate within the team and with cross-functional teams to effectively implement, deploy, and monitor applications.
- Document and Improve existing processes/tools.
- Support and Troubleshoot production incidents with a sense of urgency by understanding customer impact.
- Proficient in developing applications and web services, as well as cloud-native apps, using MVC frameworks like Spring Boot and REST APIs.
- Thorough understanding and hands-on experience with containerization and orchestration technologies like Docker, Kubernetes, etc.
- Strong background in working with cloud platforms, especially GCP.
- Demonstrated expertise in building and deploying services using CI/CD pipelines, leveraging tools like GitHub, CircleCI, Jenkins, and GitLab.
- Comprehensive knowledge of distributed database designs.
- Experience building observability into applications with OpenTelemetry (OTel) or Prometheus is a plus.
- Experience working with Node.js is a plus.
Soft Skills Required:
- Able to work independently in highly cross-functional projects and environments.
- Team player who pays attention to detail and has a team-win mindset.
Job Title
Senior Developer - Java+GCP
Job Description
Job Role: Senior Developer - Java + GCP
Years of Experience: 6 - 8 years
Work Location: Bangalore / Pune / Hyderabad
Work Mode: Hybrid (3 days WFO)
Job Description:
We are seeking a skilled Java / Node Developer with a strong background in building scalable, high-quality, and high-performance digital applications on the Java technology stack. This role is critical for developing microservice architectures and managing data with distributed databases and GraphQL interfaces.
Primary Responsibilities:
- Design and develop scalable services/microservices using Java/Node and MVC architecture, ensuring clean, performant, and maintainable code.
- Implement GraphQL APIs to enhance the functionality and performance of applications.
- Work with Cassandra and other distributed database systems to design robust, scalable database schemas that support business processes.
- Design and develop functionality/applications for given requirements, focusing on functional, non-functional, and maintenance needs.
- Collaborate within the team and with cross-functional teams to effectively implement, deploy, and monitor applications.
- Document and Improve existing processes/tools.
- Support and Troubleshoot production incidents with a sense of urgency by understanding customer impact.
- Proficient in developing applications and web services, as well as cloud-native apps, using MVC frameworks like Spring Boot and REST APIs.
- Thorough understanding and hands-on experience with containerization and orchestration technologies like Docker, Kubernetes, etc.
- Strong background in working with cloud platforms, especially GCP.
- Demonstrated expertise in building and deploying services using CI/CD pipelines, leveraging tools like GitHub, CircleCI, Jenkins, and GitLab.
- Comprehensive knowledge of distributed database designs.
- Experience building observability into applications with OpenTelemetry (OTel) or Prometheus is a plus.
- Experience working with Node.js is a plus.
Soft Skills Required:
- Able to work independently in highly cross-functional projects and environments.
- Team player who pays attention to detail and has a team-win mindset.
Skills
Java, GCP, NoSQL, Docker, containerization
Required Skills & Qualifications:
✔ Experience: 4+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
Job Title: GCP Cloud Engineer/Lead
Location: Pune, Balewadi
Shift / Time Zone: 1:30 PM – 10:30 PM IST (3:00 AM – 12:00 PM EST, 3–4 hours overlap with US Eastern Time)
Role Summary
We are seeking an experienced GCP Cloud Engineer to join our team supporting CVS. The ideal candidate will have a strong background in Google Cloud Platform (GCP) architecture, automation, microservices, and Kubernetes, along with the ability to translate business strategy into actionable technical initiatives. This role requires a blend of hands-on technical expertise, cross-functional collaboration, and customer engagement to ensure scalable and secure cloud solutions.
Key Responsibilities
- Design, implement, and manage cloud infrastructure on Google Cloud Platform (GCP) leveraging best practices for scalability, performance, and cost efficiency.
- Develop and maintain microservices-based architectures and containerized deployments using Kubernetes and related technologies.
- Evaluate and recommend new tools, services, and architectures that align with enterprise cloud strategies.
- Collaborate closely with Infrastructure Engineering Leadership to translate long-term customer strategies into actionable enablement plans, onboarding frameworks, and proactive support programs.
- Act as a bridge between customers, Product Management, and Engineering teams, translating business needs into technical requirements and providing strategic feedback to influence product direction.
- Identify and mitigate technical risks and roadblocks in collaboration with executive stakeholders and engineering teams.
- Advocate for customer needs within the engineering organization to enhance adoption, performance, and cost optimization.
- Contribute to the development of Customer Success methodologies and mentor other engineers in best practices.
Must-Have Skills
- 8+ years of total experience, with 5+ years specifically as a GCP Cloud Engineer.
- Deep expertise in Google Cloud Platform (GCP) — including Compute Engine, Cloud Storage, Networking, IAM, and Cloud Functions.
- Strong experience in microservices-based architecture and Kubernetes container orchestration.
- Hands-on experience with infrastructure automation tools (Terraform, Ansible, or similar).
- Proven ability to design, automate, and optimize CI/CD pipelines for cloud workloads.
- Excellent problem-solving, communication, and collaboration skills.
- GCP Professional Certification (Cloud Architect / DevOps Engineer / Cloud Engineer) preferred or in progress.
- Ability to multitask effectively in a fast-paced, dynamic environment with shifting priorities.
Good-to-Have Skills
- Experience with Cloud Monitoring, Logging, and Security best practices in GCP.
- Exposure to DevOps tools (Jenkins, GitHub Actions, ArgoCD, or similar).
- Familiarity with multi-cloud or hybrid-cloud environments.
- Knowledge of Python, Go, or Shell scripting for automation and infrastructure management.
- Understanding of network design, VPC architecture, and service mesh (Istio/Anthos).
- Experience working with enterprise-scale customers and cross-functional product teams.
- Strong presentation and stakeholder communication skills, particularly with executive audiences.
Advanced Backend Development: Design, build, and maintain efficient, reusable, and reliable Python code. Develop complex backend services using FastAPI, MongoDB, and Postgres.
Microservices Architecture Design: Lead the design and implementation of a scalable microservices architecture, ensuring systems are robust and reliable.
Database Management and Optimization: Oversee and optimize the performance of MongoDB and Postgres databases, ensuring data integrity and security.
Message Broker Implementation: Implement and manage sophisticated message broker systems like RabbitMQ or Kafka for asynchronous processing and inter-service communication.
Git and Version Control Expertise: Utilize Git for sophisticated source code management. Lead code reviews and maintain high standards in code quality.
Project and Team Management: Manage backend development projects, coordinating with cross-functional teams. Mentor junior developers and contribute to team growth and skill development.
Cloud Infrastructure Management: Extensive work with cloud services, specifically Google Cloud Platform (GCP), for deployment, scaling, and management of applications.
Performance Tuning and Optimization: Focus on optimizing applications for maximum speed, efficiency, and scalability.
Unit Testing and Quality Assurance: Develop and maintain thorough unit tests for all developed code. Lead initiatives in test-driven development (TDD) to ensure code quality and reliability.
Security Best Practices: Implement and advocate for security best practices, data protection protocols, and compliance standards across all backend services.
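The ack/requeue pattern behind brokers like RabbitMQ (mentioned under Message Broker Implementation above) can be sketched with an in-memory stand-in. No real broker client is used here; all names and the retry policy are illustrative.

```python
# Toy in-memory broker illustrating acknowledge-on-success and
# requeue-on-failure, the core delivery pattern of RabbitMQ-style brokers.
from collections import deque

class Broker:
    def __init__(self):
        self.queue = deque()

    def publish(self, message: str) -> None:
        self.queue.append(message)

    def consume(self, handler, max_retries: int = 2) -> list[str]:
        """Deliver each message; requeue on failure up to max_retries."""
        processed, retries = [], {}
        while self.queue:
            msg = self.queue.popleft()
            try:
                handler(msg)
                processed.append(msg)          # "ack"
            except Exception:
                retries[msg] = retries.get(msg, 0) + 1
                if retries[msg] <= max_retries:
                    self.queue.append(msg)     # "nack" + requeue
        return processed

broker = Broker()
broker.publish("order-created")
print(broker.consume(lambda msg: None))
```

Real brokers add durability, delivery guarantees, and dead-letter queues on top of this basic loop.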
Job Title: Site Reliability Engineer (SRE) / Application Support Engineer
Experience: 3–7 Years
Location: Bangalore / Mumbai / Pune
About the Role
The successful candidate will join the S&C Site Reliability Engineering (SRE) Team, responsible for providing Tier 2/3 support to S&C business applications and environments. This role requires close collaboration with client-facing teams (Client Services, Product, and Research) as well as Infrastructure, Technology, and Application Development teams to maintain and support production and non-production environments.
Key Responsibilities
- Provide Tier 2/3 product technical support and issue resolution.
- Develop and maintain software tools to improve operations and support efficiency.
- Manage system and software configurations; troubleshoot environment-related issues.
- Identify opportunities to optimize system performance through configuration improvements or development suggestions.
- Plan, document, and deploy software applications across Unix/Linux, Azure, and GCP environments.
- Collaborate with Development and QA teams throughout the software release lifecycle.
- Analyze and improve release and deployment processes to drive automation and efficiency.
- Coordinate with infrastructure teams for maintenance, planned downtimes, and resource management across production and non-production environments.
- Participate in on-call support (minimum one week per month) for off-hour emergencies and maintenance activities.
Required Skills & Qualifications
- Education:
- Bachelor’s degree in Computer Science, Engineering, or a related field (BE/MCA).
- Master’s degree is a plus.
- Experience:
- 3–7 years in Production Support, Application Management, or Application Development (support/maintenance).
- Technical Skills:
- Strong Unix/Linux administration skills.
- Excellent scripting skills — Shell, Python, Batch (mandatory).
- Database expertise — Oracle (must have).
- Understanding of Software Development Life Cycle (SDLC).
- PowerShell knowledge is a plus.
- Experience in Java or Ruby development is desirable.
- Exposure to cloud platforms (GCP, Azure, or AWS) is an added advantage.
- Soft Skills:
- Excellent problem-solving and troubleshooting abilities.
- Strong collaboration and communication skills.
- Ability to work in a fast-paced, cross-functional environment.
Required skills and experience
• Bachelor's degree in Computer Science, Engineering, or a similar concentration (BE/MCA)
• Master's degree a plus
• 3–8 years' experience in a Production Support / Application Management / Application Development (support/maintenance) role
• Excellent problem-solving/troubleshooting skills; fast learner
• Strong knowledge of Unix administration
• Strong scripting skills in Shell, Python, and Batch (a must)
• Strong database experience with Oracle
• Strong knowledge of the Software Development Life Cycle
• PowerShell is nice to have
• Software development skills in Java or Ruby
• Experience with any of the cloud platforms (GCP/Azure/AWS) is nice to have
Department: S&C – Site Reliability Engineering (SRE)
Experience Required: 4–8 Years
Location: Bangalore / Pune / Mumbai
Employment Type: Full-time
- Provide Tier 2/3 technical product support to internal and external stakeholders.
- Develop automation tools and scripts to improve operational efficiency and support processes.
- Manage and maintain system and software configurations; troubleshoot environment/application-related issues.
- Optimize system performance through configuration tuning or development enhancements.
- Plan, document, and deploy applications in Unix/Linux, Azure, and GCP environments.
- Collaborate with Development, QA, and Infrastructure teams throughout the release and deployment lifecycle.
- Drive automation initiatives for release and deployment processes.
- Coordinate with infrastructure teams to manage hardware/software resources, maintenance, and scheduled downtimes across production and non-production environments.
- Participate in on-call rotations (minimum one week per month) to address critical incidents and off-hour maintenance tasks.
Key Competencies
- Strong analytical, troubleshooting, and critical thinking abilities.
- Excellent cross-functional collaboration skills.
- Strong focus on documentation, process improvement, and system reliability.
- Proactive, detail-oriented, and adaptable in a fast-paced work environment.
🚀 We’re Hiring: React + Node.js Developer (Full Stack)
📍 Location: Mumbai / Pune (Final location will be decided post-interview)
💼 Experience: 5–8 years
🕒 Notice Period: Immediate to 15 days
About the Role:
We’re looking for a skilled Full Stack Developer with hands-on experience in React and Node.js, and a passion for building scalable, high-performance applications.
Key Skills & Responsibilities:
Strong expertise in React (frontend) and Node.js (backend).
Experience with relational databases (PostgreSQL / MySQL).
Familiarity with production systems and cloud services (AWS / GCP).
Strong grasp of OOP / FP and clean coding principles (e.g., SOLID).
Hands-on with Docker, and good to have exposure to Kubernetes, RabbitMQ, Redis.
Experience or interest in AI APIs & tools is a plus.
Excellent communication and collaboration skills.
Bonus: Contributions to open-source projects.
What You’ll Do:
In this role, you will take the lead in designing, building, and optimising high-performance backend systems and micro-services architecture. As a Tech Lead, you will mentor and guide a team of backend engineers, collaborate with cross-functional teams, drive technical excellence, and establish best practices for system design and engineering.
Leadership & Mentorship
- Lead and mentor a team of engineers, ensuring technical excellence, code quality, and continuous improvement.
- Provide guidance on complex technical issues, system design challenges, and architectural decisions.
- Foster a culture of collaboration, innovation, and knowledge sharing within the team.
- Develop and maintain backend engineering processes, practices, and coding standards across the team.
- Conduct regular code reviews and provide constructive technical feedback to team members.
- Actively mentor engineers on advanced design patterns, micro-services architecture, and best practices.
System Architecture & Engineering
- Design and develop robust, scalable, and efficient micro-services architecture and backend systems.
- Architect solutions that support high-throughput, low-latency operations across the healthcare domain.
- Translate business requirements into detailed technical designs, system diagrams, and implementation plans.
- Ensure seamless integration of backend services with frontend applications, data pipelines, and third-party APIs.
- Optimize system performance, scalability, and resource utilisation.
- Implement comprehensive testing strategies, including unit tests, integration tests, and end-to-end tests.
Technology Stack & Implementation
- Lead the selection and implementation of appropriate backend technologies and frameworks.
- Proficiency with Java, Spring Boot, and Spring Framework for building enterprise-grade applications.
- Design and implement event-driven architectures using Apache Kafka for real-time data processing and asynchronous communication.
- Strong expertise in database design, optimisation, and management (SQL and NoSQL databases).
- Experience with cloud platforms (AWS/GCP) for deploying and scaling backend services.
- Hands-on experience with containerisation technologies (Docker, Kubernetes).
- Proficiency in version control, build tools, and automation frameworks.
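The event-driven shape described above (producers publishing to topics, decoupled consumers reacting asynchronously) can be sketched without any real Kafka dependency. Topic and payload names below are illustrative; a production system would use Kafka clients, partitions, and offsets instead.

```python
# Sketch of an event-driven topic/subscriber dispatcher: producers publish
# events to a topic; every subscribed handler reacts independently.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler) -> None:
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> int:
        """Deliver the event to every subscriber; return how many were notified."""
        for handler in self.subscribers[topic]:
            handler(event)
        return len(self.subscribers[topic])

bus = EventBus()
audit_log = []
bus.subscribe("patient.updated", lambda e: audit_log.append(e["id"]))
bus.subscribe("patient.updated", lambda e: print("notify", e["id"]))
bus.publish("patient.updated", {"id": 42})
```

The decoupling is the point: new consumers can subscribe without the producer changing.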
Collaboration & Stakeholder Management
- Work closely with Product Managers, Data Scientists, Frontend Engineers, and Business Stakeholders to understand requirements and deliver solutions that meet business needs.
- Participate in cross-functional design reviews and architecture discussions.
- Collaborate with the Data Engineering and Analytics teams to ensure smooth integration of data services.
- Communicate technical decisions and trade-offs effectively to both technical and non-technical audiences.
- Participate in Agile ceremonies and ensure technical alignment with sprint goals.
Continuous Improvement
- Stay updated with the latest trends in backend engineering, cloud technologies, and micro-services architecture.
- Drive automation, testing, and optimisation within backend systems to improve efficiency and reliability.
- Participate in architectural reviews and design discussions for cross-team initiatives.
- Identify technical risks early and develop mitigation strategies.
- Maintain comprehensive system documentation and architecture decision records (ADRs).
Who You Are:
Experience
- Bachelor’s degree in engineering (CS / IT) or equivalent degree from a well-known Institute / University.
- 5+ years of experience in backend software engineering or related roles.
- At least 2 years of experience in a technical leadership or senior engineering position.
- Extensive experience designing and implementing micro-services architecture.
- Strong background with event-driven architectures and message-broker systems (Kafka, RabbitMQ).
- Deep expertise in relational database design, optimisation, and management (PostgreSQL, MySQL, Oracle).
- Experience with big data analytics databases (Clickhouse, Druid, etc.).
- Proven ability to mentor and guide other engineers in technical problem-solving.
- Demonstrated success in leading technical initiatives and driving architectural decisions.
Technical Skills
- Expert-level proficiency in Java and Spring Boot framework.
- Strong understanding of micro-services architecture patterns, design patterns, and SOLID principles.
- Hands-on experience with Apache Kafka for building event-driven systems.
- Proficient in SQL and database optimisation.
- Strong fundamentals in data structures, algorithms, and system design.
- Experience with cloud platforms (AWS/GCP/Azure) for deploying and managing services.
- Familiarity with containerisation and orchestration tools (Docker, Kubernetes).
- Experience with monitoring, logging, and observability tools (ELK Stack, Prometheus, Grafana, etc.).
- Version control expertise (Git, GitHub, GitLab).
- Strong understanding of security best practices and compliance requirements.
Soft Skills
- Excellent problem-solving and analytical skills with ability to break down complex problems.
- Strong communication and interpersonal skills, with ability to work effectively with technical and non-technical stakeholders.
- Ability to manage multiple priorities in a fast-paced, dynamic environment.
- Strong mentoring and leadership capabilities.
- Collaborative mindset with a focus on team growth and development.
- Proactive approach to identifying and addressing technical debt.
- Passion for building high-quality, maintainable software.
- Strong sense of ownership and accountability for deliverables.
What We Offer
- Opportunity to lead cutting-edge backend systems in the healthcare tech space.
- Mentorship from experienced engineering leaders and architects.
- Career growth opportunities in a dynamic, fast-growing organization.
- Competitive compensation and benefits package.
- Collaborative and inclusive engineering culture.
- Access to learning and development resources.
- Impact on improving patient outcomes through technology.
We are seeking a Cloud Developer with experience in GCP/Azure and in Terraform coding. You will help manage and standardize the IaC modules.
Experience: 5–8 Years
Location: Mumbai & Pune
Mode of Work: Full Time
Key Responsibilities:
- Design, develop, and maintain robust software applications using most common and popular coding languages suitable for the application design, with a strong focus on clean, maintainable, and efficient code.
- Develop, maintain, and enhance Terraform modules to encapsulate common infrastructure patterns and promote code reuse and standardization.
- Develop RESTful APIs and backend services aligned with modern architectural practices.
- Apply object-oriented programming principles and design patterns to build scalable systems.
- Build and maintain automated test frameworks and scripts to ensure high product quality.
- Troubleshoot and resolve technical issues across application layers, from code to infrastructure.
- Work with cloud platforms such as Azure or Google Cloud Platform (GCP).
- Use Git and related version control practices effectively in a team-based development environment.
- Integrate and experiment with AI development tools like GitHub Copilot, Azure OpenAI, or similar to boost engineering efficiency.
Requirements:
- 5+ years of experience
- Experience with IaC modules
- Terraform coding experience, including building Terraform modules as part of a central platform team
- Azure/GCP cloud experience is a must
- Experience with C#/Python/Java coding is good to have
If interested, please share your updated resume with the details below:
Total Experience -
Relevant Experience -
Current Location -
Current CTC -
Expected CTC -
Notice period -
Any offer in hand -
Dear Candidate,
Greetings from Wissen Technology.
We have an exciting Job opportunity for GCP SRE Engineer Professionals. Please refer to the Job Description below and share your profile if interested.
About Wissen Technology:
- The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
- Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world-class products.
- Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.
- Wissen Technology has grown its revenues by 400% over the past five years without any external funding or investments.
- Globally present, with offices in the US, India, the UK, Australia, Mexico, and Canada.
- We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
- Wissen Technology has been certified as a Great Place to Work®.
- Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
- Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
- The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.
We have served clients across sectors such as Banking, Telecom, Healthcare, Manufacturing, and Energy, including the likes of Morgan Stanley, MSCI, State Street Corporation, Flipkart, Swiggy, Trafigura, and GE.
Job Description:
Please find below details:
Experience - 4+ Years
Location- Bangalore/Mumbai/Pune
Team Responsibilities
The successful candidate will be part of the S&C – SRE Team. Our team provides Tier 2/3 support to the S&C business. This position involves collaboration with client-facing teams (Client Services, Product, and Research) as well as Infrastructure/Technology and Application Development teams to perform environment and application maintenance and support.
Key Responsibilities
• Provide Tier 2/3 product technical support.
• Build software to aid operations and support activities.
• Manage system/software configurations and troubleshoot environment issues.
• Identify opportunities for optimizing system performance through changes in configuration or suggestions for development.
• Plan, document, and deploy software applications on our Unix/Linux/Azure and GCP based systems.
• Collaborate with development and software testing teams throughout the release process.
• Analyze release and deployment processes to identify key areas for automation and optimization.
• Manage hardware and software resources and coordinate maintenance and planned downtimes with the infrastructure group across all environments (Production/Non-Production).
• Spend a minimum of one week per month on call to help with off-hour emergencies and maintenance activities.
Required skills and experience
• Bachelor's degree in Computer Science, Engineering, or a similar concentration (BE/MCA)
• Master's degree is a plus
• 6–8 years' experience in a Production Support / Application Management / Application Development (support/maintenance) role
• Excellent problem-solving/troubleshooting skills; fast learner
• Strong knowledge of Unix administration
• Strong scripting skills in Shell, Python, and Batch are a must
• Strong database experience (Oracle)
• Strong knowledge of the Software Development Life Cycle
• PowerShell is nice to have
• Software development skills in Java or Ruby
• Experience with any of the cloud platforms (GCP/Azure/AWS) is nice to have
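To illustrate the Shell/Python scripting requirement above, here is a minimal sketch of the kind of log-analysis helper a support engineer might write. The log format, component names, and sample lines are assumptions for illustration, not part of this role's actual systems:

```python
import re
from collections import Counter

# Hypothetical helper: count ERROR lines per component in an application log.
# The "timestamp level component message" layout is an assumed format for
# illustration; real log formats vary by application.
LOG_LINE = re.compile(r"^\S+ (?P<level>[A-Z]+) (?P<component>\S+) (?P<msg>.*)$")

def summarize_errors(lines):
    """Return a Counter of components that logged at ERROR level."""
    errors = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("level") == "ERROR":
            errors[m.group("component")] += 1
    return errors

sample = [
    "2024-01-01T00:00:01 INFO auth login ok",
    "2024-01-01T00:00:02 ERROR db connection refused",
    "2024-01-01T00:00:03 ERROR db timeout",
    "2024-01-01T00:00:04 ERROR auth token expired",
]
print(summarize_errors(sample))  # Counter({'db': 2, 'auth': 1})
```

A helper like this is typical of the "building software to aid operations" responsibility: a few lines of stdlib Python that turn raw logs into a triage summary.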
Job Title : Senior QA Automation Architect (Cloud & Kubernetes)
Experience : 6+ Years
Location : India (Multiple Offices)
Shift Timings : 12 PM to 9 PM (Noon Shift)
Working Days : 5 Days WFO (NO Hybrid)
About the Role :
We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.
You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.
Key Responsibilities :
- Architect and maintain test automation frameworks for microservices.
- Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
- Ensure reliability, scalability, and observability of test systems.
- Work closely with DevOps and Cloud teams to streamline automation infrastructure.
Mandatory Skills :
- Kubernetes, Helm, Docker, Linux
- Cloud Platforms : AWS / Azure / GCP
- CI/CD Tools : Jenkins, GitHub Actions
- Scripting : Python, Pytest, Bash
- Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
- IaC Practices : Terraform / Ansible
Good to Have :
- Experience with Service Mesh (Istio/Linkerd).
- Container Security or DevSecOps exposure.
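The "Scripting: Python, Pytest" skill above can be sketched with a minimal pytest-style test. `retry_call` is a hypothetical helper invented for this example (not from any actual codebase); retrying a flaky operation is a common pattern when automating tests against distributed systems:

```python
# Hypothetical retry helper: re-invokes a flaky callable a fixed number of
# times, re-raising the last exception if every attempt fails.
def retry_call(fn, attempts=3):
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:  # real code would catch a narrower type
            last_exc = exc
    raise last_exc

def test_retry_succeeds_after_transient_failures():
    calls = {"n": 0}
    def flaky():
        calls["n"] += 1
        if calls["n"] < 3:
            raise RuntimeError("transient")
        return "ok"
    # Third attempt succeeds, so retry_call returns the value.
    assert retry_call(flaky, attempts=3) == "ok"
    assert calls["n"] == 3
```

Run with `pytest <file>.py`; in a CI/CD pipeline (Jenkins, GitHub Actions) the same command becomes a pipeline step whose exit code gates the build.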
Company Overview:
Virtana delivers the industry’s only unified platform for Hybrid Cloud Performance, Capacity and Cost Management. Our platform provides unparalleled, real-time visibility into the performance, utilization, and cost of infrastructure across the hybrid cloud – empowering customers to manage their mission critical applications across physical, virtual, and cloud computing environments. Our SaaS platform allows organizations to easily manage and optimize their spend in the public cloud, assure resources are performing properly through real-time monitoring, and provide the unique ability to plan migrations across the hybrid cloud.
As we continue to expand our portfolio, we are seeking a highly skilled and hands-on Staff Software Engineer in backend technologies to contribute to the futuristic development of our sophisticated monitoring products.
Position Overview:
As a Staff Software Engineer specializing in backend technologies for Storage and Network monitoring in AI-enabled data centers as well as the cloud, you will play a critical role in designing, developing, and delivering high-quality features within aggressive timelines. Your expertise in microservices-based streaming architectures and strong hands-on development skills are essential for solving complex problems related to large-scale data processing. Proficiency in backend technologies such as Java and Python is crucial.
Key Responsibilities:
- Hands-on Development: Actively participate in the design, development, and delivery of high-quality features, demonstrating strong hands-on expertise in backend technologies like Java, Python, Go or related languages.
- Microservices and Streaming Architectures: Design and implement microservices-based streaming architectures to efficiently process and analyze large volumes of data, ensuring real-time insights and optimal performance.
- Agile Development: Collaborate within an agile development environment to deliver features on aggressive schedules, maintaining a high standard of quality in code, design, and architecture.
- Feature Ownership: Take ownership of features from inception to deployment, ensuring they meet product requirements and align with the overall product vision.
- Problem Solving and Optimization: Tackle complex technical challenges related to data processing, storage, and real-time monitoring, and optimize backend systems for high throughput and low latency.
- Code Reviews and Best Practices: Conduct code reviews, provide constructive feedback, and promote best practices to maintain a high-quality and maintainable codebase.
- Collaboration and Communication: Work closely with cross-functional teams, including UI/UX designers, product managers, and QA engineers, to ensure smooth integration and alignment with product goals.
- Documentation: Create and maintain technical documentation, including system architecture, design decisions, and API documentation, to facilitate knowledge sharing and onboarding.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- 8+ years of hands-on experience in backend development, demonstrating expertise in Java, Python or related technologies.
- Strong domain knowledge in Storage and Networking, with exposure to monitoring technologies and practices.
- Experience handling large data lakes with purpose-built data stores (vector databases, NoSQL, graph, time-series).
- Practical knowledge of OO design patterns and frameworks such as Spring and Hibernate.
- Extensive experience with cloud platforms such as AWS, Azure or GCP and development expertise on Kubernetes, Docker, etc.
- Solid experience designing and delivering features with high quality on aggressive schedules.
- Proven experience in microservices-based streaming architectures, particularly in handling large amounts of data for storage and networking monitoring.
- Familiarity with performance optimization techniques and principles for backend systems.
- Excellent problem-solving and critical-thinking abilities.
- Outstanding communication and collaboration skills.
Why Join Us:
- Opportunity to be a key contributor in the development of a leading performance monitoring company specializing in AI-powered Storage and Network monitoring.
- Collaborative and innovative work environment.
- Competitive salary and benefits package.
- Professional growth and development opportunities.
- Chance to work on cutting-edge technology and products that make a real impact.
If you are a hands-on technologist with a proven track record of designing and delivering high-quality features on aggressive schedules and possess strong expertise in microservices-based streaming architectures, we invite you to apply and help us redefine the future of performance monitoring.
Job Description
The ideal candidate will possess expertise in Core Java (at least Java 8), Spring framework, JDBC, threading, database management, and cloud platforms such as Azure and GCP. The candidate should also have strong debugging skills, the ability to understand multi-service flow, experience with large data processing, and excellent problem-solving abilities.
JD:
- Develop and maintain Java applications using Core Java, the Spring framework, JDBC, and threading concepts.
- Strong understanding of the Spring framework and its various modules.
- Experience with JDBC for database connectivity and manipulation
- Utilize database management systems to store and retrieve data efficiently.
- Proficiency in Core Java 8 and a thorough understanding of threading concepts and concurrent programming.
- Experience working with relational and NoSQL databases.
- Basic understanding of cloud platforms such as Azure and GCP; experience with DevOps practices is an added advantage.
- Knowledge of containerization technologies (e.g., Docker, Kubernetes)
- Perform debugging and troubleshooting of applications using log analysis techniques.
- Understand multi-service flow and integration between components.
- Handle large-scale data processing tasks efficiently and effectively.
- Hands-on experience using Spark is an added advantage.
- Good problem-solving and analytical abilities.
- Collaborate with cross-functional teams to identify and solve complex technical problems.
- Knowledge of Agile methodologies such as Scrum or Kanban
- Stay updated with the latest technologies and industry trends to continuously improve development processes and methodologies.

Lead Frontend Architect (Vue.js & Firebase)
Amplifai transforms AI potential into measurable business value, guiding organizations from strategic planning to execution. With deep expertise in AI product development, technical architecture, regulatory compliance, and commercialization, we deliver secure, ethical, and high-performing solutions. Having co-founded one of Europe’s most innovative AI companies, our team drives unparalleled growth for clients through cutting-edge technologies like GPT tools, AI agents, and modern frameworks. Join our new Pune office to shape the future of AI-driven innovation!
One of our partners is transforming how the construction industry measures and manages carbon emissions, helping organizations meet their sustainability goals with accurate, scalable, and actionable insights. Their SaaS platform enables carbon footprint calculations, Life Cycle Assessment (LCA) data management, and complex environmental reporting — and we’re ready to take it from 70 customers to 700+ enterprise clients.
We’re seeking a Senior Cloud Architect & Tech Lead to spearhead the next phase of our platform’s growth. You’ll lead architectural decisions for a complex sustainability and carbon accounting platform built on Firebase/Google Cloud with a Vue.js frontend, driving scalability, enterprise readiness, and technical excellence. This is a hands-on leadership role where you’ll guide the engineering team, optimize system performance, and shape a long-term technical roadmap to support 10x growth — all while leveraging cutting-edge GenAI developer tools like Cursor, Claude, Lovable, and GitHub Copilot to accelerate delivery and innovation.
Key Responsibilities:
· Lead architecture design for a highly scalable, enterprise-ready SaaS platform built with Vue.js, Firebase Functions (Node.js), Firestore, Redis, and GenKit AI.
· Design and optimize complex hierarchical data models and computational workloads for high performance at scale.
· Evaluate platform evolution options — from deep Firebase optimizations to potential migration strategies — balancing technical debt, scalability, and enterprise needs.
· Implement SOC2/ISO27001-ready security controls including audit logging, data encryption, and enterprise-grade access management.
· Drive performance engineering to address Firestore fan-out queries, function cold starts, and database scaling bottlenecks.
· Oversee CI/CD automation and deployment pipelines for multi-environment enterprise releases.
· Design APIs and integration strategies to meet enterprise customer requirements and enable global scaling.
· Mentor and guide the development team, ensuring technical quality, scalability, and adoption of best practices.
· Collaborate cross-functionally with product managers, sustainability experts, and customer success teams to deliver impactful features and integrations.
· Plan and execute disaster recovery strategies, business continuity procedures, and cost-optimized infrastructure scaling.
· Maintain comprehensive technical documentation for architecture, processes, and security controls.
Required Skills & Experience:
· 5+ years of Google Cloud Platform experience with deep expertise in the Firebase ecosystem.
· Proven ability to scale SaaS platforms through 5–10x growth phases, ideally in an enterprise B2B environment.
· Strong background in serverless architecture, event-driven systems, and scaling NoSQL databases (Firestore, MongoDB, DynamoDB).
· Expertise in Vue.js for large-scale application performance and maintainability.
· Hands-on experience implementing enterprise security frameworks (SOC2, ISO27001) and compliance requirements.
· Demonstrated daily use of GenAI developer tools such as Cursor, Claude, Lovable, and GitHub Copilot to accelerate coding, documentation, and architecture work.
· Track record of performance optimization for high-traffic production systems.
· 3+ years leading engineering teams through architectural transitions and complex technical challenges.
· Strong communication skills to work with both technical and non-technical stakeholders.
Preferred Qualifications
· Domain knowledge in construction industry workflows or sustainability technology (LCA, carbon accounting).
· Experience with numerical computing, scientific applications, or computationally intensive workloads.
· Familiarity with multi-region deployments and advanced analytics architectures.
· Knowledge of data residency and privacy regulations.
· Knowledge of BIM (Building Information Modeling), IFC standards for construction and engineering data interoperability.
Ideal Candidate
You’re a Senior Software Engineer who thrives on scaling complex systems for enterprise customers. You embrace GenAI tools as an integral part of your development workflow, using platforms like Cursor, Claude, Lovable, and GitHub Copilot to deliver faster and smarter. Experience with BIM, IFC, or Speckle is a strong plus, enabling you to bridge sustainability tech with real-world construction data standards. You balance deep technical execution with strategic thinking and can communicate effectively across teams. While direct sustainability or construction tech experience is a plus, your ability to quickly master complex domains is what will set you apart.
Who we are:
DeepIntent is leading the healthcare advertising industry with data-driven solutions built for the future. From day one, our mission has been to improve patient outcomes through the artful use of advertising, data science, and real-world clinical data. For more information visit, www.DeepIntent.com.
Who you are:
- 5+ years of software engineering experience with 2+ years in senior technical roles
- Proven track record designing and implementing large-scale, distributed backend systems
- Experience leading technical initiatives across multiple teams
- Strong background in mentoring engineers and driving technical excellence
- Programming Languages: Expert-level proficiency in Java and Spring Boot framework
- Framework Expertise: Deep experience with Spring ecosystem (Spring Boot, Spring Security, Spring Data, Spring Cloud)
- API Development: Strong experience building RESTful APIs, GraphQL endpoints, and microservices architectures
- Cloud Platforms: Advanced knowledge of AWS, GCP, Azure and cloud-native development patterns
- Databases: Proficiency with both SQL (PostgreSQL, MySQL, Oracle) and NoSQL (MongoDB, Redis, Cassandra) databases, including design and optimization
- Bachelor's or Master's degree in Computer Science, Engineering, Software Engineering, or related field (or equivalent industry experience)
- Excellent technical communication skills for both technical and non-technical stakeholders
- Strong mentorship abilities with experience coaching junior and mid-level engineers
- Proven ability to drive consensus on technical decisions across teams
- Comfortable with ambiguous problems and breaking down complex challenges
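The SQL "design and optimization" skill listed above can be sketched with Python's stdlib `sqlite3` (the listing names PostgreSQL/MySQL/Oracle, but the principle is the same): adding an index changes the query plan from a full-table scan to an index search. The table and column names here are invented for illustration:

```python
import sqlite3

# Illustrative schema: an in-memory table of ad spend rows.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE ads (id INTEGER PRIMARY KEY, campaign_id INTEGER, spend REAL)"
)
conn.executemany(
    "INSERT INTO ads (campaign_id, spend) VALUES (?, ?)",
    [(i % 100, float(i)) for i in range(1000)],
)

query = "SELECT SUM(spend) FROM ads WHERE campaign_id = 7"

# Before indexing: SQLite must scan every row to find matches.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

conn.execute("CREATE INDEX idx_ads_campaign ON ads (campaign_id)")

# After indexing: the plan reports a search using idx_ads_campaign instead.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# The last column of each plan row is a human-readable detail string;
# exact wording varies by SQLite version (e.g. "SCAN ads" vs "SCAN TABLE ads").
print(plan_before[-1][-1])
print(plan_after[-1][-1])
```

The same scan-versus-seek reasoning carries over to the PostgreSQL/MySQL optimization work the role describes, via `EXPLAIN` on those engines.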
What You'll Do:
- Lead design and implementation of complex backend systems and microservices serving multiple product teams
- Drive architectural decisions ensuring scalability, reliability, and performance
- Create technical design documents, system architecture diagrams, and API specifications
- Champion engineering best practices including code quality, testing strategies, and security
- Partner with Tech Leads, Engineering Managers, and Product Managers to align solutions with business objectives
- Lead technical initiatives requiring coordination between backend, frontend, and data teams
- Participate in architecture review boards and provide guidance for organisation-wide initiatives
- Serve as technical consultant for complex system design problems across product areas
- Mentor and coach engineers at various levels with technical guidance and career development
- Conduct code reviews and design reviews, sharing knowledge and raising technical standards
- Lead technical discussions and knowledge-sharing sessions
- Help establish coding standards and engineering processes
- Design and develop robust, scalable backend services and APIs using Java and Spring Boot
- Implement comprehensive testing strategies and optimise application performance
- Ensure security best practices across all applications
- Research and prototype new approaches to improve system architecture and developer productivity
1 Senior Associate Technology L1 – Java Microservices
Company Description
Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting and customer obsession to accelerate our clients’ businesses through designing the products and services their customers truly value.
Job Description
We are looking for a Senior Associate Technology Level 1 - Java Microservices Developer to join our team of bright thinkers and doers. You’ll use your problem-solving creativity to design, architect, and develop high-end technology solutions that solve our clients’ most complex and challenging problems across different industries.
We are on a mission to transform the world, and you will be instrumental in shaping how we do it with your ideas, thoughts, and solutions.
Your Impact:
• Drive the design, planning, and implementation of multifaceted applications, giving you breadth and depth of knowledge across the entire project lifecycle.
• Combine your technical expertise and problem-solving passion to work closely with clients, turning complex ideas into end-to-end solutions that transform our clients’ business.
• Constantly innovate and evaluate emerging technologies and methods to provide scalable and elegant solutions that help clients achieve their business goals.
Qualifications
➢ 5 to 7 Years of software development experience
➢ Strong development skills in Java JDK 1.8 or above
➢ Java fundamentals like exception handling, serialization/deserialization, and immutability concepts
➢ Good fundamental knowledge in Enums, Collections, Annotations, Generics, Auto boxing and Data Structure
➢ Database RDBMS/No SQL (SQL, Joins, Indexing)
➢ Multithreading (Re-entrant Lock, Fork & Join, Sync, Executor Framework)
➢ Spring Core & Spring Boot, security, transactions
➢ Hands-on experience with JMS (ActiveMQ, RabbitMQ, Kafka, etc.)
➢ Memory management (JVM configuration, profiling, GC), performance tuning, and testing (JMeter or similar tools)
➢ DevOps (CI/CD: Maven/Gradle, Jenkins, quality plugins, Docker and containerization)
➢ Logical/analytical skills; thorough understanding of OOP concepts, design principles, and implementation of different types of design patterns
➢ Hands-on experience with any of the logging frameworks (SLF4J/Logback/Log4j)
➢ Experience writing JUnit test cases using Mockito/PowerMock frameworks
➢ Should have practical experience with Maven/Gradle and knowledge of version control systems like Git/SVN etc.
➢ Good communication skills and ability to work with global teams to define and deliver on projects.
➢ Sound understanding/experience in software development process, test-driven development.
➢ Cloud – AWS / AZURE / GCP / PCF or any private cloud would also be fine
➢ Experience in Microservices
- Looking to manage IaC modules
- Terraform experience is a must
- Terraform Module as a part of central platform team
- Azure/GCP exp is a must
- C#/Python/Java coding is good to have
Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with knowledge in Systems Management and/or Systems Monitoring Software and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location- Pune/ Chennai
Job Type- Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for design and development of software solutions for the Virtana Platform
- Partner and work closely with team leads, architects and engineering managers to design and implement new integrations and solutions for the Virtana Platform.
- Communicate effectively with people having differing levels of technical knowledge.
- Work closely with Quality Assurance and DevOps teams assisting with functional and system testing design and deployment
- Provide customers with complex application support, problem diagnosis and problem resolution
Required Qualifications:
- Minimum of 4 years of experience in a web-application-centric client-server application development environment focused on Systems Management, Systems Monitoring, and Performance Management Software.
- Able to understand and comprehend integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- Minimum of 4 years of development experience with a high-level language such as Python, Java, or Go is required.
- Bachelor’s (B.E., B.Tech.) or Master’s degree (M.E., M.Tech., MCA) in Computer Science, Computer Engineering, or equivalent
- 2 years of development experience in public cloud environment using Kubernetes etc (Google Cloud and/or AWS)
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a strong technical engineer who can design and code with strong communication skills
- Firsthand development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
- Ability to use a variety of debugging tools, simulators and test harnesses is a plus
About Virtana: Virtana delivers the industry’s broadest and deepest Observability Platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.
Senior Software Engineer
Challenge convention and work on cutting-edge technology that is transforming the way our customers manage their physical, virtual, and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise-scale solution for visibility into performance, health, and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge of Systems Management and/or Systems Monitoring software, Observability platforms, and/or Performance Management software and solutions, with insight into integrated infrastructure platforms such as Cisco UCS, infrastructure providers such as Nutanix, VMware, EMC, and NetApp, and public cloud platforms such as Google Cloud and AWS, to expand the depth and breadth of Virtana products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- Take primary responsibility for the architecture, design, and development of software solutions for the Virtana Platform.
- Partner closely with cross-functional teams, including other engineers and product managers, to architect, design, and implement new features and solutions for the Virtana Platform.
- Communicate effectively across departments and an R&D organization with differing levels of technical knowledge.
- Work closely with the UX Design, Quality Assurance, DevOps, and Documentation teams; assist with functional and system test design and deployment automation.
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- 7+ years of progressive experience with back-end development in a client-server application environment focused on systems management, systems monitoring, and performance management software.
- Deep experience in a public cloud environment (Google Cloud and/or AWS) using Kubernetes and other distributed managed services such as Kafka.
- Experience with CI/CD and cloud-based software development and delivery.
- Deep experience with integrated infrastructure platforms, and experience working with one or more data collection technologies such as SNMP, REST, OTEL, WMI, or WBEM.
- Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with one of these languages is required.
- Bachelor’s or Master’s degree in Computer Science, Computer Engineering, or equivalent.
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
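As a loose illustration of what working with mixed data collection technologies can look like, here is a hedged Python sketch (the field names and record shapes are invented for the example, not taken from the role) that normalizes samples from SNMP-style and REST-style collectors into one common schema:

```python
# Hypothetical sketch: normalizing metrics gathered via different collection
# technologies (SNMP, REST, ...) into one common record shape, as a
# monitoring platform like the one described might do internally.
from dataclasses import dataclass

@dataclass
class Metric:
    source: str   # which collection technology produced the sample
    device: str
    name: str
    value: float

def normalize(raw: dict) -> Metric:
    """Map collector-specific field names onto a common schema."""
    if raw.get("proto") == "snmp":
        # SNMP walks typically yield an OID plus a value per agent
        return Metric("snmp", raw["agent"], raw["oid"], float(raw["val"]))
    if raw.get("proto") == "rest":
        return Metric("rest", raw["host"], raw["metric"], float(raw["value"]))
    raise ValueError(f"unsupported collector: {raw.get('proto')}")

samples = [
    {"proto": "snmp", "agent": "sw1", "oid": "ifInOctets.1", "val": "1024"},
    {"proto": "rest", "host": "vm-7", "metric": "cpu_util", "value": 0.42},
]
normalized = [normalize(s) for s in samples]
print([m.source for m in normalized])  # ['snmp', 'rest']
```

The point of the sketch is only the shape of the problem: each collector speaks its own dialect, and the platform converges them before storage and analysis.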
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts, with strong attention to detail and exposure to open-source software, is a plus
- Demonstrated ability as a lead engineer who can architect, design, and code, with strong communication and teaming skills
- Deep experience developing systems, network, and performance management software and/or solutions is a plus
Responsibilities:
● Design and build scalable APIs and microservices in Node.js (or equivalent backend frameworks).
● Develop and optimize high-performance systems handling large-scale data and concurrent users.
● Ensure system security, reliability, and fault tolerance.
● Collaborate closely with product managers, designers, and frontend engineers for seamless delivery.
● Write clean, maintainable, and well-documented code with a focus on best practices.
● Contribute to architectural decisions, technology choices, and overall system design.
● Monitor, debug, and continuously improve backend performance.
● Stay updated with modern backend technologies and bring innovation into the product.
Desired Qualifications & Skillset:
● 2+ years of professional backend development experience.
● Proficiency with Node.js, Express.js, or similar frameworks.
● Strong knowledge of web application architecture, databases (SQL/NoSQL), and caching strategies.
● Experience with cloud platforms (AWS/GCP/Azure), CI/CD pipelines, and containerization (Docker/Kubernetes) is a plus.
● Ability to break down complex problems into scalable solutions.
● Strong logical aptitude, quick learning ability, and a proactive mindset
Job Description
We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancer, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.
Work location: Pune/Mumbai/Bangalore
Experience: 4-7 Years
Joining: Mid of October
You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
· Architect, deploy, and maintain GCP cloud resources using Terraform or other automation tools.
· Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
· Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
· Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
2. Kubernetes & Container Orchestration
· Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
· Work with Helm charts, Istio, and service meshes for microservices deployments.
· Automate scaling, rolling updates, and zero-downtime deployments.
3. Serverless & Compute Services
· Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
· Optimize containerized applications running on Cloud Run for cost efficiency and performance.
4. CI/CD & DevOps Automation
· Design, implement, and manage CI/CD pipelines using Azure DevOps.
· Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.
· Integrate security and compliance checks into the DevOps workflow (DevSecOps).
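To give a flavor of the automation described above, here is a small, illustrative Python helper (not taken from the role; the `envs/*.tfvars` layout is an assumed convention) that assembles Terraform command lines for a CI/CD step:

```python
# Illustrative sketch: build Terraform command lines per environment, the kind
# of glue script the "automate infrastructure deployment" item implies.
# init/plan/apply/destroy are real Terraform subcommands; the tfvars layout
# is a hypothetical example.
def terraform_cmd(action: str, env: str, auto_approve: bool = False) -> list:
    """Build a terraform command line for a given environment."""
    if action not in {"init", "plan", "apply", "destroy"}:
        raise ValueError(f"unsupported action: {action}")
    cmd = ["terraform", action]
    if action in {"plan", "apply", "destroy"}:
        cmd.append(f"-var-file=envs/{env}.tfvars")
    if auto_approve and action in {"apply", "destroy"}:
        cmd.append("-auto-approve")  # skip interactive confirmation in CI
    return cmd

# A pipeline step would pass this list to subprocess.run(cmd, check=True)
print(terraform_cmd("apply", "prod", auto_approve=True))
```

Keeping command construction in a pure function like this makes the automation easy to unit-test without touching real cloud resources.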
Required Skills & Qualifications:
✔ Experience: 4+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.
About Wissen Technology
Wissen Technology, established in 2015 and part of the Wissen Group (founded in 2000), is a specialized technology consulting company. We pride ourselves on delivering high-quality solutions for global organizations across Banking & Finance, Telecom, and Healthcare domains.
Here’s why Wissen Technology stands out:
Global Presence: Offices in the US, India, UK, Australia, Mexico, and Canada.
Expert Team: Wissen Group comprises over 4000 highly skilled professionals worldwide, with Wissen Technology contributing 1400 of these experts. Our team includes graduates from prestigious institutions such as Wharton, MIT, IITs, IIMs, and NITs.
Recognitions: Great Place to Work® Certified.
Featured as a Top 20 AI/ML Vendor by CIO Insider (2020).
Impressive Growth: Achieved 400% revenue growth in 5 years without external funding.
Successful Projects: Delivered $650 million worth of projects to 20+ Fortune 500 companies.
For more details:
Website: www.wissen.com
Wissen Thought Leadership: https://www.wissen.com/articles/
LinkedIn: Wissen Technology
Senior Software Engineer
Challenge convention and work on cutting-edge technology that is transforming the way our customers manage their physical, virtual, and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise-scale solution for visibility into performance, health, and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge of Systems Management and/or Systems Monitoring software, Observability platforms, and/or Performance Management software and solutions, with insight into integrated infrastructure platforms such as Cisco UCS, infrastructure providers such as Nutanix, VMware, EMC, and NetApp, and public cloud platforms such as Google Cloud and AWS, to expand the depth and breadth of Virtana products.
Role Responsibilities:
- Take primary responsibility for the architecture, design, and development of software solutions for the Virtana Platform.
- Partner closely with cross-functional teams, including other engineers and product managers, to architect, design, and implement new features and solutions for the Virtana Platform.
- Communicate effectively across departments and an R&D organization with differing levels of technical knowledge.
- Work closely with the UX Design, Quality Assurance, DevOps, and Documentation teams; assist with functional and system test design and deployment automation.
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- 4-10 years of progressive experience with back-end development in a client-server application environment focused on systems management, systems monitoring, and performance management software.
- Deep experience in a public cloud environment (Google Cloud and/or AWS) using Kubernetes and other distributed managed services such as Kafka.
- Experience with CI/CD and cloud-based software development and delivery.
- Deep experience with integrated infrastructure platforms, and experience working with one or more data collection technologies such as SNMP, REST, OTEL, WMI, or WBEM.
- Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with one of these languages is required.
- Bachelor’s or Master’s degree in Computer Science, Computer Engineering, or equivalent.
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts, with strong attention to detail and exposure to open-source software, is a plus
- Demonstrated ability as a lead engineer who can architect, design, and code, with strong communication and teaming skills
- Deep experience developing systems, network, and performance management software and/or solutions is a plus
Job Title : AI Architect
Location : Pune (On-site | 3 Days WFO)
Experience : 6+ Years
Shift : US or flexible shifts
Job Summary :
We are looking for an experienced AI Architect to design and deploy AI/ML solutions that align with business goals.
The role involves leading end-to-end architecture, model development, deployment, and integration using modern AI/ML tools and cloud platforms (AWS/Azure/GCP).
Key Responsibilities :
- Define AI strategy and identify business use cases
- Design scalable AI/ML architectures
- Collaborate on data preparation, model development & deployment
- Ensure data quality, governance, and ethical AI practices
- Integrate AI into existing systems and monitor performance
Must-Have Skills :
- Machine Learning, Deep Learning, NLP, Computer Vision
- Data Engineering, Model Deployment (CI/CD, MLOps)
- Python Programming, Cloud (AWS/Azure/GCP)
- Distributed Systems, Data Governance
- Strong communication & stakeholder collaboration
Good to Have :
- AI certifications (Azure/GCP/AWS)
- Experience in big data and analytics
Requirements
- 7+ years of experience with Python
- Strong expertise in Python frameworks (Django, Flask, or FastAPI)
- Experience with GCP, Terraform, and Kubernetes
- Deep understanding of REST API development and GraphQL
- Strong knowledge of SQL and NoSQL databases
- Experience with microservices architecture
- Proficiency with CI/CD tools (Jenkins, CircleCI, GitLab)
- Experience with container orchestration using Kubernetes
- Understanding of cloud architecture and serverless computing
- Experience with monitoring and logging solutions
- Strong background in writing unit and integration tests
- Familiarity with AI/ML concepts and integration points
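As a minimal sketch of the REST API requirement above, the following WSGI application serves one JSON endpoint and can be exercised directly without a server. Frameworks like Django, Flask, and FastAPI build on (or mirror) this same interface; the `/health` route is a made-up example:

```python
# Minimal WSGI sketch of a REST-style JSON endpoint; illustrative only.
import json

def app(environ, start_response):
    """Tiny WSGI application with a single JSON route."""
    if environ["PATH_INFO"] == "/health" and environ["REQUEST_METHOD"] == "GET":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [b'{"error": "not found"}']

# Exercise it directly, the way a test client would:
captured = {}
def fake_start_response(status, headers):
    captured["status"] = status

resp = b"".join(app({"PATH_INFO": "/health", "REQUEST_METHOD": "GET"},
                    fake_start_response))
print(captured["status"], resp)
```

Because WSGI apps are plain callables, they can be unit-tested in-process like this, which is also how framework test clients work under the hood.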
Responsibilities
- Design and develop scalable backend services for our AI platform
- Architect and implement complex systems with high reliability
- Build and maintain APIs for internal and external consumption
- Work closely with AI engineers to integrate ML functionality
- Optimize application performance and resource utilization
- Make architectural decisions that balance immediate needs with long-term scalability
- Mentor junior engineers and promote best practices
- Contribute to the evolution of our technical standards and processes
What You’ll Do:
We’re looking for a skilled DevOps Engineer to help us build and maintain reliable, secure, and scalable infrastructure. You will work closely with our development, product, and security teams to streamline deployments, improve performance, and ensure cloud infrastructure resilience.
Responsibilities:
● Deploy, manage, and monitor infrastructure on Google Cloud Platform (GCP)
● Build CI/CD pipelines using Jenkins and integrate them with Git workflows
● Design and manage Kubernetes clusters and Helm-based deployments
● Manage infrastructure as code using Terraform
● Set up logging, monitoring, and alerting (Stackdriver, Prometheus, Grafana)
● Ensure security best practices across cloud resources, networks, and secrets
● Automate repetitive operations and improve system reliability
● Collaborate with developers to troubleshoot and resolve issues in staging/production environments
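As a toy example of automating a repetitive operational check, the sketch below computes a 5xx error rate from access-log lines and decides whether to alert. The log format and threshold are invented for illustration; real alerting would go through Prometheus/Grafana as listed above:

```python
# Illustrative ops automation: flag a deploy when the 5xx rate is too high.
# Assumes a simplified log format "METHOD STATUS PATH" (hypothetical).
def error_rate(log_lines):
    """Fraction of requests that returned a 5xx status (status is field 2)."""
    statuses = [line.split()[1] for line in log_lines if line.strip()]
    if not statuses:
        return 0.0
    return sum(s.startswith("5") for s in statuses) / len(statuses)

def should_alert(log_lines, threshold=0.05):
    return error_rate(log_lines) > threshold

logs = ["GET 200 /", "GET 500 /api", "POST 201 /items", "GET 502 /api"]
print(error_rate(logs))    # 0.5
print(should_alert(logs))  # True
```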
What We’re Looking For:
Required Skills:
● 1–3 years of hands-on experience in a DevOps or SRE role
● Strong knowledge of GCP services (IAM, GKE, Cloud Run, VPC, Cloud Build, etc.)
● Proficiency in Kubernetes (deployment, scaling, troubleshooting)
● Experience with Terraform for infrastructure provisioning
● CI/CD pipeline setup using Jenkins, GitHub Actions, or similar tools
● Understanding of DevSecOps principles and cloud security practices
● Good command over Linux, shell scripting, and basic networking concepts
Nice to have:
● Experience with Docker, Helm, ArgoCD
● Exposure to other cloud platforms (AWS, Azure)
● Familiarity with incident response and disaster recovery planning
● Knowledge of logging and monitoring tools like ELK, Prometheus, Grafana
Job Title: Python Developer
Experience: 4 to 6 years
Location: Pune/Mumbai/Bengaluru
Please find the JD below:
Requirements:
- Proven experience as a Python Developer
- Strong knowledge of core Python and PySpark concepts
- Experience with web frameworks such as Django or Flask
- Good exposure to any cloud platform (GCP Preferred)
- CI/CD exposure required
- Solid understanding of RESTful APIs and how to build them
- Experience working with databases like Oracle DB and MySQL
- Ability to write efficient SQL queries and optimize database performance
- Strong problem-solving skills and attention to detail
- Strong SQL programming (stored procedures, functions)
- Excellent communication and interpersonal skills
Roles and Responsibilities
- Design, develop, and maintain data pipelines and ETL processes using PySpark
- Work closely with data scientists and analysts to provide them with clean, structured data.
- Optimize data storage and retrieval for performance and scalability.
- Collaborate with cross-functional teams to gather data requirements.
- Ensure data quality and integrity through data validation and cleansing processes.
- Monitor and troubleshoot data-related issues to ensure data pipeline reliability.
- Stay up to date with industry best practices and emerging technologies in data engineering.
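The validation-and-cleansing responsibility above can be sketched in plain Python. In the actual pipelines this would typically be PySpark DataFrame operations such as `dropna`/`dropDuplicates`; the record shape here is hypothetical:

```python
# Plain-Python stand-in for a PySpark cleansing step: drop invalid rows,
# then de-duplicate while preserving order. Fields are illustrative.
def cleanse(records, required):
    """Drop rows missing required fields, then de-duplicate."""
    seen = set()
    cleaned = []
    for row in records:
        if any(row.get(col) in (None, "") for col in required):
            continue  # fails validation: a required field is missing
        key = tuple(sorted(row.items()))
        if key in seen:
            continue  # exact duplicate of an earlier row
        seen.add(key)
        cleaned.append(row)
    return cleaned

raw = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},  # invalid: null amount
    {"id": 1, "amount": 10.0},  # duplicate
]
print(cleanse(raw, required=("id", "amount")))
```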
We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancer, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.
You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
· Architect, deploy, and maintain GCP cloud resources using Terraform or other automation tools.
· Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
· Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
· Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
2. Kubernetes & Container Orchestration
· Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
· Work with Helm charts, Istio, and service meshes for microservices deployments.
· Automate scaling, rolling updates, and zero-downtime deployments.
3. Serverless & Compute Services
· Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
· Optimize containerized applications running on Cloud Run for cost efficiency and performance.
4. CI/CD & DevOps Automation
· Design, implement, and manage CI/CD pipelines using Azure DevOps.
· Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.
· Integrate security and compliance checks into the DevOps workflow (DevSecOps).
Required Skills & Qualifications:
✔ Experience: 8+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.
- Strong Site Reliability Engineer (SRE - CloudOps) profile
- Mandatory (Experience 1) - Minimum of 1 year of experience in SRE (CloudOps)
- Mandatory (Core Skill 1) - Experience with Google Cloud Platform (GCP)
- Mandatory (Core Skill 2) - Experience with monitoring, APM, and alerting tools like Prometheus, Grafana, ELK, New Relic, Pingdom, or PagerDuty
- Mandatory (Core Skill 3) - Hands-on experience with Kubernetes for orchestration and container management
- Mandatory (Company) - B2C product companies
Job Title : Lead Java Developer (Backend)
Experience Required : 8 to 15 Years
Open Positions : 5
Location : Any major metro city (Bengaluru, Pune, Chennai, Kolkata, Hyderabad)
Work Mode : Open to Remote / Hybrid / Onsite
Notice Period : Immediate Joiner/30 Days or Less
About the Role :
- We are looking for experienced Lead Java Developers who bring not only strong backend development skills but also a product-oriented mindset and leadership capability.
- This is an opportunity to be part of high-impact digital transformation initiatives that go beyond writing code—you’ll help shape future-ready platforms and drive meaningful change.
- This role is embedded within a forward-thinking digital engineering team that thrives on co-innovation, lean delivery, and end-to-end ownership of platforms and products.
Key Responsibilities :
- Design, develop, and implement scalable backend systems using Java and Spring Boot.
- Collaborate with product managers, designers, and engineers to build intuitive and reliable digital products.
- Advocate and implement engineering best practices : SOLID principles, OOP, clean code, CI/CD, TDD/BDD.
- Lead Agile-based development cycles with a focus on speed, quality, and customer outcomes.
- Guide and mentor team members, fostering technical excellence and ownership.
- Utilize cloud platforms and DevOps tools to ensure performance and reliability of applications.
What We’re Looking For :
- Proven experience in Java backend development (Spring Boot, Microservices).
- 8+ Years of hands-on engineering experience with at least 2+ years in a Lead role.
- Familiarity with cloud platforms such as AWS, Azure, or GCP.
- Good understanding of containerization and orchestration tools like Docker and Kubernetes.
- Exposure to DevOps and Infrastructure as Code practices.
- Strong problem-solving skills and the ability to design solutions from first principles.
- Prior experience in product-based or startup environments is a big plus.
Ideal Candidate Profile :
- A tech enthusiast with a passion for clean code and scalable architecture.
- Someone who thrives in collaborative, transparent, and feedback-driven environments.
- A leader who takes ownership beyond individual deliverables to drive overall team and project success.
Interview Process
- Initial Technical Screening (via platform partner)
- Technical Interview with Engineering Team
- Client-facing Final Round
Additional Info :
- Targeting profiles from product/startup backgrounds.
- Strong preference for candidates with under 1 month of notice period.
- Interviews will be fast-tracked for qualified profiles.
We are looking for a Senior Data Engineer with strong expertise in GCP, Databricks, and Airflow to design and implement a GCP Cloud Native Data Processing Framework. The ideal candidate will work on building scalable data pipelines and help migrate existing workloads to a modern framework.
- Shift: 2 PM – 11 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or those with a notice period of up to 30 days
Key Responsibilities:
- Design and implement a GCP Native Data Processing Framework leveraging Spark and GCP Cloud Services.
- Develop and maintain data pipelines using Databricks and Airflow for transforming Raw → Silver → Gold data layers.
- Ensure data integrity, consistency, and availability across all systems.
- Collaborate with data engineers, analysts, and stakeholders to optimize performance.
- Document standards and best practices for data engineering workflows.
Required Experience:
- 7-8 years of experience in data engineering, architecture, and pipeline development.
- Strong knowledge of GCP, Databricks, PySpark, and BigQuery.
- Experience with Orchestration tools like Airflow, Dagster, or GCP equivalents.
- Understanding of Data Lake table formats (Delta, Iceberg, etc.).
- Proficiency in Python for scripting and automation.
- Strong problem-solving skills and collaborative mindset.
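A rough, plain-Python sketch of the Raw → Silver → Gold flow mentioned above; in practice these layers would be Databricks/PySpark jobs scheduled by Airflow, and the field names here are illustrative:

```python
# Medallion-style layering sketch (illustrative field names, plain Python).
raw = [
    {"order_id": "A1", "amount": "100.5", "country": "in"},
    {"order_id": "A2", "amount": "bad",   "country": "IN"},  # unparseable
    {"order_id": "A3", "amount": "49.5",  "country": "in"},
]

def to_silver(rows):
    """Silver layer: typed, validated, standardized records."""
    silver = []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine malformed rows
        silver.append({**r, "amount": amount, "country": r["country"].upper()})
    return silver

def to_gold(rows):
    """Gold layer: business-level aggregate (revenue per country)."""
    totals = {}
    for r in rows:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

print(to_gold(to_silver(raw)))  # {'IN': 150.0}
```

Each layer only consumes the one before it, which is what makes the framework incremental to migrate: existing workloads can be pointed at the Silver layer while Raw ingestion is rebuilt underneath.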
⚠️ Please apply only if you have not applied recently and are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!
Best regards,
Vijay S
Assistant Manager - TAG
Dear,
We are excited to inform you about an exclusive opportunity at Xebia for a Senior Backend Engineer role.
📌 Job Details:
- Role: Senior Backend Engineer
- Shift: 1 PM – 10 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or up to 30 days
🔹 Job Responsibilities:
✅ Design and develop scalable, reliable, and maintainable backend solutions
✅ Work on event-driven microservices architecture
✅ Implement REST APIs and optimize backend performance
✅ Collaborate with cross-functional teams to drive innovation
✅ Mentor junior and mid-level engineers
🔹 Required Skills:
✔ Backend Development: Scala (preferred), Java, Kotlin
✔ Cloud: AWS or GCP
✔ Databases: MySQL, NoSQL (Cassandra)
✔ DevOps & CI/CD: Jenkins, Terraform, Infrastructure as Code
✔ Messaging & Caching: Kafka, RabbitMQ, Elasticsearch
✔ Agile Methodologies: Scrum, Kanban
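As an illustration of the event-driven style named above, here is a toy in-process event bus in Python; a real system would publish to Kafka or RabbitMQ as listed, and the topic and payload names are made up:

```python
# Toy event bus: the core publish/subscribe shape of an event-driven
# microservices design, reduced to one process for illustration.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)  # a broker like Kafka delivers this asynchronously

bus = EventBus()
shipped = []
bus.subscribe("order.created", lambda e: shipped.append(e["order_id"]))
bus.publish("order.created", {"order_id": 42})
print(shipped)  # [42]
```

The design point is decoupling: the publisher knows nothing about its consumers, so new services can subscribe to `order.created` without changing the producer.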
⚠ Please apply only if you have not applied recently and are not currently in the interview process for any open roles at Xebia.
Looking forward to your response! Also, feel free to refer anyone in your network who might be a good fit.
Best regards,
Vijay S
Assistant Manager - TAG
What You’ll Do:
* Establish formal data practice for the organisation.
* Build & operate scalable and robust data architectures.
* Create pipelines for the self-service introduction and usage of new data.
* Implement DataOps practices
* Design, Develop, and operate Data Pipelines which support Data scientists and machine learning Engineers.
* Build simple, highly reliable Data storage, ingestion, and transformation solutions which are easy to deploy and manage.
* Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.
Who You Are:
* Experience in designing, developing and operating configurable Data pipelines serving high volume and velocity data.
* Experience working with public clouds like GCP/AWS.
* Good understanding of software engineering, DataOps, data architecture, Agile and DevOps methodologies.
* Experience building Data architectures that optimize performance and cost, whether the components are prepackaged or homegrown.
* Proficient with SQL, Java, Spring Boot, Python or a JVM-based language, and Bash.
* Experience with any of the Apache open-source projects such as Spark, Druid, Beam, Airflow, etc., and big data databases like BigQuery, ClickHouse, etc.
* Good communication skills with the ability to collaborate with both technical and non-technical people.
* Ability to Think Big, take bets and innovate, Dive Deep, Bias for Action, Hire and Develop the Best, Learn and be Curious
Urgent Hiring: Senior Java Developers |Bangalore (Hybrid) 🚀
We are looking for experienced Java professionals to join our team! If you have the right skills and are ready to make an impact, this is your opportunity!
📌 Role: Senior Java Developer
📌 Experience: 6 to 9 Years
📌 Education: BE/BTech/MCA (Full-time)
📌 Location: Bangalore (Hybrid)
📌 Notice Period: Immediate Joiners Only
✅ Mandatory Skills:
🔹 Strong Core Java
🔹 Spring Boot (data flow basics)
🔹 JPA
🔹 Google Cloud Platform (GCP)
🔹 Spring Framework
🔹 Docker, Kubernetes (Good to have)
Java Developer with GCP
Skills: Java and Spring Boot, GCP, Cloud Storage, BigQuery, RESTful APIs
Experience: SA (6-10 years)
Location: Bangalore, Mangalore, Chennai, Coimbatore, Pune, Mumbai, Kolkata
Notice Period: Immediate to 60 days
Kindly share your updated resume via WA - 91five000260seven
We are seeking a highly skilled and experienced Offshore Data Engineer . The role involves designing, implementing, and testing data pipelines and products.
Qualifications & Experience:
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
5+ years of experience in data engineering, with expertise in data architecture and pipeline development.
☁️ Proven experience with GCP, BigQuery, Databricks, Airflow, Spark, DBT, and GCP services.
Hands-on experience with ETL processes, SQL, PostgreSQL, MySQL, MongoDB, and Cassandra.
Strong proficiency in Python and data modelling.
Experience in testing and validation of data pipelines.
Preferred: Experience with eCommerce systems, data visualization tools (Tableau, Looker), and cloud certifications.
If you meet the above criteria and are interested, please share your updated CV along with the following details:
Total Experience:
Current CTC:
Expected CTC:
Current Location:
Preferred Location:
Notice Period / Last Working Day (if serving notice):
⚠️ Kindly share your details only if you have not applied recently and are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!
- TiDB (Good to have)
- Kubernetes (Must have)
- MySQL (Must have)
- MariaDB (Must have)
- Looking for a candidate with more exposure to reliability than to maintenance
Position: SDE-1 DevSecOps
Location: Pune, India
Experience Required: 0+ Years
We are looking for a DevSecOps engineer to contribute to product development, mentor team members, and devise creative solutions for customer needs. We value effective communication in person, in documentation, and in code. Ideal candidates thrive in small, collaborative teams, love making an impact, and take pride in their work with a product-focused, self-driven approach. If you're passionate about integrating security and deployment seamlessly into the development process, we want you on our team.
About FlytBase
FlytBase is a global leader in enterprise drone software automation. The FlytBase platform enables drone-in-a-box deployments across the globe and has the largest network of partners in 50+ countries.
The team comprises young engineers and designers from top-tier universities such as IIT-B, IIT-KGP, University of Maryland, Georgia Tech, COEP, SRM, KIIT and with deep expertise in drone technology, computer science, electronics, aerospace, and robotics.
The company is headquartered in Silicon Valley, California, USA, and has R&D offices in Pune, India. Widely recognized as a pioneer in the commercial drone ecosystem, FlytBase continues to win awards globally - FlytBase was the Global Grand Champion at the ‘NTT Data Open Innovation Contest’ held in Tokyo, Japan, and was the recipient of the ‘TiE50 Award’ at TiE Silicon Valley.
Role and Responsibilities:
- Participate in the creation and maintenance of CI/CD solutions and pipelines.
- Leverage Linux and shell scripting for automating security and system updates, and design secure architectures using AWS services (VPC, EC2, S3, IAM, EKS/Kubernetes) to enhance application deployment and management.
- Build and maintain secure Docker containers, manage orchestration using Kubernetes, and automate configuration management with tools like Ansible and Chef, ensuring compliance with security standards.
- Implement and manage infrastructure using Terraform, aligning with security and compliance requirements, and set up Dynatrace for advanced monitoring, alerting, and visualization of security metrics. Develop Terraform scripts to automate and optimize infrastructure provisioning and management tasks.
- Utilize Git for secure source code management and integrate continuous security practices into CI/CD pipelines, applying vulnerability scanning and automated security testing tools.
- Contribute to security assessments, including vulnerability and penetration testing, and evaluations against frameworks such as NIST, CIS AWS Benchmarks, and NIS2.
- Implement and oversee compliance processes for SOC II, ISO27001, and GDPR.
- Stay updated on cybersecurity trends and best practices, including knowledge of SAST and DAST tools and the OWASP Top 10.
- Automate routine tasks and create tools to improve team efficiency and system robustness.
- Contribute to disaster recovery plans and ensure robust backup systems are in place.
- Develop and enforce security policies and respond effectively to security incidents.
- Manage incident response protocols, including on-call rotations and strategic planning.
- Conduct post-incident reviews to prevent recurrence and refine the system reliability framework.
- Implement Service Level Indicators (SLIs) and maintain Service Level Objectives (SLOs) and Service Level Agreements (SLAs) to ensure high standards of service delivery and reliability.
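The SLI/SLO work in the last responsibility can be sketched numerically; the request counts and the 99.9% target below are illustrative assumptions, not FlytBase's actual targets:

```python
# Hedged sketch: computing an availability SLI and the remaining error budget
# against an SLO target. All numbers are made-up examples.

def availability_sli(total_requests, failed_requests):
    """Fraction of successful requests in the window (the SLI)."""
    if total_requests == 0:
        return 1.0
    return (total_requests - failed_requests) / total_requests

def error_budget_remaining(slo, sli, total_requests):
    """How many more failures the SLO still allows in this window."""
    allowed_failures = (1 - slo) * total_requests
    actual_failures = (1 - sli) * total_requests
    return max(0.0, allowed_failures - actual_failures)

sli = availability_sli(total_requests=1_000_000, failed_requests=400)
budget = error_budget_remaining(slo=0.999, sli=sli, total_requests=1_000_000)
# A 99.9% SLO over 1M requests allows 1000 failures; 400 used, ~600 remain.
```

An SLA would then typically be set looser than the SLO, so the contractual commitment is breached only after the internal objective has already been missed.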
Best suited for candidates who: (Skills/Experience)
- Up to 4 years of experience in a related field, with a strong emphasis on learning and execution.
- Background in IT or computer science.
- Familiarity with CI/CD tools, cloud platforms (AWS, Azure, or GCP), and programming languages like Python, JavaScript, or Ruby.
- Solid understanding of network layers and TCP/IP protocols.
- In-depth understanding of operating systems, networking, and cloud services.
- Strong problem-solving skills with a 'hacker' mindset.
- Knowledge of security principles, threat modeling, risk assessment, and vulnerability management is a plus.
- Relevant certifications (e.g., CISSP, GWAPT, OSCP) are a plus.
Compensation:
This role comes with an annual CTC that is market competitive and depends on the quality of your work experience, degree of professionalism, culture fit, and alignment with FlytBase’s long-term business strategy.
Perks:
- Fast-paced Startup culture
- Hacker mode environment
- Enthusiastic and approachable team
- Professional autonomy
- Company-wide sense of purpose
- Flexible work hours
- Informal dress code
Avegen is a digital healthcare company empowering individuals to take control of their health and supporting healthcare professionals in delivering life-changing care. Avegen’s core product, HealthMachine®, is a cloud-hosted, next-generation digital healthcare engine for pioneers in digital healthcare, including healthcare providers and pharmaceutical companies, to deploy high-quality robust digital care solutions efficiently and effectively. We are ISO27001, ISO13485, and Cyber Essentials certified, and compliant with the NHS Data Protection Toolkit and GDPR.
Job Summary:
Senior Software Engineer will be responsible for developing, designing, and maintaining the core framework of mobile applications for our platform. This includes tasks such as creating and implementing new features, troubleshooting and debugging any issues, optimizing the performance of the app, collaborating with cross-functional teams, and staying current with the latest advancements in React Native and mobile app development. We are looking for exceptional candidates who have an in-depth understanding of React, JavaScript, and TypeScript, can create pixel-perfect UI, and are obsessed with creating the best experiences for end users.
Your responsibilities include:
- Architect and build performant mobile applications on both iOS and Android platforms using React Native.
- Work with managers to provide technical consultation and assist in defining the scope and sizing of work.
- Maintain compliance with standards such as ISO 27001, ISO 13485, and Cyber Essentials that Avegen adheres to.
- Lead configuration of our platform HealthMachine™ in line with functional specifications and development of platform modules with a focus on quality and performance.
- Write well-documented, clean JavaScript/TypeScript code to build reusable components in the platform.
- Maintain code, write automated tests, and assist DevOps in CI/CD to ensure the product is of the highest quality.
- Lead by example in best practices for software design and quality, staying current with tools and technologies and seeking out the best ones for the job.
- Train team members on software design principles and emerging technologies by taking regular engineering workshops.
Requirements:
- Hands-on experience working in a product company developing consumer-facing mobile apps that are deployed and currently in use in production, with at least 3 mobile apps live in the Apple App Store or Google Play Store.
- Proven ability to mentor junior engineers to realize a delivery goal.
- Solid attention to detail, problem-solving, and analytical skills & excellent troubleshooting skills.
- In-depth understanding of React and its ecosystem with the latest features.
- Experience in writing modular, reusable custom JavaScript/TypeScript modules that scale well for high-volume applications.
- Strong familiarity with native development tools such as Xcode and Android Studio.
- A positive, “can-do” attitude and the confidence to lead complex React Native implementations.
- Experience in building mobile apps with intensive server communication (REST APIs, GraphQL, WebSockets, etc.).
- Self-starter, able to work in a fast-paced, deadline-driven environment with multiple priorities.
- Excellent command of version control systems like Git.
- Experience working in Agile/Scrum methodology, understanding of the application life cycle, and experience with project management tools like Atlassian JIRA.
- Good command of the Unix operating system and understanding of cloud computing platforms like AWS, GCP, Azure, etc.
- Hands-on experience in database technologies including RDBMS and NoSQL and a firm grasp of data models and ER diagrams.
- Open source contributions and experience developing your own React Native wrappers for native functionality is a plus.
Qualification:
BE/BTech/MS in Information Technology, Computer Science, or a related discipline.
TVARIT GmbH develops and delivers artificial intelligence (AI) solutions for the manufacturing, automotive, and process industries. With its software products, TVARIT enables its customers to make intelligent and well-founded decisions, e.g., in predictive maintenance, increasing OEE, and predictive quality. We have renowned reference customers, competent technology, a strong research team from renowned universities, and the award of a renowned AI prize (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Senior Data Engineer from the manufacturing industry with over four years of experience to join our team. The Senior Data Engineer will oversee the department’s data infrastructure, including developing a data model, integrating large amounts of data from different systems, building and enhancing a data lakehouse and the subsequent analytics environment, and writing scripts to facilitate data analysis. The ideal candidate will have a strong foundation in ETL pipelines and Python; additional experience with Azure and Terraform is a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required:
- Experience in the manufacturing industry (metal industry is a plus)
- 4+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- Architect and optimize complex data pipelines, leading the design and implementation of scalable data infrastructure, and ensuring data quality and reliability at scale
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of big data technologies such as Spark, Flink, Hadoop, and NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical experience & skills that can extract actionable insights from raw data to help improve the business.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
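The ETL-pipeline and Python skills listed above can be illustrated with a minimal extract-transform-load sketch using only the standard library; the CSV source, machine-reading schema, and table name are made-up examples:

```python
# Minimal, self-contained ETL sketch (extract -> transform -> load).
# The embedded CSV stands in for a real source system; schema is illustrative.
import csv
import io
import sqlite3

RAW_CSV = """machine_id,temp_c
M1,71.5
M2,
M1,73.0
"""

def extract(text):
    """Read raw CSV text into a list of dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop incomplete readings and cast temperatures to float."""
    return [(r["machine_id"], float(r["temp_c"])) for r in rows if r["temp_c"]]

def load(records, conn):
    """Write cleaned records into the target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS readings (machine_id TEXT, temp_c REAL)")
    conn.executemany("INSERT INTO readings VALUES (?, ?)", records)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
# count -> 2 (the row with a missing temperature is dropped)
```

A production version would swap the in-memory pieces for real connectors and schedule the steps with an orchestrator, but the extract/transform/load boundaries stay the same.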
Nice To Have:
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud infrastructure.
- Bachelor’s degree in computer science, Information Technology, Engineering, or a related field from top-tier Indian Institutes of Information Technology (IIITs).
Benefits and Perks
- A culture that fosters innovation, creativity, continuous learning, and resilience
- Progressive leave policy promoting work-life balance
- Mentorship opportunities with highly qualified internal resources and industry-driven programs
- Multicultural peer groups and supportive workplace policies
- Annual workcation program allowing you to work from various scenic locations
- Experience the unique environment of a dynamic start-up
Why should you join TVARIT?
Working at TVARIT, a deep-tech German IT startup, offers a unique blend of innovation, collaboration, and growth opportunities. We seek individuals eager to adapt and thrive in a rapidly evolving environment.
If this opportunity excites you and aligns with your career aspirations, we encourage you to apply today!
Job Title: .NET Developer with Cloud Migration Experience
Job Description:
We are seeking a skilled .NET Developer with experience in C#, MVC, and ASP.NET to join our team. The ideal candidate will also have hands-on experience with cloud migration projects, particularly in migrating on-premise applications to cloud platforms such as Azure or AWS.
Responsibilities:
- Develop, test, and maintain .NET applications using C#, MVC, and ASP.NET
- Collaborate with cross-functional teams to define, design, and ship new features
- Participate in code reviews and ensure coding best practices are followed
- Work closely with the infrastructure team to migrate on-premise applications to the cloud
- Troubleshoot and debug issues that arise during migration and post-migration phases
- Stay updated with the latest trends and technologies in .NET development and cloud computing
Requirements:
- Bachelor's degree in Computer Science or related field
- X+ years of experience in .NET development using C#, MVC, and ASP.NET
- Hands-on experience with cloud migration projects, preferably with Azure or AWS
- Strong understanding of cloud computing concepts and principles
- Experience with database technologies such as SQL Server
- Excellent problem-solving and communication skills
Preferred Qualifications:
- Microsoft Azure or AWS certification
- Experience with other cloud platforms such as Google Cloud Platform (GCP)
- Familiarity with DevOps practices and tools
Company - Apptware Solutions
Location - Baner, Pune
Team Size - 130+
Job Description -
Cloud Engineer with 8+ years of experience
Roles and Responsibilities
● Have 8+ years of strong experience in deployment, management, and maintenance of large systems, on-premises or in the cloud
● Experience maintaining and deploying highly-available, fault-tolerant systems at scale
● A drive towards automating repetitive tasks (e.g. scripting via Bash, Python, Ruby, etc)
● Practical experience with Docker containerization and clustering (Kubernetes/ECS)
● Expertise with AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda, VPN)
● Version control system experience (e.g. Git)
● Experience implementing CI/CD (e.g. Jenkins, TravisCI, CodePipeline)
● Operational (e.g. HA/backups) experience with NoSQL (e.g. MongoDB, Redis) and SQL (e.g. MySQL) databases
● Experience with configuration management tools (e.g. Ansible, Chef)
● Experience with infrastructure-as-code (e.g. Terraform, CloudFormation)
● Bachelor's or master’s degree in CS, or equivalent practical experience
● Effective communication skills
● Hands-on experience with cloud providers like MS Azure and Google Cloud
● A sense of ownership and ability to operate independently
● Experience with Jira and one or more Agile SDLC methodologies
● Nice to Have:
○ Sensu and Graphite
○ Ruby or Java
○ Python or Groovy
○ Java Performance Analysis
Role: Cloud Engineer
Industry Type: IT-Software, Software Services
Functional Area: IT Software - Application Programming, Maintenance
Employment Type: Full Time, Permanent
Role Category: Programming & Design
Publicis Sapient Overview:
As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions, and independently drive design discussions to ensure the necessary health of the overall solution.
Job Summary:
As Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions, and independently drive design discussions to ensure the necessary health of the overall solution.
The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python, experience in data ingestion, integration, and wrangling, as well as computation and analytics pipelines, and exposure to Hadoop ecosystem components. You are also required to have hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms.
Role & Responsibilities:
Your role is focused on Design, Development and delivery of solutions involving:
• Data Integration, Processing & Governance
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Implement scalable architectural models for data processing and storage
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode
• Build functionality for data analytics, search and aggregation
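As a rough illustration of the real-time aggregation work described in the responsibilities above, here is a tumbling-window count in plain Python; a production pipeline would use an engine like Spark Streaming or Flink, and the event schema and window size here are assumptions for the sketch:

```python
# Hedged sketch of tumbling-window aggregation, the core idea behind
# windowed counts in streaming engines. Events are (timestamp, key) pairs;
# the 10-second window is an illustrative choice.
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group events into fixed, non-overlapping windows and count per key."""
    counts = defaultdict(int)
    for ts, key in events:
        # Each event belongs to exactly one window, keyed by its start time.
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "click"), (3, "view"), (7, "click"), (12, "click")]
result = tumbling_window_counts(events, window_seconds=10)
# result -> {(0, 'click'): 2, (0, 'view'): 1, (10, 'click'): 1}
```

The same grouping logic underlies batch-mode aggregation too; streaming engines mainly add incremental state management and late-event handling on top of it.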
Experience Guidelines:
Mandatory Experience and Competencies:
1. Overall 5+ years of IT experience with 3+ years in data-related technologies
2. Minimum 2.5 years of experience in big data technologies and working exposure to related data services on at least one cloud platform (AWS / Azure / GCP)
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required in building end-to-end data pipelines
4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferable
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
6. Well-versed working knowledge of data platform related services on at least one cloud platform, IAM, and data security
Preferred Experience and Knowledge (Good to Have):
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD – infra provisioning on cloud, auto build & deployment pipelines, code quality
6. Cloud data specialty and other related big data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
Publicis Sapient Overview:
As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions, and independently drive design discussions to ensure the necessary health of the overall solution.
Job Summary:
As Senior Associate L1 in Data Engineering, you will create technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions, and independently drive design discussions to ensure the necessary health of the overall solution.
The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python, experience in data ingestion, integration, and wrangling, as well as computation and analytics pipelines, and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferable.
Role & Responsibilities:
Job Title: Senior Associate L1 – Data Engineering
Your role is focused on Design, Development and delivery of solutions involving:
• Data Ingestion, Integration and Transformation
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
• Build functionality for data analytics, search and aggregation
Experience Guidelines:
Mandatory Experience and Competencies:
1. Overall 3.5+ years of IT experience with 1.5+ years in data-related technologies
2. Minimum 1.5 years of experience in big data technologies
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required in building end-to-end data pipelines; working knowledge of real-time data pipelines is an added advantage
4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferable
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
Preferred Experience and Knowledge (Good to Have):
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD – infra provisioning on cloud, auto build & deployment pipelines, code quality
6. Working knowledge of data platform related services on at least one cloud platform, IAM, and data security
7. Cloud data specialty and other related big data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes