
50+ Google Cloud Platform (GCP) Jobs in India

Apply to 50+ Google Cloud Platform (GCP) Jobs on CutShort.io. Find your next job, effortlessly. Browse Google Cloud Platform (GCP) Jobs and apply today!

Mango Sciences
Remote only
7 - 12 yrs
₹20L - ₹40L / yr
Python
SQL
ETL
Data pipeline
Data warehousing

The Mission: We are looking for a visionary Technical Leader to own our healthcare data ecosystem from the first byte to the final dashboard. You won't just be managing a platform; you’ll be the primary architect of a clinical data engine that powers life-changing analytics. If you are an expert in SQL and Python who thrives on solving the "puzzle" of healthcare interoperability (FHIR/HL7) while mentoring a high-performing team, this is your seat at the table.

What You’ll Own

  • Architectural Sovereignty: Define the end-to-end blueprint for our data warehouse (staging, marts, and semantic layers). You choose the frameworks, set the coding standards, and decide how we handle complex dimensional modeling and SCDs.
  • Engineering Excellence: Lead by example. You’ll write production-grade Python for ingestion frameworks and craft advanced, set-based SQL transformations that others use as gold-standard references.
  • The Interoperability Bridge: Turn the chaos of EHR exports, REST APIs, and claims data into clean, FHIR-aligned governed datasets. You ensure our data speaks the language of modern healthcare.
  • Technical Mentorship: Act as the "Engineer’s Engineer." You’ll run design reviews, champion CI/CD best practices, and build the runbooks that keep our small but mighty team efficient.
  • Security by Design: Direct the implementation of HIPAA-compliant data flows, ensuring encryption, auditability, and access controls are baked into the architecture, not bolted on.
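The dimensional modeling and SCD handling mentioned above can be illustrated in miniature. Below is a minimal Type 2 slowly-changing-dimension merge in plain Python — a hypothetical sketch for illustration, not Mango's actual pipeline; the field names are invented:

```python
from datetime import date

def scd2_merge(dimension, incoming, today):
    """Apply a Type 2 SCD update: expire changed rows, append new versions.

    dimension: list of dicts with keys id, attrs, valid_from, valid_to (None = current)
    incoming:  dict of id -> latest attributes from the source system
    """
    current = {r["id"]: r for r in dimension if r["valid_to"] is None}
    out = list(dimension)
    for key, attrs in incoming.items():
        row = current.get(key)
        if row is None:
            # brand-new entity: open its first version
            out.append({"id": key, "attrs": attrs, "valid_from": today, "valid_to": None})
        elif row["attrs"] != attrs:
            row["valid_to"] = today  # close out the old version, keep history
            out.append({"id": key, "attrs": attrs, "valid_from": today, "valid_to": None})
    return out

dim = [{"id": "P1", "attrs": {"clinic": "North"},
        "valid_from": date(2023, 1, 1), "valid_to": None}]
dim = scd2_merge(dim, {"P1": {"clinic": "South"}}, date(2024, 6, 1))
```

After the merge, the old row is closed with an end date and a new current row exists, which is the behavior a Type 2 dimension preserves for point-in-time reporting.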

The Stack You’ll Command

  • Languages: Expert-level SQL (CTEs, window functions, query tuning) and production Python.
  • Databases: Deep polyglot experience across MSSQL, PostgreSQL, Oracle, and NoSQL (MongoDB/Elasticsearch).
  • Orchestration: Advanced Apache Airflow (SLAs, retries, and complex DAGs).
  • Ecosystem: GitHub for CI/CD, Tableau/PowerBI for semantic layers, and Unix/Linux for shell scripting.
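As a flavor of the set-based SQL this stack implies, here is a toy deduplication transform using a CTE and a window function, with SQLite (3.25+) standing in for a warehouse engine; the table and column names are invented for illustration:

```python
import sqlite3

# Keep only the latest record per patient: a typical set-based dedup
# written with a CTE and ROW_NUMBER(), rather than row-by-row loops.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE raw_labs (patient_id TEXT, loaded_at INTEGER, value REAL);
INSERT INTO raw_labs VALUES ('P1', 1, 4.2), ('P1', 2, 4.5), ('P2', 1, 7.1);
""")
latest = con.execute("""
    WITH ranked AS (
        SELECT patient_id, loaded_at, value,
               ROW_NUMBER() OVER (PARTITION BY patient_id
                                  ORDER BY loaded_at DESC) AS rn
        FROM raw_labs
    )
    SELECT patient_id, value FROM ranked WHERE rn = 1
    ORDER BY patient_id
""").fetchall()
```

The same pattern transfers directly to PostgreSQL or MSSQL, which is the point of writing transformations set-based rather than procedurally.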

Who You Are

  • Experienced: You have 8–12+ years in data engineering, with a significant portion spent in a Lead or Architect capacity.
  • Healthcare-Fluent: You understand the stakes of PHI. You’ve worked with FHIR/HL7 and know how to map clinical resources to analytical models.
  • Performance-Obsessed: You don’t just make it work; you make it fast. You’re the person who uses EXPLAIN/ANALYZE to shave minutes off a query.
  • Culture-Builder: You believe in documentation, observability (lineage/freshness), and "leaving the campground cleaner than you found it."
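The EXPLAIN/ANALYZE habit described above can be demonstrated with SQLite's EXPLAIN QUERY PLAN (the rough analogue of Postgres's EXPLAIN ANALYZE); a hypothetical table shows the plan flipping from a full scan to an index search once an index exists:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (patient_id TEXT, ts INTEGER)")
con.executemany("INSERT INTO events VALUES (?, ?)",
                [(f"P{i % 100}", i) for i in range(1000)])

def plan(sql):
    # First row of the query plan; its 'detail' column reads e.g.
    # 'SCAN events' vs 'SEARCH events USING INDEX idx_patient (...)'.
    return con.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[3]

query = "SELECT * FROM events WHERE patient_id = 'P7'"
before = plan(query)  # full table scan
con.execute("CREATE INDEX idx_patient ON events (patient_id)")
after = plan(query)   # index search
```

Reading the plan before and after adding the index is exactly the workflow: the difference between SCAN and SEARCH is where the shaved minutes come from.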

Bonus Points for:

  • Privacy Pro: Experience with PII/PHI de-identification and privacy-by-design.
  • Cloud Native: Deep familiarity with Azure, AWS, or GCP security and data services.
  • Search Experts: Experience with near-real-time indexing via Elasticsearch.

To be considered for the next round, please fill out the Google Form with your updated resume.

 

Pre-screen Question: https://forms.gle/q3CzfdSiWoXTCEZJ7

 

Details: https://forms.gle/FGgkmQvLnS8tJqo5A

Quantiphi
Posted by Nikita Sinha
Mumbai, Trivandrum, Bengaluru (Bangalore)
3 - 6 yrs
Up to ₹30L / yr (varies)
Google Cloud Platform (GCP)
DevOps
CI/CD
Kubernetes
GitHub

Role & Responsibilities

  • Develop and deliver automation software to build and improve platform functionality
  • Ensure reliability, availability, and manageability of applications and cloud platforms
  • Champion adoption of Infrastructure as Code (IaC) practices
  • Design and build self-service, self-healing, monitoring, and alerting platforms
  • Automate development and testing workflows through CI/CD pipelines (Git, Jenkins, SonarQube, Artifactory, Docker containers)
  • Build and manage container hosting platforms using Kubernetes

Requirements

  • Strong experience deploying and maintaining GCP cloud infrastructure
  • Well-versed in service-oriented and cloud-based architecture design patterns
  • Knowledge of cloud services including compute, storage, networking, messaging, and automation tools (e.g., CloudFormation/Terraform equivalents)
  • Experience with relational and NoSQL databases (Postgres, Cassandra)
  • Hands-on experience with automation/configuration tools (Puppet, Chef, Ansible, Terraform)

Additional Skills

  • Strong Linux system administration and troubleshooting skills
  • Programming/scripting exposure (Bash, Python, Core Java, or Scala)
  • CI/CD pipeline experience (Jenkins, Git, Maven, etc.)
  • Experience integrating solutions in multi-region environments
  • Familiarity with Agile/Scrum/DevOps methodologies


AI Recruiting Platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Remote only
1 - 15 yrs
₹70L - ₹99L / yr
MySQL
Python
Microservices
API
Java

Description

Join the company as a Backend Developer and become a pivotal force in building the robust, scalable services that power our innovative platforms. In this role, you will design, develop, and maintain server‑side applications, ensuring high performance and reliability for millions of users. You’ll collaborate closely with cross‑functional product, front‑end, and DevOps teams to translate business requirements into clean, efficient code, while participating in code reviews and architectural discussions. Our dynamic environment encourages continuous learning, offering opportunities to work with cutting‑edge technologies, cloud infrastructures, and modern development practices. As a key contributor, your work will directly impact product quality, user satisfaction, and the overall success of the company's mission to streamline hiring solutions.


Requirements:

  • 1–15 years of professional experience in backend development, with a strong focus on building APIs and microservices.
  • Proficiency in server‑side languages such as Python, Java, Node.js, or Go, and solid understanding of object‑oriented and functional programming paradigms.
  • Extensive experience with relational (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Redis), including schema design and query optimization.
  • Familiarity with cloud platforms (AWS, GCP, Azure) and containerization technologies like Docker and Kubernetes.
  • Hands‑on experience with version control (Git), CI/CD pipelines, and automated testing frameworks.
  • Strong problem‑solving abilities, effective communication skills, and a collaborative mindset for working within multidisciplinary teams.


Roles and Responsibilities:

  • Design, develop, and maintain high‑throughput backend services and RESTful APIs that support core product features.
  • Implement data models and storage solutions, ensuring data integrity, security, and optimal performance.
  • Collaborate with front‑end engineers, product managers, and designers to define technical requirements and deliver end‑to‑end solutions.
  • Participate in code reviews, provide constructive feedback, and uphold coding standards and best practices.
  • Monitor, troubleshoot, and optimize production systems, implementing robust logging, alerting, and performance tuning.
  • Contribute to the continuous improvement of development workflows, including CI/CD automation, testing strategies, and deployment processes.
  • Stay current with emerging technologies and industry trends, proposing innovative approaches to enhance system architecture.


Budget:

  • Job Type: payroll
  • Experience Range: 1–15 years


Remote only
0 - 0 yrs
₹1L - ₹1.5L / yr
Amazon Web Services (AWS)
Cyber Security
IT infrastructure
IT security
AWS CloudFormation

📍 Position: IT Intern

👩‍💻 Experience: 0–6 Months (Freshers/Recent graduates can apply)

🎓 Qualification: B.Tech (IT) / M.Tech (IT) only

📌 Mode: Remote (WFH)

⏳ Shift: Willingness to work in night/rotational shifts

🗣 Communication: Excellent English


Key Responsibilities:

- Assist in troubleshooting and resolving basic desktop, software, hardware, and network-related issues under supervision.

- Support user account management activities using Microsoft Entra ID (formerly Azure AD), Active Directory, and Microsoft 365.

- Assist the IT team in configuring, monitoring, and supporting AWS cloud services (EC2, S3, IAM, WorkSpaces).

- Support maintenance and monitoring of on-premises server infrastructure, internal applications, and email services.

- Assist with backups, basic disaster recovery tasks, and security procedures as per company policies.

- Help create and update technical documentation and knowledge base articles.

- Work closely with internal teams and assist in system upgrades, IT infrastructure improvements, and ongoing projects.


💻 Technical Requirements:

- Laptop with an i5 or higher processor

- Reliable internet connectivity with at least 100 Mbps speed

Neuvamacro Technology Pvt Ltd
Chennai
3 - 6 yrs
₹12L - ₹17L / yr
skill iconJavascript
skill iconPython
skill iconDjango
skill iconFlask
skill iconNodeJS (Node.js)
+11 more

Years of Experience – 3 to 6 years

Location – Chennai

Work Mode: Hybrid – 3 days mandatory Work From Office (WFO).

Job Type: Full-Time


Role Description:

• Develops software solutions by studying information needs; conferring with users; studying systems flow, data usage, and work processes; investigating problem areas; following the software development lifecycle.

• Determines operational feasibility by evaluating analysis, problem definition, requirements, solution development, and proposed solutions.

• Documents and demonstrates solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code.

• Prepares and installs solutions by determining and designing system specifications, standards, and programming.

• Improves operations by conducting systems analysis and recommending changes in policies and procedures.

• Updates job knowledge by studying state-of-the-art development tools, programming techniques, and computing equipment; participating in educational opportunities; reading professional publications; maintaining personal networks; participating in professional organizations.

• Protects operations by keeping information confidential.

• Provides information by collecting, analyzing, and summarizing development and service issues. Accomplishes engineering and organization mission by completing related results as needed.

• Supports and develops software engineers by providing advice, coaching, and educational opportunities.


Mandatory skills:

• Hands-on experience with web development in any of the following programming languages: Python, JavaScript

• Hands-on experience in the following JavaScript framework: React

• Hands-on experience in any of the following frameworks: Python (Django, Flask) or NodeJS (Express, NestJS)

• Experience with back-end development, basic microservices implementation, and containerization using Docker

• Expertise in relational databases such as Postgres, MySQL, Oracle, etc.

• Expertise in NoSQL databases such as MongoDB, Amazon DynamoDB, Cassandra, etc.

• Good knowledge of any of the cloud providers such as Amazon Web Services, Microsoft Azure, or Google Cloud.

• Excellent verbal and written communication skills.

TVARIT GmbH
Posted by Dr. Soumya Sahadevan
Pune
7 - 15 yrs
₹20L - ₹30L / yr
Amazon Web Services (AWS)
Microsoft Azure
Google Cloud Platform (GCP)
PySpark
Databricks

About TVARIT

TVARIT GmbH specializes in developing and delivering cutting-edge artificial intelligence (AI) solutions for the metal industry, including steel, aluminum, copper, cast iron, and more. Our software products empower customers to make intelligent, data-driven decisions, driving advancements in Predictive Quality (PsQ), Predictive Maintenance (PdM), and Energy Consumption Reduction (PsE), etc. With a strong portfolio of renowned reference customers, state-of-the-art technology, a talented research team from prestigious universities, and recognition through esteemed awards such as the EU Horizon 2020 AI Prize, TVARIT is recognized as one of the most innovative AI companies in Germany and Europe. We are seeking a self-motivated individual with a positive "can-do" attitude and excellent oral and written communication skills in English to join our team.


Job Description: We are looking for a Senior Data Engineer with strong expertise in Azure Databricks, PySpark, and distributed computing to develop and optimize scalable ETL pipelines for manufacturing analytics. The role involves working with high-frequency industrial data to enable real-time and batch data processing.


Key Responsibilities

• Build scalable real-time and batch processing workflows using Azure Databricks, PySpark, and Apache Spark.

• Perform data pre-processing, including cleaning, transformation, deduplication, normalization, encoding, and scaling to ensure high-quality input for downstream analytics.

• Design and maintain cloud-based data architectures, including data lakes, lakehouses, and warehouses, following the Medallion Architecture.

• Deploy and optimize data solutions on Azure (preferred), AWS, or GCP with a focus on performance, security, and scalability.

• Develop and optimize ETL/ELT pipelines for structured and unstructured data from IoT, MES, SCADA, LIMS, and ERP systems.

• Automate data workflows using CI/CD and DevOps best practices, ensuring security and compliance with industry standards.

• Monitor, troubleshoot, and enhance data pipelines for high availability and reliability.

• Utilize Docker and Kubernetes for scalable data processing.

• Collaborate with the automation team, data scientists, and engineers to provide clean, structured data for AI/ML models.
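The pre-processing responsibilities above (cleaning, deduplication, scaling) can be sketched in plain Python; in practice this would be done in PySpark, and the sample sensor data here is invented for illustration:

```python
def deduplicate(rows, key):
    """Keep the first occurrence per key, preserving order
    (a toy stand-in for Spark's dropDuplicates)."""
    seen, out = set(), []
    for row in rows:
        k = row[key]
        if k not in seen:
            seen.add(k)
            out.append(row)
    return out

def min_max_scale(values):
    """Scale a numeric column into [0, 1]; constant columns map to 0.0."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

readings = [
    {"sensor": "t1", "temp": 20.0},
    {"sensor": "t1", "temp": 20.0},  # duplicate reading from the same sensor
    {"sensor": "t2", "temp": 30.0},
]
unique = deduplicate(readings, "sensor")
scaled = min_max_scale([r["temp"] for r in unique])
```

The same two operations map one-to-one onto DataFrame transformations when the data volume calls for distributed execution.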


Desired Skills and Qualifications

• Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.

• 7+ years of experience in core data engineering, with a strong focus on cloud platforms such as Azure (preferred), AWS, or GCP.

• Proficiency in PySpark, Azure Databricks, Python, and Apache Spark.

• 2+ years of team-handling experience.

• Expertise in relational databases (e.g., SQL Server, PostgreSQL), time-series databases (e.g., InfluxDB), and NoSQL databases (e.g., MongoDB, Cassandra).

• Experience with containerization (Docker, Kubernetes).

• Strong analytical and problem-solving skills with attention to detail.

• Good to have: MLOps and DevOps experience, including model lifecycle management.

• Excellent communication and collaboration skills, with a proven ability to work effectively as a team player.

• Comfortable working in a dynamic, fast-paced startup environment, adapting quickly to changing priorities and responsibilities.

Webnyay
Noida
4 - 8 yrs
₹6L - ₹30L / yr
Google Cloud Platform (GCP)
Artificial Intelligence (AI)
Python
Django
Apache Kafka

We are looking for an expert backend software developer to join Webnyay. We are an enterprise SaaS startup catering to India and international markets. We are growing fast and need a senior software developer who is an expert in Python/Django and GCP.


What we are looking for:

  • At least 6 years of professional software development experience.
  • At least 4 years of experience with Python & Django.
  • Proficiency in Natural Language Processing (tokenization, stopword removal, lemmatization, embeddings, etc.)
  • Experience in computer vision fundamentals, particularly object detection concepts and architectures (e.g., YOLO, Faster R-CNN)
  • Experience in search and retrieval systems and related concepts like ranking models, vector search, or semantic search techniques
  • Experience with multiple databases (relational and non-relational).
  • Experience with hosting on GCP and other cloud services.
  • Familiar with continuous integration and other automation.
  • Focus on code quality and writing scalable code.
  • Ability to learn and adopt new technologies depending on business requirements.
  • Prior startup experience will be a plus!
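The NLP requirement above (tokenization, stopword removal) can be sketched with the standard library alone; real pipelines would use spaCy or NLTK, and the stopword list here is an illustrative subset, not a complete one:

```python
import re

STOPWORDS = {"the", "is", "a", "of", "and", "to"}  # illustrative subset only

def tokenize(text):
    """Lowercase word tokenizer: a crude stand-in for a real NLP tokenizer."""
    return re.findall(r"[a-z0-9']+", text.lower())

def remove_stopwords(tokens):
    """Drop high-frequency words that carry little signal for retrieval."""
    return [t for t in tokens if t not in STOPWORDS]

tokens = remove_stopwords(tokenize("The court is reviewing the terms of the agreement."))
```

Steps like lemmatization and embedding generation would follow the same shape: small, composable functions applied over the token stream.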


Some of your responsibilities would include:

  • Work closely in a highly AGILE environment with a team of engineers.
  • Create and maintain technical documentation of technical design and solution.
  • Build products/features that are highly scalable, secure, highly available, high performing and cost-effective.
  • Help team in debugging.
  • Perform code reviews.
  • Understand the full feature set/ implementation and architecture of the applications.
  • Analyze business goals and product requirements and contribute to application architecture design, development and delivery.
  • Provide technical expertise for every phase of the project lifecycle; from concept development to solution design, implementation, optimization and support.
  • Act as an interface with business teams to understand and create technical specifications for workable solutions within the project.
  • Explore and work with LLM APIs and Generative AI.
  • Make performance-related recommendations, identify and eliminate performance bottlenecks (hardware, software, configuration); drive performance tuning, re-design and re-factoring.
  • Participate in the software development lifecycle, which includes research, new development, modification, security, reuse, re-engineering and maintenance of common component libraries.
  • Participate in product definition and feature prioritization.
  • Collaborate with internal teams and stakeholders across business verticals.


CLOUDSUFI
Posted by Ayushi Dwivedi
Remote only
6 - 11 yrs
₹30L - ₹45L / yr
Google Cloud Platform (GCP)
SQL
Python

Highlights: Candidate's current location should be Bangalore

Total Experience: 6–12 years

Joining period: within 30 days

GCP BigQuery expert; GCP certified


About Us

CLOUDSUFI, a Google Cloud Premier Partner, is a global leading provider of data-driven digital transformation across cloud-based enterprises. With a global presence and focus on Software & Platforms, Life sciences and Healthcare, Retail, CPG, financial services and supply chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.

 

Our Values 

We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.

 

Equal Opportunity Statement 

CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, colour, religion, gender, gender identity or expression, sexual orientation and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/


Job Summary

We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Google Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.


Key Responsibilities

ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Google Cloud Platform services (Cloud Run, Dataflow) and Python.

Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.

Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.
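One common way to implement the ID-generation half of this responsibility is to hash normalized attributes into a deterministic identifier, so the same real-world entity always maps to the same ID. This is a hypothetical scheme for illustration, not CLOUDSUFI's actual convention:

```python
import hashlib

def entity_id(name, country):
    """Derive a stable, deterministic ID from normalized attributes.

    Normalizing (trim + lowercase) before hashing means trivially different
    source spellings of the same entity collapse to one identifier.
    """
    normalized = f"{name.strip().lower()}|{country.strip().lower()}"
    return "ent/" + hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:12]

def resolve(existing, name, country):
    """Reuse an existing ID when the entity is already known, else mint one."""
    new_id = entity_id(name, country)
    return existing.get(new_id, new_id)

a = entity_id("  Acme Corp ", "India")
b = entity_id("acme corp", "INDIA")
```

Real entity resolution layers fuzzier matching (aliases, geocoding, edit distance) on top, but a deterministic base ID like this keeps repeated imports idempotent.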

Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards. 

API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.

Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.

Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.

Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.


Qualifications and Skills

Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.

Experience: 6+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.

Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.

Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.


Core Competencies:

Must Have - SQL, Python, BigQuery, (GCP DataFlow / Apache Beam), Google Cloud Storage (GCS)

Must Have - GCP Certification

Must Have - Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)

Secondary Skills - SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling

Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).

Experience with data validation techniques and tools.

Familiarity with CI/CD practices and the ability to work in an Agile framework.

Strong problem-solving skills and keen attention to detail.

Ampera Technologies
Posted by Faisal AshrafNomani
Bengaluru (Bangalore), Chennai
4 - 10 yrs
Best in industry
Amazon Web Services (AWS)
Microsoft Azure
Google Cloud Platform (GCP)
Large Language Models (LLM)
AI Agents

Job Description:

 

We are seeking a Cloud & AI Platform Engineer to design and operate AI-native infrastructure that supports large-scale machine learning, generative AI, and agentic AI systems. 

This role will focus on building secure, scalable, and automated multi-cloud platforms across AWS, Azure, GCP, and hybrid on-prem environments, enabling teams to deploy LLMs, AI agents, and data-driven applications reliably in production. 

You will work at the intersection of cloud engineering, MLOps, LLMOps, DevOps, and data infrastructure, helping build platforms that support RAG pipelines, vector search, AI model lifecycle management, and AI observability.

 

Key Responsibilities

AI & Agentic Infrastructure 

  • Design infrastructure to support agentic AI systems, autonomous agents, and multi-agent workflows. 
  • Build scalable runtime environments for LLM orchestration frameworks. 
  • Enable deployment of AI copilots, assistants, and autonomous decision systems. 

Common frameworks may include: 

  • LangChain 
  • LlamaIndex 
  • AutoGPT 

 

LLMOps & AI Model Lifecycle 

Design and manage LLMOps pipelines for the full lifecycle of large language models: 

  • Model deployment 
  • Prompt management 
  • Versioning 
  • Evaluation and testing 
  • Model monitoring 

Integrate with AI platforms such as: 

  • Azure Machine Learning 
  • Amazon SageMaker 
  • Vertex AI 

 

Retrieval-Augmented Generation (RAG) Infrastructure 

Design and optimize RAG pipelines that integrate enterprise knowledge with LLMs. 

Responsibilities include: 

  • Document ingestion pipelines 
  • Embedding generation workflows 
  • Knowledge indexing 
  • Query orchestration 
  • Retrieval optimization 
  • Support scalable semantic search architectures. 
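The retrieval step of such a RAG pipeline reduces to ranking documents by embedding similarity. Here is a toy version with hand-rolled cosine similarity over invented three-dimensional "embeddings"; real systems use an embedding model and a vector database:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, index, k=2):
    """Rank documents by similarity to the query embedding (toy vector search)."""
    scored = sorted(index, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in scored[:k]]

# Invented corpus with made-up embeddings; in production these come
# from an embedding model over ingested enterprise documents.
index = [
    {"text": "refund policy",  "vec": [0.9, 0.1, 0.0]},
    {"text": "shipping times", "vec": [0.1, 0.9, 0.0]},
    {"text": "returns process", "vec": [0.8, 0.2, 0.1]},
]
top = retrieve([1.0, 0.0, 0.0], index, k=2)
```

The top-k passages returned here are what gets stuffed into the LLM prompt; the pipeline responsibilities listed above (ingestion, embedding, indexing, query orchestration) all exist to feed this one ranking step well.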

 

Vector Database & Knowledge Infrastructure 

Deploy and manage vector databases used for AI applications and semantic retrieval. 

Common technologies include: 

  • Pinecone 
  • Weaviate 
  • Milvus 
  • FAISS 

Responsibilities include: 

  • Index optimization 
  • Query latency tuning 
  • Scalable embedding storage 
  • Hybrid search architecture 

 

Multi-Cloud AI Infrastructure 

Design and maintain AI-ready infrastructure across: 

  • Amazon Web Services 
  • Microsoft Azure 
  • Google Cloud Platform 

Key responsibilities include: 

  • GPU infrastructure management 
  • Distributed training environments 
  • Hybrid cloud integrations with on-prem data centers 
  • Infrastructure scaling for AI workloads 

 

Data Platforms & Integration 

  • Support deployment and optimization of data lakes, data warehouses, and streaming platforms. 
  • Work with data engineering teams to ensure secure and scalable data infrastructure. 

 

Cloud Architecture & Infrastructure 

  • Design and implement scalable multi-cloud infrastructure across Azure, AWS, and Google Cloud. 
  • Build hybrid cloud architectures integrating on-premise environments with cloud platforms. 
  • Implement high availability, disaster recovery, and auto-scaling architectures for AI workloads. 

 

DevOps, Platform Engineering & Automation 

Build automated cloud infrastructure using modern DevOps practices. 

Tools may include: 

  • Terraform 
  • Docker 
  • Kubernetes 
  • GitHub Actions 

Responsibilities include: 

  • Infrastructure as Code (IaC) 
  • Automated deployments 
  • CI/CD pipelines for AI models and services 
  • Platform reliability and scalability 

 

AI Observability & Monitoring 

Implement observability frameworks to monitor AI systems in production. 

This includes: 

  • Model performance monitoring 
  • Prompt evaluation 
  • Hallucination detection 
  • Latency and throughput analysis 
  • Cost monitoring for LLM usage 

Tools may include: 

  • Arize AI 
  • WhyLabs 
  • Weights & Biases 

 

Security, Governance & Responsible AI 

Ensure AI systems follow strong governance and security practices. 

Responsibilities include: 

  • Data privacy and compliance 
  • Model governance frameworks 
  • Secure model deployment 
  • Monitoring model bias and drift 
  • AI risk management 

Support enterprise frameworks for Responsible AI and AI compliance. 

 

Data & Security 

  • Experience with data lake architectures, distributed storage, and ETL pipelines 
  • Knowledge of data security, encryption, IAM, and compliance frameworks 
  • Familiarity with AI governance and responsible AI practices 

 

 

Required Skills 

Cloud & Infrastructure 

  • Strong experience in Azure (must have), AWS or GCP 
  • Hybrid and multi-cloud architecture 
  • GPU infrastructure management 

DevOps & Automation 

  • Kubernetes 
  • Docker 
  • Terraform 
  • CI/CD pipelines 

AI / ML Platforms 

  • MLOps pipelines 
  • Model deployment 
  • Model monitoring 

AI Application Infrastructure 

  • Vector databases 
  • RAG pipelines 
  • LLM orchestration frameworks 

Programming 

Experience in one or more languages: 

  • Python 
  • Go 
  • Java 
  • TypeScript 

 

 

 

Preferred Qualifications 

  • Experience building AI copilots or autonomous agents 
  • Knowledge of GPU infrastructure and distributed model training 
  • Familiarity with AI evaluation frameworks, model monitoring, drift detection, and AI observability 
  • Experience building enterprise AI platforms 

 

Education & Experience 

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field 
  • 4–8+ years experience in cloud infrastructure, DevOps, or platform engineering 
  • Experience working in data-driven or AI-focused environments 

 

 

What Success Looks Like 

  • Reliable ML model deployment pipelines and infrastructure for LLMs and AI agents 
  • Scalable RAG knowledge platforms 
  • Efficient multi-cloud infrastructure management and fast deployment cycles for AI products 
  • Secure and scalable AI-ready cloud platforms 
  • Strong automation and governance across cloud and AI systems 


Hashone Career
Posted by Madhavan I
Bengaluru (Bangalore), Pune, Hyderabad, Chennai
5 - 7 yrs
₹12L - ₹20L / yr
Java
Google Cloud Platform (GCP)

Job Title

Java Developer - R11729

Job Description


Experience: 5 -7 Years

Job Type: Contract (6 Months - extendable)

Location: Bengaluru, Pune, Chennai, Hyderabad, Gurgaon


Experience developing microservices and cloud-native apps using Java/J2EE, REST APIs, Spring Core, Spring MVC Framework, Spring Boot Framework, JPA (Java Persistence API) (or any other ORM), Spring Security, and similar tech stacks (open source and proprietary)


Experience with unit testing using frameworks such as JUnit, Mockito, JBehave

Build and deploy services using Gradle, Maven, Jenkins etc. as part of CI/CD process

Experience working in Google Cloud Platform 

GCP knowledge is mandatory for offshore

GCP knowledge is preferred for onshore, but knowledge of at least one cloud is mandatory

Experience with any Relational Database (Oracle, PostgreSQL etc.)


Soft skills

Designing, developing, and implementing custom software and database application capabilities with limited oversight.

Excellent communication skills – design-related conversations, ability to build and nurture good relationships and foster an environment for collaboration.

Acting as a member of the team, supporting teammates, and collaborating with a "do what it takes" attitude to ensure project and team success


Responsibilities

Be part of a team of engineers developing elegant and highly performant code

Ensure quality practices – unit testing, code reviews / leading tests

Optimize application for non-functional requirements

Build and deploy components as part of CI/CD process

Will be responsible for end-to-end application delivery including coordination with required teams for production deployment.

Continuously monitor application health and KPIs (Key Performance Indicators), and support triage of production issues as needed.

Collaborate in troubleshooting complex data, features, service, platform issues and perform root cause analysis to proactively resolve product and operational issues.

Be an advocate of security best practices, champion and support the importance of security within engineering.


Skills

Java, Spring Boot, MySQL, Microservices, GCP

Redtring
Posted by Keshav Senthil
Hyderabad
3 - 6 yrs
₹15L - ₹20L / yr
Java
Kotlin
Amazon Web Services (AWS)
Redis
Apache Kafka

About Us:


We are hiring for a pre-seed funded startup called ZeroMoblt (https://zeromoblt.com/), a high-agency Hyderabad-based startup revolutionizing student transportation with lean, intelligent tech stacks.


Our mission: architect world-class systems from scratch—fast, scalable, and algorithmically sharp—using Kotlin, React, AWS (EC2, IoT, IAM), Google Maps, and multi-cloud setups. Stealth mode operations mean you're building 0→1 products with founders, not fixing tickets.


What You'll Do

  • Lead end-to-end ownership of complex systems: design, build, deploy, monitor, and iterate at scale.
  • Architect high-performance backends in Kotlin (or JVM langs) that handle real-time routing and IoT data.
  • Craft scalable React UIs that power ops dashboards and parent-facing apps.
  • Drive cloud decisions across AWS, Azure/GCP—optimising costs for our bootstrap runway.
  • Apply DSA/system design to solve hard problems like dynamic route optimization and predictive scaling.
  • Shape the engineering roadmap: propose, prioritise, and ship features with founders.
  • Mentor juniors while executing solo on high-impact bets—no layers, just results.


We're Looking For

  • 3-6 years of hands-on engineering where you've owned and shipped production systems (prove it with code/stories).
  • Elite CS fundamentals: advanced DSA, system design (distributed systems a must), design patterns.
  • Mastery of Kotlin/Java + modern React; real AWS experience (EC2, IAM, CLI—you know our stack).
  • Proven "leap-taker": startup grit, side projects, or open-source that screams hunger.
  • Figure-it-out velocity: you thrive in chaos, learn our domain overnight, and deliver 10x faster than peers.


This Role Is Not For You If…

  • You need structured roadmaps, PM hand-holding, or big-tech process.
  • Comfort > impact: stable salary over equity upside and chaos.
  • You've never worn all hats (dev, ops, product) in a resource-constrained environment.


Why Join Us

  • Massive ownership: lead tech for 10k+ students, direct founder access, shape ZeroMoblt's scale.
  • Flat, high-trust team: flexible Hyderabad/remote, no bureaucracy.
  • Hungry culture: we hire hustlers scaling from 700 to 10k students—your wins are visible daily.
  • Hungry to Leap? Apply now!
NeoGenCode Technologies Pvt Ltd
Mumbai
5 - 10 yrs
₹12L - ₹24L / yr
DevOps
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
Kubernetes

Job Title : Senior DevOps Engineer (Only Mumbai Candidates)

Experience : 5+ Years

Location : Mumbai (On-site)

Notice Period : Immediate to 15 Days

Interview Process : 1 Internal Round + 1 Client Round


Mandatory Skills :

Multi-Cloud (AWS/GCP/Azure – any two), Kubernetes, Terraform, Helm (writing Helm Charts), CI/CD (GitLab CI/Jenkins/GitHub Actions), GitOps (ArgoCD/FluxCD), Multi-tenant deployments, Stateful microservices on Kubernetes, Enterprise Linux.


Role Overview :

We are looking for a Senior DevOps Engineer to design, build, and manage scalable cloud infrastructure and DevOps pipelines for product-based platforms.

The ideal candidate should have strong experience with Kubernetes, Terraform, Helm Charts, CI/CD, and GitOps practices.


Key Responsibilities :

  • Design and manage scalable cloud infrastructure across AWS/GCP/Azure.
  • Deploy and manage microservices on Kubernetes clusters.
  • Build and maintain Infrastructure as Code using Terraform and Helm.
  • Implement CI/CD pipelines using GitLab CI, Jenkins, or GitHub Actions.
  • Implement GitOps workflows using ArgoCD or FluxCD.
  • Ensure secure, scalable, and reliable DevOps architecture.
  • Implement monitoring and logging using Prometheus, Grafana, or ELK.

Good to Have :

  • Packer, OpenShift/Rancher/K3s, On-prem deployments, PaaS experience, scripting (Bash/Python), Terraform modules.
Redtring
Posted by Keshav Senthil
Hyderabad
1 - 3 yrs
₹8L - ₹12L / yr
Kotlin
Java
Spring Boot
React.js
Amazon Web Services (AWS)

Software Engineer (Backend) – Kotlin & React

About Us

We are a high-agency startup building elegant technological solutions to real-world problems.

Our mission is to build world-class systems from scratch that are lean, fast, and intelligent. We are currently operating in stealth mode, developing deeply technical products involving Kotlin, React, Azure, AWS, GCP, Google Maps integrations, and algorithmically intensive backends.

We are building a team of builders — not ticket takers. If you want to design systems, make real decisions, and own your work end-to-end, this is the place for you.

Role Overview

As a Software Engineer, you will take full ownership of building and scaling critical product systems. You will work directly with the founding team to transform complex real-world problems into scalable technical solutions.

This role is ideal for engineers who enjoy thinking deeply about systems, writing clean code, and building products from 0 → 1.

Key Responsibilities

System Development & Architecture

  • Design, develop, and maintain scalable backend services, primarily using Kotlin or JVM-based languages (Java/Scala).
  • Architect systems that are robust, high-performance, and production-ready.
  • Apply strong data structures, algorithms, and system design principles to solve complex engineering challenges.

Full Stack Development

  • Build fast, maintainable front-end applications using React.
  • Ensure seamless integration between frontend systems and backend services.

Cloud Infrastructure

  • Design and manage cloud architecture using AWS, Azure, and/or Google Cloud Platform (GCP).
  • Implement scalable deployment pipelines, monitoring, and infrastructure optimization.

Product & Technical Collaboration

  • Work closely with founders and product stakeholders to translate business problems into technical solutions.
  • Contribute actively to product and engineering roadmap decisions.

Performance Optimization

  • Continuously improve system performance, scalability, and reliability.
  • Implement efficient algorithms and system optimizations to gain a technical advantage.

Engineering Excellence

  • Write clean, well-tested, and maintainable code.
  • Maintain strong engineering standards across the codebase.

Required Skills & Qualifications

We value capability and ownership over years of experience. Whether you have 10 years of experience or none, what matters is your ability to build and solve hard problems.

Core Requirements

  • Strong computer science fundamentals (Data Structures, Algorithms, System Design).
  • Experience with Kotlin or JVM languages such as Java or Scala.
  • Experience building modern React applications.
  • Hands-on experience with cloud platforms (AWS / Azure / GCP).
  • Experience designing and deploying scalable distributed systems.
  • Strong problem-solving and analytical thinking.

Preferred / Bonus Skills

  • Experience with Google Maps APIs or geospatial integrations.
  • Prior startup experience.
  • Contributions to open-source projects.
  • Personal side projects demonstrating strong engineering ability.

Ideal Candidate

You will thrive in this role if you:

  • Take ownership of problems, not just tasks.
  • Are comfortable working in high-ambiguity environments.
  • Have a builder mindset and enjoy creating systems from scratch.
  • Learn quickly and execute with speed and precision.

This Role May Not Be For You If

  • You prefer strict task assignments and detailed specifications before starting work.
  • You want to focus only on coding tickets without product involvement.
  • You prefer large teams with multiple layers of management.

Why Join Us

  • Build 0 → 1 products with massive ownership.
  • Work in a flat organization with no unnecessary hierarchy.
  • Collaborate directly with founders and core product builders.
  • Your contributions will have immediate and visible impact.
  • Flexible remote work environment.
  • Opportunity to shape the technology, culture, and future of the company.

If you are passionate about building powerful systems, solving complex problems, and owning your work, we would love to hear from you.

Neuvamacro Technology Pvt Ltd
Remote only
5 - 15 yrs
₹12L - ₹15L / yr
Tableau
Snowflake schema
SQL
ETL
Data modeling

Job Description:

Position Type: Full-Time Contract (with potential to convert to Permanent)

Location: Remote (Australian Time Zone)

Availability: Immediate Joiners Preferred

About the Role

We are seeking an experienced Tableau and Snowflake Specialist with 5+ years of hands‑on expertise to join our team as a full‑time contractor for the next few months. Based on performance and business requirements, this role has a strong potential to transition into a permanent position.

The ideal candidate is highly proficient in designing scalable dashboards, managing Snowflake data warehousing environments, and collaborating with cross-functional teams to drive data‑driven insights.

Key Responsibilities

  • Develop, design, and optimize advanced Tableau dashboards, reports, and visual analytics.
  • Build, maintain, and optimize datasets and data models in Snowflake Cloud Data Warehouse.
  • Collaborate with business stakeholders to gather requirements and translate them into analytics solutions.
  • Write efficient SQL queries, stored procedures, and data pipelines to support reporting needs.
  • Perform data profiling, data validation, and ensure data quality across systems.
  • Work closely with data engineering teams to improve data structures for better reporting efficiency.
  • Troubleshoot performance issues and implement best practices for both Snowflake and Tableau.
  • Support deployment, version control, and documentation of BI solutions.
  • Ensure availability of dashboards during Australian business hours.
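The data profiling and validation responsibilities above can be sketched in a few lines of Python. This is a hedged illustration only: sqlite3 stands in for a Snowflake connection (the real warehouse would use its own connector), and the `sales` table and checks are hypothetical.

```python
import sqlite3

# sqlite3 stands in for a warehouse connection here; against Snowflake
# the same queries would run through its Python connector instead.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (order_id TEXT, amount REAL, region TEXT);
    INSERT INTO sales VALUES
        ('o1', 120.0, 'APAC'),
        ('o2', NULL,  'APAC'),   -- missing amount: a data-quality defect
        ('o1', 120.0, 'EMEA'),   -- duplicate order_id
        ('o3', 80.0,  NULL);     -- missing region
""")

def profile(conn):
    """Run simple data-quality checks a BI developer might script."""
    row_count = conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0]
    null_amounts = conn.execute(
        "SELECT COUNT(*) FROM sales WHERE amount IS NULL").fetchone()[0]
    dup_orders = conn.execute("""
        SELECT COUNT(*) FROM (
            SELECT order_id FROM sales GROUP BY order_id HAVING COUNT(*) > 1
        )""").fetchone()[0]
    return {"rows": row_count, "null_amounts": null_amounts,
            "duplicate_order_ids": dup_orders}

report = profile(conn)
```

Each check is one SQL aggregate, so the same pattern scales to more tables; the report dict can then feed an alerting or documentation step.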

Required Skills & Experience

  • 5+ years of strong hands-on experience with Tableau development (Dashboards, Storyboards, Calculated Fields, LOD Expressions).
  • 5+ years of experience working with Snowflake including schema design, warehouse configuration, and query optimization.
  • Advanced knowledge of SQL and performance tuning.
  • Strong understanding of data modeling, ETL processes, and cloud data platforms.
  • Experience working in fast-paced environments with tight delivery timelines.
  • Excellent communication and stakeholder management skills.
  • Ability to work independently and deliver high‑quality outputs aligned with business objectives.

Nice-to-Have Skills

  • Knowledge of Python or any ETL tool.
  • Experience with Snowflake integrations (Fivetran, DBT, Azure/AWS/GCP).
  • Tableau Server/Prep experience.

Contract Details

  • Full-Time Contract for several months.
  • High possibility of conversion to permanent, based on performance.
  • Must be available to work on the Australian Time Zone.
  • Immediate joiners are highly encouraged.


Techjays
Agency job
via techjays by Samuel Santhosh P
Remote, Coimbatore
5 - 6.5 yrs
₹30L - ₹45L / yr
Python
Django
Flask
RESTful APIs
WebSocket

We are seeking an experienced Python Lead to design, develop, and scale high-performance backend systems. The ideal candidate will have strong expertise in Python-based backend development, system design, and cloud-native architectures. You will lead the development of scalable APIs, work with modern cloud platforms, and collaborate with cross-functional teams to deliver reliable and efficient applications.

Key Responsibilities

  • Design and develop scalable backend services using Python (Django/Flask).
  • Build and maintain RESTful APIs and WebSocket-based applications.
  • Implement efficient algorithms, data structures, and design patterns for high-performance systems.
  • Develop and optimize database schemas and queries using PostgreSQL, MySQL, or MongoDB.
  • Integrate caching and queuing systems to improve system performance and reliability.
  • Deploy and manage applications on AWS or GCP cloud environments.
  • Implement and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI, or GitHub Actions.
  • Work with Docker containers and Linux-based environments for development and deployment.
  • Collaborate with engineering teams to design scalable system architectures.
  • Explore and integrate AI-driven capabilities such as RAG, LLMs, and vector databases where applicable.
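As a sketch of the caching responsibility above, here is a minimal TTL-based memoization helper in plain Python. It is an assumption-laden stand-in for an external cache such as Redis: the decorator, function names, and 30-second TTL are illustrative, but the key-lookup / recompute-on-miss pattern is the same one an external cache would follow.

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds):
    """Memoize a function's results, expiring each entry after `ttl_seconds`."""
    def decorator(fn):
        store = {}  # args -> (expiry_timestamp, value)

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]          # fresh cache hit
            value = fn(*args)          # miss or expired: recompute
            store[args] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator

calls = 0

@ttl_cache(ttl_seconds=30)
def fetch_item(item_id):
    """Simulate an expensive lookup, e.g. a slow database query."""
    global calls
    calls += 1
    return {"id": item_id, "name": f"item-{item_id}"}

first = fetch_item(7)
second = fetch_item(7)   # served from cache; the "query" runs only once
```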

Required Skills

  • Strong expertise in Python backend development using Django or Flask
  • Experience with REST APIs, WebSockets, and microservices architecture
  • Solid knowledge of design patterns, algorithms, and data structures
  • Experience with relational and NoSQL databases (PostgreSQL, MySQL, MongoDB)
  • Hands-on experience with AWS or GCP cloud services
  • Experience with CI/CD pipelines and containerization (Docker)
  • Proficiency in Git and Linux environments

Preferred Skills

  • Familiarity with AI/ML concepts
  • Experience with RAG architectures and LLM integrations
  • Knowledge of vector databases such as Pinecone or ChromaDB

What We’re Looking For

  • Strong problem-solving and system design skills
  • Ability to lead backend development initiatives
  • Experience building scalable and production-grade systems
  • Excellent collaboration and communication skills


Wohlig Transformations Pvt Ltd
Posted by Apoorva Lakshkar
Mumbai
7 - 10 yrs
₹15L - ₹23L / yr
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
DevOps
Kubernetes

Job Overview 


We are seeking an experienced Senior Solution Architect to join our dynamic DevOps organization. The ideal candidate will have a strong background in cloud technologies, with expertise in migration projects across platforms such as GCP, AWS, and Azure. The candidate should possess a deep understanding of DevOps principles, Kubernetes orchestration, data migration and management, and automation tools such as CI/CD pipelines and Terraform. The individual should be highly skilled in designing scalable application architectures capable of handling substantial workloads while ensuring the highest standards of quality.


Key Responsibilities 


  • Lead and drive cloud migration projects from on-premises data centers or other cloud platforms to GCP, AWS, or Azure.
  • Design and implement migration strategies that ensure minimal downtime and maximum efficiency.
  • Demonstrate proficiency in GCP, AWS, and Azure, with the ability to choose and optimize solutions based on specific business requirements.
  • Provide guidance on selecting the appropriate cloud services for various workloads.
  • Design, implement, and optimize CI/CD pipelines to streamline software delivery.
  • Utilize Terraform for infrastructure as code (IaC) to automate deployment processes.
  • Collaborate with development and operations teams to enhance the overall DevOps culture.
  • Possess in-depth knowledge and practical experience with Kubernetes orchestration for containerized applications.
  • Architect and optimize Kubernetes clusters for high availability and scalability.
  • Engage in research and development activities to stay abreast of industry trends and emerging technologies.
  • Evaluate and introduce new tools and methodologies to enhance the efficiency and effectiveness of cloud solutions.
  • Architect solutions that can handle large-scale workloads and provide guidance on scaling strategies.
  • Ensure high-performance levels and reliability in production environments.
  • Design scalable and high-performance database architectures tailored to meet business needs.
  • Execute database migrations with a keen focus on data consistency, integrity, and performance.
  • Develop and implement database pipelines to automate processes such as data migrations, schema changes, and backups.
  • Optimize database workflows to enhance efficiency and reliability.
  • Work closely with clients to assess and enhance the quality of existing architectures.
  • Implement best practices to ensure robust, secure, and well-architected solutions.
  • Drive migration projects, collaborating with cross-functional teams to ensure successful execution.
  • Provide technical leadership and mentorship to junior team members.


Required Skills and Qualifications: 


  • Bachelor's degree in Computer Science, Information Technology, or related field.
  • Relevant industry experience in a Solution Architect role.
  • Proven experience in leading cloud migration projects across GCP, AWS, and Azure.
  • Expertise in DevOps practices, CI/CD pipelines, and infrastructure automation.
  • In-depth knowledge of Kubernetes and container orchestration.
  • Strong background in scaling architectures to handle significant workloads.
  • Sound knowledge in database migrations
  • Excellent communication skills and the ability to articulate complex technical concepts to both technical and non-technical stakeholders.


Remote only
0 - 8 yrs
₹3L - ₹12L / yr
Linux/Unix
Google Cloud Platform (GCP)
DevOps
CI/CD
Docker

We are looking for a passionate and detail-oriented Site Reliability Engineer (SRE) to ensure the reliability, scalability, and performance of our production systems. This role is open to freshers as well as experienced professionals who are eager to work on cloud infrastructure, automation, and system monitoring.


Key Responsibilities:

1. Monitor system performance, availability, and reliability.

2. Automate deployment, scaling, and infrastructure management processes.

3. Troubleshoot production issues and perform root cause analysis.

4. Improve system reliability through automation and performance tuning.

5. Implement CI/CD pipelines and DevOps best practices.

6. Maintain documentation for infrastructure and processes.

7. Collaborate with development and operations teams.

8. Ensure security, backup, and disaster recovery strategies are in place.
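Several of these responsibilities (availability monitoring, Python scripting, troubleshooting) can be illustrated with a minimal health-check probe. This is a hypothetical sketch: the `/healthz` URL, the latency budget, and the state names are assumptions, not part of the posting.

```python
import urllib.request
import urllib.error
import time

# Hypothetical threshold; real SLOs would come from the service owners.
LATENCY_BUDGET_SECONDS = 0.5

def classify(status_code, latency_seconds):
    """Map a probe result onto a coarse health state."""
    if status_code is None or status_code >= 500:
        return "down"
    if status_code >= 400:
        return "degraded"
    if latency_seconds > LATENCY_BUDGET_SECONDS:
        return "slow"
    return "healthy"

def probe(url, timeout=2.0):
    """Fetch `url` once and return (status_code, latency_seconds).

    status_code is None if the request failed outright.
    """
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status, time.monotonic() - start
    except urllib.error.HTTPError as err:
        return err.code, time.monotonic() - start
    except (urllib.error.URLError, TimeoutError):
        return None, time.monotonic() - start

if __name__ == "__main__":
    # Example usage against a hypothetical internal endpoint:
    status, latency = probe("http://localhost:8080/healthz")
    print(classify(status, latency))
```

In practice the same classification would be wired into a scheduler and an alerting tool (e.g. Prometheus Alertmanager) rather than a one-shot script.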


Required Skills:

1. Basic understanding of Linux/Unix systems.

2. Knowledge of cloud platforms (AWS / Azure / GCP).

3. Understanding of DevOps concepts and CI/CD pipelines.

4. Familiarity with Docker and Kubernetes (basic knowledge for freshers).

5. Scripting knowledge (Python / Bash / Shell).

6. Basic networking knowledge (DNS, HTTP, Load Balancing).

7. Knowledge of monitoring tools (Prometheus, Grafana, etc.).

8. Strong analytical and problem-solving skills.


Preferred Skills (Good to Have):

1. Experience with Infrastructure as Code (Terraform / Ansible).

2. Understanding of microservices architecture.

3. Experience with version control tools (Git).


Eligibility:

1. B.E / B.Tech / B.Sc / M.Tech / MCA or related field.

2. Freshers with strong DevOps interest are welcome.

3. 0–8 years of relevant experience.


Location: Remote / Chennai

Employment Type: Full-Time 


Apply here: https://connectsblue.com/jobs/741/site-reliability-engineer-sre-at-bluepms-software-solutions-pvt-ltd

Recruiting Bond


Pavan Kumar
Posted by Pavan Kumar
Mumbai, Navi Mumbai
10 - 15 yrs
₹55L - ₹80L / yr
Distributed Systems
Systems design
Systems architecture
High-level design
LLD

Location: Mumbai, Maharashtra, India

Sector: Technology, Information & Media

Company Size: 500 - 1,000 Employees

Employment: Full-Time, Permanent

Experience: 10 - 14 Years (Engineering Leadership)

Level: Engineering Manager / Group EM


ABOUT THIS MANDATE :


Recruiting Bond has been exclusively retained by one of India's most prominent and well-established digital platform organisations operating at the intersection of Technology, Information, and Media to identify and place an exceptional Engineering Manager who can lead engineering teams through an enterprise-wide AI adoption and digital transformation agenda.


This is a high-impact, hands-on leadership role at the nexus of people, product, and technology. The organisation is executing one of the most ambitious AI transformation programmes in its sector and this Engineering Manager will be a core driver of that change. You will lead multiple squads, own engineering delivery end-to-end, embed AI tooling and practices into the team's DNA, and shape the engineering culture of tomorrow.


We are seeking leaders who code when it matters, who build systems and teams with equal conviction, and who view AI not as a trend but as a fundamental shift in how great software is built.


THE OPPORTUNITY AT A GLANCE :


AI-First Engineering Culture :

  • Own AI adoption across your squads - from LLM tooling integration to automation-first delivery workflows. Make AI a default, not an afterthought.


Hands-On Engineering Leadership :

  • Stay close to the code. Lead architecture reviews, unblock engineers, and set the technical bar - not just the management agenda.


People & Org Builder :

  • Grow engineers into leaders. Build squads of 6-15 across functions. Drive hiring, career frameworks, and a culture of psychological safety.


KEY RESPONSIBILITIES :


1. Hands-On Technical Engagement :

  • Remain deeply embedded in the technical work: participate in design reviews, architecture decisions, and critical code reviews
  • Set and uphold the engineering quality bar : performance benchmarks, security standards, test coverage, and release quality
  • Provide technical direction on backend platform strategy, API design, service decomposition, and data architecture
  • Identify and resolve systemic technical debt and architectural risks across team-owned services
  • Unblock engineers by diving into complex problems: debugging, pair programming, and system analysis when it matters
  • Own key technical decisions in collaboration with Tech Leads and Principal Engineers; balance pragmatism with long-term sustainability


2. AI Adoption, Integration & Transformation (2026 Mandate) :

  • Define and execute the team's AI adoption roadmap - from developer tooling to product-facing AI features
  • Champion the integration of GenAI tools (GitHub Copilot, Cursor, Claude, ChatGPT) across the full engineering workflow: coding, testing, documentation, and incident response
  • Embed LLM-powered capabilities into the product : recommendation engines, intelligent search, conversational interfaces, content generation, and predictive systems
  • Lead evaluation and adoption of AI-assisted SDLC practices : automated code review, AI-generated test suites, intelligent observability, and anomaly detection
  • Partner with Data Science and ML Platform teams to productionise ML models with robust MLOps pipelines
  • Build team literacy in prompt engineering, RAG (Retrieval-Augmented Generation), and AI agent frameworks
  • Create an experimentation culture : run structured AI pilots, measure productivity impact, and scale what works
  • Stay ahead of the AI tooling landscape and advise senior leadership on strategic AI investments and engineering implications


3. People Leadership & Team Development :

  • Lead, manage, and grow squads of 6 - 15 engineers across seniority levels (L2 through L6 / Junior through Staff)
  • Conduct structured 1:1s, career growth conversations, and development planning with every direct report
  • Design and execute personalised AI upskilling programmes; ensure every engineer develops practical AI fluency by the end of 2026
  • Build and maintain a high-performance team culture : clarity of ownership, accountability, fast feedback loops, and psychological safety
  • Drive performance management fairly and rigorously: recognise top performers, manage underperformance constructively
  • Lead technical hiring end-to-end : define job requirements, conduct bar-raising interviews, and make data-driven hire decisions
  • Contribute to engineering career frameworks and level definitions in partnership with the VP / Director of Engineering


4. Engineering Delivery & Execution Excellence :

  • Own end-to-end delivery for multiple product squads, from planning and scoping through production release and post-launch stability
  • Implement and refine agile delivery frameworks (Scrum, Kanban, Shape Up) calibrated to squad needs and product cadence
  • Drive predictable delivery : maintain healthy sprint velocity, manage WIP limits, and ensure dependency resolution across teams.
  • Establish and own engineering KPIs : DORA metrics (deployment frequency, lead time, MTTR, change failure rate), uptime SLOs, and velocity trends
  • Lead incident management : build blameless post-mortem culture, own RCA processes, and drive systemic reliability improvements
  • Balance technical debt repayment with feature velocity; negotiate prioritisation transparently with Product leadership
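As a concrete illustration of the DORA metrics named above, here is a minimal sketch that computes them from a hypothetical deployment log. The record fields and sample values are illustrative assumptions, not data from the posting.

```python
from datetime import datetime

# Hypothetical deployment log: commit time, deploy time, whether the
# deploy caused a failure, and recovery time if it did.
deploys = [
    {"committed": datetime(2026, 1, 5, 9), "deployed": datetime(2026, 1, 5, 15),
     "failed": False, "recovery_minutes": 0},
    {"committed": datetime(2026, 1, 6, 10), "deployed": datetime(2026, 1, 7, 10),
     "failed": True, "recovery_minutes": 45},
    {"committed": datetime(2026, 1, 8, 8), "deployed": datetime(2026, 1, 8, 20),
     "failed": False, "recovery_minutes": 0},
    {"committed": datetime(2026, 1, 9, 9), "deployed": datetime(2026, 1, 9, 11),
     "failed": True, "recovery_minutes": 15},
]

def dora_metrics(deploys, window_days):
    failures = [d for d in deploys if d["failed"]]
    return {
        # Deployment frequency: deploys per day over the window.
        "deploy_frequency_per_day": len(deploys) / window_days,
        # Lead time for changes: mean commit-to-deploy time, in hours.
        "lead_time_hours": sum(
            (d["deployed"] - d["committed"]).total_seconds() / 3600 for d in deploys
        ) / len(deploys),
        # Change failure rate: share of deploys that caused a failure.
        "change_failure_rate": len(failures) / len(deploys),
        # MTTR: mean time to restore, in minutes, over failed deploys.
        "mttr_minutes": (
            sum(d["recovery_minutes"] for d in failures) / len(failures)
            if failures else 0.0
        ),
    }

metrics = dora_metrics(deploys, window_days=7)
```

In a real pipeline these records would come from the CI/CD system and incident tracker rather than a hand-written list, and the metrics would trend on a dashboard.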


5. Strategic Leadership & Cross-Functional Influence :

  • Serve as the primary engineering partner for Product, Design, Data, and Business stakeholders; translate ambiguity into executable engineering plans
  • Participate in quarterly roadmap planning, capacity forecasting, and OKR definition for engineering teams
  • Represent engineering in leadership forums; articulate technical constraints, risks, and opportunities in business terms
  • Contribute to org-wide engineering strategy : platform investments, build-vs-buy decisions, and shared infrastructure priorities
  • Build relationships across geographies (Mumbai HQ + distributed teams) to maintain alignment and delivery cohesion
  • Act as a culture carrier and ambassador for engineering excellence, innovation, and responsible AI use


AI TRANSFORMATION LEADERSHIP 2026 EXPECTATIONS :


In 2026, Engineering Managers at this organisation are expected to be active architects of AI transformation, not passive observers. The following outlines the specific AI leadership expectations for this role :


AI Developer Productivity

  • Drive measurable uplift in developer velocity through AI tooling adoption. Target : 30%+ reduction in code review cycle time and 40%+ increase in test coverage automation by Q3 2026.


LLM & GenAI Product Features

  • Own delivery of GenAI-powered product capabilities : intelligent content, semantic search, personalisation, and conversational UX in production, at scale.


AI-Augmented Observability

  • Implement AI-driven monitoring and anomaly detection pipelines. Reduce MTTR by leveraging predictive alerting, intelligent runbooks, and auto-remediation scripts.


Team AI Fluency :

  • Build mandatory AI literacy across all engineering levels.
  • Every engineer understands prompt engineering basics, AI ethics guardrails, and responsible AI deployment practices.


Responsible AI Governance :

  • Partner with Security, Legal, and Data Privacy to ensure all AI deployments meet compliance standards, bias mitigation requirements, and explainability benchmarks.


TECHNOLOGY STACK & DOMAIN FAMILIARITY REQUIRED :


  • Languages: Java / Go / Python / Node.js / PHP / Rust (must be hands-on in at least 2)
  • Cloud: AWS / GCP / Azure (multi-cloud exposure strongly preferred)
  • AI & GenAI: OpenAI / Anthropic / Gemini APIs / LangChain / LlamaIndex / RAG / Vector DBs / GitHub Copilot / Cursor / Hugging Face
  • Containers: Docker / Kubernetes / Helm / Service Mesh (Istio / Linkerd)
  • Databases: PostgreSQL / MongoDB / Redis / Cassandra / Elasticsearch / Pinecone (Vector DB)
  • Messaging: Apache Kafka / RabbitMQ / AWS SQS/SNS / Google Pub/Sub
  • MLOps & DataOps: MLflow / Kubeflow / SageMaker / Vertex AI / Airflow / dbt
  • Observability: Datadog / Prometheus / Grafana / OpenTelemetry / Jaeger / ELK Stack
  • CI/CD & IaC: GitHub Actions / ArgoCD / Jenkins / Terraform / Ansible / Backstage (IDP)


QUALIFICATIONS & CANDIDATE PROFILE :

Education :

  • B.E. / B.Tech or M.E. / M.Tech from a Tier-I or Tier-II Institution - CS, IS, ECE, AI/ML streams strongly preferred
  • Demonstrated engineering depth and leadership impact may complement institution pedigree


Experience :

  • 10 to 14 years of progressive engineering experience, with at least 3 years in a formal Engineering Manager or equivalent people-leadership role
  • Proven track record of managing and scaling engineering teams (6-15+ engineers) in a fast-growing SaaS or digital product environment
  • Hands-on backend engineering background must be able to read, write, and critique production code
  • Direct experience driving AI/ML feature delivery or AI tooling adoption within engineering organisations
  • Exposure across start-up, mid-size, and large-scale product organisations preferred; adaptability is a core requirement
  • Strong CS fundamentals: distributed systems, algorithms, system design, and software architecture
  • Demonstrated career stability: a minimum of 2 years of average tenure per organisation.


The Ideal Engineering Manager in 2026 :

  • Leads with context, not control; empowers engineers while maintaining accountability and quality
  • Is fluent in both people language and technical language; switches registers naturally with engineers and executives alike
  • Sees AI as a force multiplier for the team, not a threat; actively experiments with and advocates for AI tooling
  • Measures success by team outcomes, not personal output; takes pride in what the team ships, not what they build alone
  • Creates feedback loops obsessively: between product and engineering, between seniors and juniors, between metrics and decisions
  • Has strong opinions, loosely held; brings conviction to discussions but updates on evidence
  • Invests in engineering excellence as seriously as delivery velocity; knows that quality and speed are not opposites


WHY THIS ROLE STANDS APART :


AI Transformation at Scale :

  • Lead one of the most significant AI adoption programmes in India's digital media sector.
  • Your decisions will shape how hundreds of engineers work in 2026 and beyond.


Hands-On & Strategic Balance :

  • A rare EM role that actively encourages technical depth.
  • Stay close to the code while owning the people agenda - the best of both worlds.


Established Platform, Real Scale :

  • 500-1,000 engineers, proven product-market fit, and the org maturity to execute.
  • This is not a greenfield startup gamble; it is a serious company with serious ambition.


Clear Leadership Growth Path :

  • A visible, direct path toward Director / VP of Engineering.
  • Senior leadership is invested in growing its next generation of technology executives.


House Of Shipping
Posted by Sanikha M
Chennai
3 - 8 yrs
₹8L - ₹15L / yr
Google Cloud Platform (GCP)
Node.js
Python
Java
API

Key Responsibilities

  • Design, develop, and maintain microservices and APIs running on GKE, Cloud Run, App Engine, and Cloud Functions.
  • Build secure, scalable REST and GraphQL APIs to support our client's front-end applications and integrations.
  • Work with the GCP Architect to ensure back-end design aligns with enterprise architecture and security best practices.
  • Implement integration layers between GCP-hosted services, AlloyDB, Cloud Spanner, Cloud SQL, and third-party APIs.
  • Deploy services using Gemini Code Assist, CLI tools, and Git-based CI/CD pipelines.
  • Optimize service performance, scalability, and cost efficiency.
  • Implement authentication, authorization, and role-based access control using GCP Identity Platform / IAM.
  • Work with AI/ML services (e.g., Vertex AI, Document AI, NLP APIs) to enable intelligent back-end capabilities.
  • Collaborate with front-end developers to design efficient data contracts and API payloads.
  • Participate in code reviews and enforce clean, maintainable coding standards.
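The "efficient data contracts" point above can be sketched as a small payload validator. The `ORDER_CONTRACT` shape and field names below are hypothetical, purely to illustrate keeping front-end and back-end payloads in sync; a production service would more likely use a schema library such as Pydantic or JSON Schema.

```python
# Hypothetical contract for an "order" payload exchanged with the front end.
ORDER_CONTRACT = {
    "order_id": str,
    "customer_id": str,
    "amount_cents": int,
    "currency": str,
}

def validate_payload(payload, contract):
    """Return a list of contract violations (empty list means it conforms).

    Checks required fields and their types, and flags unexpected fields,
    so that front end and back end stay aligned on the agreed contract.
    """
    errors = []
    for field, expected_type in contract.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(
                f"wrong type for {field}: expected {expected_type.__name__}, "
                f"got {type(payload[field]).__name__}"
            )
    for field in payload:
        if field not in contract:
            errors.append(f"unexpected field: {field}")
    return errors

good = {"order_id": "o-1", "customer_id": "c-9", "amount_cents": 1250, "currency": "INR"}
bad = {"order_id": "o-1", "amount_cents": "1250", "extra": True}
```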

Experience & Qualifications

  • 6–8 years of back-end development experience, with at least 3+ years in senior/lead analyst roles.
  • Proficiency in one or more back-end programming languages: Node.js, Python, or Java.
  • Strong experience with GCP microservices deployments on GKE, App Engine, Cloud Run, and Cloud Functions.
  • Deep knowledge of AlloyDB, Cloud Spanner, and Cloud SQL for schema design and query optimization.
  • Experience in API development (REST/GraphQL) and integration best practices.
  • Familiarity with Gemini Code Assist for code generation and CLI-based deployments.
  • Understanding of Git-based CI/CD workflows and DevOps practices.
  • Experience integrating AI tools into back-end workflows.
  • Strong understanding of cloud security and compliance requirements.
  • Excellent communication skills for working in a distributed/global team environment.


Read more
AI GTM Platform for Faster B2B Pipeline Growth


Agency job
via Peak Hire Solutions by Dhara Thakkar
Remote only
4 - 10 yrs
₹74L - ₹130L / yr
Artificial Intelligence (AI)
Scala
Python
AI Agents
API
+9 more

Senior BackEnd Engineer


The ideal candidate will have a strong background in building scalable applications, a deep understanding of back-end technologies, and experience with cloud infrastructure. As a Back End Engineer, you will be responsible for designing, developing, and maintaining a scalable workflow management system. You will work closely with cross-functional teams to build robust and efficient applications that meet the needs of our users. Your expertise in Scala, Python, AI Agents/APIs, and GCP will be crucial in ensuring our system is reliable, performant, and scalable.


Key Responsibilities:

Back-End Development:

  • Build and maintain back-end services and APIs using Scala.
  • Implement and optimize an orchestration workflow system involving database queries and operations.
  • Build API integrations with Third Party APIs and services.
  • Ensure robust and scalable server-side logic.
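The orchestration-workflow bullet above can be sketched as a dependency-ordered step runner. This is a toy illustration in Python (the posting's stack is Scala); the step names are hypothetical, and cycle detection is omitted for brevity:

```python
def run_workflow(steps, deps):
    """Run steps after their dependencies complete.

    steps: name -> zero-argument callable
    deps:  name -> list of prerequisite step names
    Returns the execution order. Assumes the dependency graph is acyclic.
    """
    done, order = set(), []

    def run(name):
        if name in done:
            return
        for prerequisite in deps.get(name, []):
            run(prerequisite)  # recurse: prerequisites run first
        steps[name]()
        done.add(name)
        order.append(name)

    for name in steps:
        run(name)
    return order
```

A real orchestration system would add retries, persistence, and concurrency on top of this ordering core.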


Cloud Integration:

  • Deploy, manage, and monitor applications on Google Cloud Platform (GCP).
  • Utilize GCP services to enhance application performance and scalability.
  • Implement cloud-based solutions for data storage, processing, and analytics.


Collaboration And Communication:

  • Work closely with cross-functional teams to define, design, and ship new features.
  • Participate in code reviews and contribute to sharing team knowledge.
  • Document development processes, coding standards, and project requirements.


Qualifications:

  • Educational Background:
  • Completed a master's/bachelor's degree in Computer Science, Engineering, or a related field.
  • Technical Skills:
  • Proficiency in Scala programming language.
  • Strong experience with React (ReactJS).
  • Familiarity with Google Cloud Platform (GCP) and its services.
  • Knowledge of front-end development tools and best practices.
  • Understanding of RESTful API design and implementation.
  • Soft Skills:
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration abilities.
  • Eagerness to learn and adapt to new technologies and challenges.


Preferred Qualifications:

  • Experience with version control systems such as Git.
  • Familiarity with CI/CD pipelines and DevOps practices.
  • Understanding of workflow management systems and their requirements.
  • Experience with containerization technologies like Docker.

 

Must have Skills

  • Scala - 4 Years
  • React.js - 1 Year
  • RESTful API - 4 Years
  • Docker - 2 Years
  • Python - 3 Years
  • Artificial Intelligence - 2 Years

 

Read more
AI GTM Platform for Faster B2B Pipeline Growth


Agency job
via Peak Hire Solutions by Dhara Thakkar
Remote only
4 - 10 yrs
₹75L - ₹120L / yr
React.js
JavaScript
RESTful APIs
API
ReAct (Reason + Act)
+7 more

Senior FrontEnd Software Engineer

The ideal candidate will have a strong background in building scalable web applications, a deep understanding of front-end technologies, and experience with cloud infrastructure. As a Front-end Engineer, you will be responsible for designing, developing, and maintaining a workflow management system. You will work closely with cross-functional teams to build robust and efficient applications that meet the needs of our users. Your expertise in ReactJS, MUI, and API integrations with the backend will be crucial in ensuring our system is intuitive, user-friendly, reliable, and performant.

Key Responsibilities:

Develop and Maintain Front-End Components:

  • Design, develop, and optimize user interfaces using React (ReactJS).
  • Ensure a seamless and responsive user experience.
  • Collaborate with the design team to implement best practices in UI/UX design.

Cloud Integration:

  • Deploy, manage, and monitor applications on Google Cloud Platform (GCP).
  • Utilize GCP services to enhance application performance and scalability.
  • Implement cloud-based solutions for data storage, processing, and analytics.

Collaboration and Communication:

  • Work closely with cross-functional teams to define, design, and ship new features.
  • Participate in code reviews and contribute to sharing team knowledge.
  • Document development processes, coding standards, and project requirements.


Qualifications:

  • Educational Background:
  • Completed a master's/bachelor's degree in Computer Science, Engineering, or a related field.
  • Technical Skills:
  • Proficiency in JavaScript.
  • Strong experience with React (ReactJS) and MUI.
  • Familiarity with Google Cloud Platform (GCP) and its services.
  • Knowledge of front-end development tools and best practices.
  • Understanding of RESTful API design and implementation.
  • Soft Skills:
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration abilities.
  • Eagerness to learn and adapt to new technologies and challenges.


Preferred Qualifications:

  • Experience with version control systems such as Git.
  • Familiarity with CI/CD pipelines and DevOps practices.
  • Understanding of workflow management systems and their requirements.
  • Experience with containerization technologies like Docker.

 

Must have Skills

  • React.js - 4 Years
  • JavaScript - 4 Years
  • RESTful API - 1 Year
  • Material UI - 3 Years

 

Read more
Techjays
SREEHARIVASU S
Posted by SREEHARIVASU S
Remote only
5 - 10 yrs
₹30L - ₹50L / yr
Design patterns
Data Structures
Relational Database (RDBMS)
Git
Linux/Unix
+3 more

What makes Techjays an inspiring place to work

At Techjays, we are helping companies reimagine how they build, operate, and scale with AI at the core.

We operate as part of the 1% of companies globally that can truly leverage AI the right way: not just as experimentation, but as secure, scalable, production-grade systems that drive measurable business outcomes.

Our strength lies in combining deep backend engineering with AI system design, building AI-native platforms, intelligent workflows, and cloud architectures that are reliable, observable, and enterprise-ready.

Our team includes engineers and leaders who have built and scaled products at global technology organizations such as Google, Akamai, NetApp, ADP, Cognizant Consulting, and Capgemini. Today, we function as a high-agency, execution-focused team building advanced AI systems for global clients.

We are looking for a strong backend engineer who can design and build secure, scalable Python systems that power AI-native applications.

You will work on AI-enabled platforms, production systems, and scalable backend services that support LLM integrations, RAG pipelines, and intelligent workflows.


Years of Experience: 5 - 8 years


Location: Remote/ Coimbatore


Key Skills:

  • Backend Development (Expert): Python, Django/Flask, RESTful APIs, Websockets
  • Cloud Technologies (Proficient): AWS (EC2, S3, Lambda), GCP (Compute Engine, Cloud Storage, Cloud Functions), CI/CD pipelines with Jenkins, GitLab CI, or GitHub Actions
  • Databases (Advanced): PostgreSQL, MySQL, MongoDB
  • AI/ML (Familiar): Basic understanding of Machine Learning concepts, experience with RAG, Vector Databases (Pinecone or ChromaDB or others)
  • Tools (Expert): Git, Docker, Linux
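As a rough illustration of the RAG retrieval step listed above: a hashed bag-of-words vector stands in for a real embedding model, and a linear scan stands in for a vector database such as Pinecone or ChromaDB. All names and documents here are illustrative:

```python
import math

def embed(text, dim=64):
    """Toy embedding: hashed bag-of-words (a stand-in for a real model)."""
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    return vec

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Rank documents by cosine similarity to the query embedding."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]
```

In production, `embed` would call an embedding model and `retrieve` would query an approximate-nearest-neighbour index; the retrieved passages then feed an LLM prompt.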

Roles and Responsibilities:

  • Design, development, and implementation of highly scalable and secure backend services using Python and Django.
  • Architect and develop complex features for our AI-powered platforms
  • Write clean, maintainable, and well-tested code, adhering to best practices and coding standards.
  • Collaborate with cross-functional teams, including front-end developers, data scientists, and product managers, to deliver high-quality software.
  • Mentor junior developers and provide technical guidance.

What We’re Looking For Beyond Skills

  • Builder mindset — you think in systems, not just tickets
  • Ownership — you take features from idea to production
  • Structured thinking in ambiguous environments
  • Clear communication and collaborative approach
  • Ability to work in a fast-paced, evolving startup environment


What We Offer

  • Competitive compensation
  • Flexible work environment (Remote / Coimbatore office)
  • Paid holidays & flexible time off
  • Medical insurance (Self & Family up to ₹4 Lakhs per person)
  • Opportunity to work on production-grade AI systems
  • Exposure to global clients and high-impact projects
  • A culture that values clarity, integrity, and continuous growth

If you want to build AI-native systems that are used in the real world, not just prototypes, Techjays is the place to do it.



Read more
WITS Innovation Lab
Prabhnoor Kaur
Posted by Prabhnoor Kaur
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 5 yrs
₹3L - ₹7L / yr
Terraform
Kubernetes
Jenkins
Ansible
Amazon Web Services (AWS)
+8 more

We are looking for a skilled DevOps Engineer with hands-on experience in cloud platforms, CI/CD pipelines, container orchestration, and infrastructure automation. The ideal candidate is someone who loves solving reliability challenges, automating everything, and ensuring seamless delivery across environments.

Key Responsibilities

  • Design, implement, and maintain CI/CD pipelines using GitHub Actions and Jenkins, with GitHub for source control.
  • Manage and optimize infrastructure on AWS/GCP, ensuring scalability, security, and high availability.
  • Deploy and manage containerized applications using Docker and Kubernetes.
  • Build, automate, and manage infrastructure as code using Terraform.
  • Configure and manage automation tools and workflows using Ansible.
  • Monitor system performance, troubleshoot production issues, and ensure smooth operations.
  • Implement best practices for code management, release processes, and DevOps standards.
  • Collaborate closely with development teams to improve build pipelines and deployment workflows.
  • Write scripts in Python/Bash to automate operational tasks.
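As one small example of the operational scripting listed above: a Python helper that parses `kubectl get pods`-style output and flags pods stuck in CrashLoopBackOff. The NAME/READY/STATUS/RESTARTS/AGE column layout is the usual default; adjust the parsing for real-world output variations:

```python
def pods_to_restart(lines):
    """Return names of pods whose STATUS column reads CrashLoopBackOff.

    Expects `kubectl get pods`-style rows: NAME READY STATUS RESTARTS AGE.
    """
    bad = []
    for line in lines:
        parts = line.split()
        # parts[2] is the STATUS column in the default kubectl layout
        if len(parts) >= 3 and parts[2] == "CrashLoopBackOff":
            bad.append(parts[0])
    return bad
```

A fuller version would use the Kubernetes API (or `kubectl -o json`) rather than scraping columns, but the shape of the task is the same.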

Required Skills & Experience

  • 2+ years of hands-on experience as a DevOps Engineer or in a similar role.
  • Strong expertise in AWS or GCP cloud services.
  • Solid understanding of Kubernetes (deployment, scaling, service mesh, packaging).
  • Proficiency with Terraform for infrastructure automation.
  • Experience with Git, GitHub, and GitHub Actions for source control and CI/CD.
  • Good knowledge of Jenkins pipelines and automation.
  • Hands-on experience with Ansible for configuration management.
  • Strong scripting skills using Python or Bash.
  • Understanding of monitoring, logging, and security best practices.


Read more
Chennai
5 - 8 yrs
₹5L - ₹10L / yr
Google Cloud Platform (GCP)
CI/CD
FOSSA
Terraform

Role Summary

We are looking for a skilled DevSecOps Engineer to design, implement, and secure scalable CI/CD pipelines and cloud infrastructure on Google Cloud Platform. The role focuses on secure application delivery using Cloud Run, GKE, Terraform, and integrated DevSecOps practices to ensure compliance, reliability, and performance.

Key Responsibilities

  • Design and manage secure CI/CD pipelines using Cloud Build, Jenkins, or Tekton
  • Provision and manage GCP infrastructure using Terraform (IaC)
  • Deploy and manage containerized applications on Cloud Run and GKE
  • Implement container security, vulnerability scanning, SAST/DAST, and dependency scanning
  • Enforce IAM, VPC, and cloud security best practices
  • Monitor, log, and troubleshoot environments for performance and reliability
  • Enable development teams with DevSecOps frameworks and governance standards
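The vulnerability-scanning responsibilities above usually end in a pipeline gate: parse the scanner's report and fail the build on blocking severities. The JSON schema below is illustrative; real scanners (Trivy, Checkmarx, FOSSA) each have their own report layout:

```python
import json

def blocking_findings(report_json, blocked=("HIGH", "CRITICAL")):
    """Return IDs of findings at blocking severities.

    The report schema here is an assumption for illustration, not the
    actual output format of any particular scanner.
    """
    report = json.loads(report_json)
    return [
        finding["id"]
        for finding in report.get("vulnerabilities", [])
        if finding.get("severity") in blocked
    ]
```

In a Cloud Build or Jenkins step, a non-empty result would exit non-zero and block the release.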

Relevant Skills

  • Cloud: Google Cloud Platform (GKE, Cloud Run, IAM, VPC, Cloud Build, Artifact Registry)
  • CI/CD Tools: Jenkins, Tekton, Cloud Build
  • Infrastructure as Code: Terraform
  • Containers & Orchestration: Docker, Kubernetes (GKE)
  • Security Tools: Checkmarx (SAST/DAST), FOSSA, container vulnerability scanning tools
  • Monitoring & Observability: GCP Operations Suite (Cloud Monitoring & Logging)
  • Version Control: Git, branch and release management strategies
  • Other: DevSecOps practices, compliance automation, release orchestration


Read more
USA Based IT Company


Agency job
Bengaluru (Bangalore)
3 - 11 yrs
₹13L - ₹15L / yr
Google Cloud Platform (GCP)
azure
M365
MICROSOFT 365
ACCOUNT MANAGER
+9 more

Job Title: Account Manager – USA Market

Experience: 3–8 Years


Department: Sales

US Market experience mandatory


Any gender


Shift timing: 6 PM to 3 AM


Max CTC: 15 LPA (both positions)


In-house desk job


Individual Contributor role


B2B SaaS Company Experience Mandatory


5 days of working from the office.



Role Overview

We are seeking a results-driven Account Manager to manage and grow client relationships in the U.S. market, with a strong preference for candidates with experience in cloud migration services. The ideal candidate will have a proven track record in B2B sales, account growth, and consultative selling within IT services or cloud solutions.

You will be responsible for managing existing accounts, identifying expansion opportunities, driving revenue growth, and positioning cloud migration solutions (M365/Azure/GCP) that align with client business objectives.


Key Responsibilities

  • Manage and grow assigned accounts within the USA market.
  • Act as the primary point of contact for client stakeholders.
  • Identify upsell and cross-sell opportunities, particularly in:
      • Cloud migration & modernization
      • Infrastructure transformation
      • Managed cloud services
  • Drive end-to-end sales cycles from requirement gathering to deal closure.
  • Collaborate with pre-sales, cloud architects, and delivery teams to craft tailored cloud migration solutions.
  • Build long-term relationships with CXOs, IT Directors, and decision-makers.
  • Prepare account plans, revenue forecasts, and pipeline reports.
  • Meet and exceed quarterly and annual revenue targets.
  • Negotiate commercial terms and manage contract renewals.
  • Stay up to date with cloud trends, competitive landscape, and US market dynamics.


Required Qualifications

  • 3–8 years of experience in account management / IT services sales / technology consulting, handling USA clients (mandatory).
  • Proven experience selling cloud solutions and/or IT services.
  • Understanding of:
      • M365, Azure, or Google Cloud platforms
      • Cloud migration strategies (lift & shift, re-platform, re-architect)
      • Application modernization & infrastructure services
  • Strong consultative selling and negotiation skills.
  • Experience managing multi-million-dollar accounts (preferred).
  • Excellent communication and presentation skills.
  • Ability to work in US time zones as required.

Read more
Flipr
Arsalan Mobin
Posted by Arsalan Mobin
Bengaluru (Bangalore)
3 - 6 yrs
₹10L - ₹13L / yr
VAPT
Web application security
Cyber Security
DevSecOps
CI/CD
+13 more

About the role:

We are looking for a skilled and driven Security Engineer to join our growing security team. This role requires a hands-on professional who can evaluate and strengthen the security posture of our applications and infrastructure across Web, Android, iOS, APIs, and cloud-native environments.


The ideal candidate will also lead technical triage from our bug bounty program, integrate security into the DevOps lifecycle, and contribute to building a security-first engineering culture.


Required Skills & Experience:

● 3 to 6 years of solid hands-on experience in the VAPT domain

● Solid understanding of Web, Android, and iOS application security

● Experience with DevSecOps tools and integrating security into CI/CD

● Strong knowledge of cloud platforms (AWS/GCP/Azure) and their security models

● Familiarity with bug bounty programs and responsible disclosure practices

● Familiarity with tools like Burp Suite, MobSF, OWASP ZAP, Terraform, Checkov, etc.

● Good knowledge of API security

● Scripting experience (Python, Bash, or similar) for automation tasks
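One common shape for the scripting-for-automation skill above is a lightweight secret scanner. The two patterns below cover only a couple of common credential shapes and are illustrative; real tools (e.g. gitleaks, trufflehog) ship far larger rule sets:

```python
import re

# Regexes for two common credential shapes (illustrative, not exhaustive).
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(text):
    """Return (rule_name, matched_text) pairs for every hit in `text`."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Wired into a pre-commit hook or CI stage, a non-empty result would fail the check before the secret ever lands in the repository.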

Preferred Qualifications:

● OSCP, CEH, AWS Security Specialty, or similar certifications

● Experience working in a regulated environment (e.g., FinTech, InsurTech)


Responsibilities:

● Perform Security reviews, Vulnerability Assessments & Penetration Testing for Web, Android, iOS, and API endpoints

● Perform Threat Modelling & anticipate potential attack vectors and improve security architecture on complex or cross-functional components

● Identify and remediate OWASP Top 10 and mobile-specific vulnerabilities

● Conduct secure code reviews and red team assessments

● Integrate SAST, DAST, SCA, and secret scanning tools into CI/CD pipelines

● Automate security checks using tools like SonarQube, Snyk, Trivy, etc.

● Maintain and manage vulnerability scanning infrastructure

● Perform security assessments of AWS, Azure, and GCP environments, with an emphasis on container security, particularly for Docker and Kubernetes

● Implement guardrails for IAM, network segmentation, encryption, and cloud monitoring

● Contribute to infrastructure hardening for containers, Kubernetes, and virtual machines

● Triage bug bounty reports and coordinate remediation with engineering teams

● Act as the primary responder for external security disclosures

● Maintain documentation and metrics related to bug bounty and penetration testing activities

● Collaborate with developers and architects to ensure secure design decisions

● Lead security design reviews for new features and products

● Provide actionable risk assessments and mitigation plans to stakeholders

Read more
Wissen Technology

Janane Mohanasankaran
Posted by Janane Mohanasankaran
Mumbai, Pune
7 - 13 yrs
Best in industry
Java
Spring Boot
Microservices
RESTful APIs
Amazon Web Services (AWS)
+1 more

JOB DESCRIPTION:


Location: Pune, Mumbai

Mode of Work : 3 days from Office


Key areas: DSA (Collections, HashMaps, Trees, LinkedLists, Arrays, etc.), core OOP concepts (multithreading, multiprocessing, polymorphism, inheritance, etc.), annotations in Spring and Spring Boot, Java 8 key features, database optimization, microservices, and REST APIs.

  • Design, develop, and maintain low-latency, high-performance enterprise applications using Core Java (Java 5.0 and above).
  • Implement and integrate APIs using Spring Framework and Apache CXF.
  • Build microservices-based architecture for scalable and distributed systems.
  • Collaborate with cross-functional teams for high/low-level design, development, and deployment of software solutions.
  • Optimize performance through efficient multithreading, memory management, and algorithm design.
  • Ensure best coding practices, conduct code reviews, and perform unit/integration testing.
  • Work with RDBMS (preferably Sybase) for backend data integration.
  • Analyze complex business problems and deliver innovative technology solutions in the financial/trading domain.
  • Work in Unix/Linux environments for deployment and troubleshooting.
Read more
NonStop io Technologies Pvt Ltd
Kalyani Wadnere
Posted by Kalyani Wadnere
Pune
8 - 15 yrs
Best in industry
JavaScript
React.js
Node.js
TypeScript
Amazon Web Services (AWS)
+6 more

About NonStop io Technologies:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

We are looking for an Engineering Manager who combines technical depth with leadership strength. This role involves leading one or more product engineering pods, driving architecture decisions, ensuring delivery excellence, and working closely with stakeholders to build scalable web and mobile technology solutions. As a key part of our leadership team, you’ll play a pivotal role in mentoring engineers, improving processes, and fostering a culture of ownership, innovation, and continuous learning.


Roles and Responsibilities:

● Team Management: Lead, coach, and grow a team of 15-20 software engineers, tech leads, and QA engineers

● Technical Leadership: Guide the team in building scalable, high-performance web and mobile applications using modern frameworks and technologies

● Architecture Ownership: Architect robust, secure, and maintainable technology solutions aligned with product goals

● Project Execution: Ensure timely and high-quality delivery of projects by driving engineering best practices, agile processes, and cross-functional collaboration

● Stakeholder Collaboration: Act as a bridge between business stakeholders, product managers, and engineering teams to translate requirements into technology plans

● Culture & Growth: Help build and nurture a culture of technical excellence, accountability, and continuous improvement

● Hiring & Onboarding: Contribute to recruitment efforts, onboarding, and learning & development of team members.


Requirements:

● 8+ years of software development experience, with 2+ years in a technical leadership or engineering manager role

● Proven experience in architecting and building web and mobile applications at scale

● Hands-on knowledge of technologies such as JavaScript/TypeScript, Angular, React, Node.js, .NET, Java, Python, or similar stacks

● Solid understanding of cloud platforms (AWS/Azure/GCP) and DevOps practices

● Strong interpersonal skills with a proven ability to manage stakeholders and lead diverse teams

● Excellent problem-solving, communication, and organizational skills

● Nice to have:

  • Prior experience in working with startups or product-based companies
  • Experience mentoring tech leads and helping shape engineering culture
  • Exposure to AI/ML, data engineering, or platform thinking


Why Join Us?

● Opportunity to work on a cutting-edge healthcare product

● A collaborative and learning-driven environment

● Exposure to AI and software engineering innovations

● Excellent work ethics and culture.



If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

Read more
NonStop io Technologies Pvt Ltd
Kalyani Wadnere
Posted by Kalyani Wadnere
Pune
2 - 5 yrs
Best in industry
Data Structures
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Windows Azure
Scikit-Learn
+3 more

About NonStop io Technologies

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

We're seeking an AI/ML Engineer to join our team. As an AI/ML Engineer, you will be responsible for designing, developing, and implementing artificial intelligence (AI) and machine learning (ML) solutions to solve real-world business problems. You will work closely with engineering teams, including software engineers, domain experts, and product managers, to deploy and integrate applied AI/ML solutions into the products that are being built at NonStop io. Your role will involve researching cutting-edge algorithms and data processing techniques, and implementing scalable solutions to drive innovation and improve the overall user experience.


Responsibilities

● Applied AI/ML engineering; Building engineering solutions on top of the AI/ML tooling available in the industry today. Eg: Engineering APIs around OpenAI

● AI/ML Model Development: Design, develop, and implement machine learning models and algorithms that address specific business challenges, such as natural language processing, computer vision, recommendation systems, anomaly detection, etc.

● Data Preprocessing and Feature Engineering: Cleanse, preprocess, and transform raw data into suitable formats for training and testing AI/ML models. Perform feature engineering to extract relevant features from the data

● Model Training and Evaluation: Train and validate AI/ML models using diverse datasets to achieve optimal performance. Employ appropriate evaluation metrics to assess model accuracy, precision, recall, and other relevant metrics

● Data Visualization: Create clear and insightful data visualizations to aid in understanding data patterns, model behaviour, and performance metrics

● Deployment and Integration: Collaborate with software engineers and DevOps teams to deploy AI/ML models into production environments and integrate them into various applications and systems

● Data Security and Privacy: Ensure compliance with data privacy regulations and implement security measures to protect sensitive information used in AI/ML processes

● Continuous Learning: Stay updated with the latest advancements in AI/ML research, tools, and technologies, and apply them to improve existing models and develop novel solutions

● Documentation: Maintain detailed documentation of the AI/ML development process, including code, models, algorithms, and methodologies for easy understanding and future reference.
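The model-evaluation responsibility above can be grounded with a tiny example: precision and recall computed from labels and predictions in pure Python (the label values are hypothetical; frameworks like scikit-learn provide the same metrics at scale):

```python
def precision_recall(y_true, y_pred, positive=1):
    """Compute precision and recall for the given positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Precision answers "of what we flagged, how much was right?"; recall answers "of what was there, how much did we find?" — the trade-off between them drives most threshold tuning.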


Qualifications & Skills

● Bachelor's, Master's, or PhD in Computer Science, Data Science, Machine Learning, or a related field. Advanced degrees or certifications in AI/ML are a plus

● Proven experience as an AI/ML Engineer, Data Scientist, or related role, ideally with a strong portfolio of AI/ML projects

● Proficiency in programming languages commonly used for AI/ML. Preferably Python

● Familiarity with popular AI/ML libraries and frameworks, such as TensorFlow, PyTorch, scikit-learn, etc.

● Familiarity with popular AI/ML models such as GPT-3, GPT-4, Llama 2, BERT, etc.

● Strong understanding of machine learning algorithms, statistics, and data structures

● Experience with data preprocessing, data wrangling, and feature engineering

● Knowledge of deep learning architectures, neural networks, and transfer learning

● Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud) for scalable AI/ML deployment

● Solid understanding of software engineering principles and best practices for writing maintainable and scalable code

● Excellent analytical and problem-solving skills, with the ability to think critically and propose innovative solutions

● Effective communication skills to collaborate with cross-functional teams and present complex technical concepts to non-technical stakeholders

Read more
CLOUDSUFI

Ayushi Dwivedi
Posted by Ayushi Dwivedi
Remote only
8 - 12 yrs
₹30L - ₹40L / yr
Google Cloud Platform (GCP)
Python
Enterprise integration

Technical Project Manager

Current Location - Bangalore

Remote (with quarterly visit to Noida)


You can share your resume at ayushi.dwivedi at the rate cloudsufi.com


Role Overview

We are seeking a highly technical and execution-oriented Technical Project Manager (TPM) to lead the delivery of core Platform Engineering capabilities and critical Enterprise Application Integrations (EAI) for CLOUDSUFI customers. Unlike a traditional PM, this role is deeply embedded in the "how" of technical delivery—ensuring that complex, cloud-native infrastructure projects are executed on time, within scope, and with high technical integrity.

The ideal candidate acts as the bridge between high-level architectural design and day-to-day engineering execution, possessing deep expertise in GCP (Google Cloud Platform) and modern integration patterns.


Key Responsibilities

  • Technical Execution & Delivery: Lead the end-to-end project lifecycle for platform engineering and EAI initiatives. Convert high-level roadmaps into actionable technical workstreams, ensuring milestones are met across multi-disciplinary teams.
  • Sprint & Release Management: Facilitate technical grooming, sprint planning, and daily stand-ups. Manage the velocity and throughput of the platform engineering team, ensuring that technical debt is balanced against feature delivery.
  • Dependency & Risk Mitigation: Proactively identify and resolve technical blockers, resource constraints, and cross-team dependencies. Maintain a rigorous risk register for complex integration projects involving third-party systems.
  • Technical Scoping & Documentation: Collaborate with Architects to translate business requirements into detailed technical specifications, data flow diagrams, and API documentation. Ensure the technical team has a clear, unambiguous path to implementation.
  • Stakeholder Coordination: Serve as the primary technical point of contact for external customers and internal business units. Communicate project status, technical risks, and architectural trade-offs to both executive and technical audiences.
  • Quality & Operational Excellence: Define and track project-based KPIs such as deployment frequency, mean time to recovery (MTTR), and integration success rates. Ensure all deliveries meet CLOUDSUFI’s high standards for security and scalability.
  • GCP Ecosystem Oversight: Direct the implementation of services within the Google Cloud ecosystem, ensuring projects leverage GCP best practices for cost-optimization and performance.

Experience and Qualifications

  • Experience: 8+ years of experience in Technical Project Management or Engineering Management, with at least 3 years specifically focused on Cloud Infrastructure, Platform Engineering, or EAI.
  • Technical Depth:
      • GCP (mandatory): Deep hands-on familiarity with Google Cloud Platform services (GKE, Pub/Sub, Cloud Functions, Apigee).
      • Integrations: Proven track record of delivering large-scale EAI projects (API Gateways, Event-Driven Architecture, Service Mesh).
      • Cloud-Native: Strong understanding of Kubernetes, Docker, CI/CD pipelines (GitLab/Jenkins), and Infrastructure as Code (Terraform).
  • Project Leadership: Exceptional ability to lead "deep-tech" teams. You should be able to challenge technical estimates and understand code-level blockers without necessarily writing the code yourself.
  • Agile Mastery: Expert-level proficiency in Scrum and Kanban. Advanced skills in Jira (setting up complex workflows, dashboards, and automation) and Confluence.
  • Education: Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related engineering field.
  • Certifications:
      • Required: PMP, PRINCE2, or CSM (Certified Scrum Master).
      • Preferred: Google Cloud Professional Cloud Architect or Professional Data Engineer certifications.


Read more
Hyderabad
4 - 8 yrs
₹20L - ₹30L / yr
Generative AI
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
Large Language Models (LLM)
Retrieval Augmented Generation (RAG)
+8 more

We are seeking a talented AI/ML Engineer with strong hands-on experience in Generative AI and Large Language Models (LLMs) to join our Business Intelligence team. The role involves designing, developing, and deploying advanced AI/ML and GenAI-driven solutions to unlock business insights and enhance data-driven decision-making.


Key Responsibilities:

• Collaborate with business analysts and stakeholders to identify AI/ML and Generative AI use cases.

• Design and implement ML models for predictive analytics, segmentation, anomaly detection, and forecasting.

• Develop and deploy Generative AI solutions using LLMs (GPT, LLaMA, Mistral, etc.).

• Build and maintain Retrieval-Augmented Generation (RAG) pipelines and semantic search systems.

• Work with vector databases (FAISS, Pinecone, ChromaDB) for embedding storage and retrieval.

• Develop end-to-end AI/ML pipelines from data preprocessing to deployment.

• Integrate AI/ML and GenAI solutions into BI dashboards and reporting tools.

• Optimize models for performance, scalability, and reliability.

• Maintain documentation and promote knowledge sharing within the team.


Mandatory Requirements:

• 4+ years of relevant experience as an AI/ML Engineer.

• Hands-on experience in Generative AI and Large Language Models (LLMs) – Mandatory.

• Experience implementing RAG pipelines and prompt engineering techniques.

• Strong programming skills in Python.

• Experience with ML frameworks (TensorFlow, PyTorch, scikit-learn).

• Experience with vector databases (FAISS, Pinecone, ChromaDB).

• Strong understanding of SQL and database systems.

• Experience integrating AI solutions into BI tools (Power BI, Tableau).

• Strong analytical, problem-solving, and communication skills.

Good to Have:

• Experience with cloud platforms (AWS, Azure, GCP).

• Experience with Docker or Kubernetes.

• Exposure to NLP, computer vision, or deep learning use cases.

• Experience in MLOps and CI/CD pipelines
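The RAG pipeline work described above starts with chunking documents and retrieving the most relevant chunks for a query. A vector database (FAISS, Pinecone, ChromaDB) would rank chunks by embedding similarity; this minimal sketch substitutes brute-force lexical overlap so the pipeline shape is visible, with chunk size and overlap as illustrative parameters:

```python
def chunk(text, size=8, overlap=2):
    """Split text into overlapping word windows, a common RAG chunking strategy."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size]) for i in range(0, max(len(words) - overlap, 1), step)]

def retrieve(query, chunks, k=1):
    """Rank chunks by word overlap with the query; a vector DB would rank by
    embedding similarity instead, but the interface is the same."""
    q = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)
    return scored[:k]

doc = "billing disputes are handled by the finance team within five business days after a ticket is filed"
chunks = chunk(doc, size=6, overlap=2)
print(retrieve("who handles billing disputes", chunks))  # ['billing disputes are handled by the']
```

The retrieved chunks are then packed into the LLM prompt as grounding context, which is the "augmented generation" half of RAG.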

Read more
Global Digital Transformation Solutions Provider

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune, Trivandrum, Thiruvananthapuram
8 - 10 yrs
₹20L - ₹24L / yr
skill iconJava
skill iconPython
API
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)
+13 more

Job Details

Job Title: Lead Software Engineer - Java, Python, API Development

Industry: Global digital transformation solutions provider

Domain - Information technology (IT)

Experience Required: 8-10 years

Employment Type: Full Time

Job Location: Pune & Trivandrum/Thiruvananthapuram

CTC Range: Best in Industry

 

Job Description

Job Summary

We are seeking a Lead Software Engineer with strong hands-on expertise in Java and Python to design, build, and optimize scalable backend applications and APIs. The ideal candidate will bring deep experience in cloud technologies, large-scale data processing, and leading the design of high-performance, reliable backend systems.

 

Key Responsibilities

  • Design, develop, and maintain backend services and APIs using Java and Python
  • Build and optimize Java-based APIs for large-scale data processing
  • Ensure high performance, scalability, and reliability of backend systems
  • Architect and manage backend services on cloud platforms (AWS, GCP, or Azure)
  • Collaborate with cross-functional teams to deliver production-ready solutions
  • Lead technical design discussions and guide best practices

 

Requirements

  • 8+ years of experience in backend software development
  • Strong proficiency in Java and Python
  • Proven experience building scalable APIs and data-driven applications
  • Hands-on experience with cloud services and distributed systems
  • Solid understanding of databases, microservices, and API performance optimization
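API performance optimization usually begins with caching hot responses. A minimal TTL-cache sketch using only the Python standard library; the decorator name and the simulated endpoint are illustrative, not from any framework:

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds):
    """Cache results per argument tuple for a fixed TTL, a common first step
    in reducing load on a slow database or downstream API."""
    def deco(fn):
        store = {}
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and now - hit[1] < ttl_seconds:
                return hit[0]          # fresh cached value: skip the real call
            value = fn(*args)
            store[args] = (value, now)
            return value
        return wrapper
    return deco

calls = []

@ttl_cache(ttl_seconds=60)
def get_user(user_id):
    calls.append(user_id)              # stands in for a slow DB or API call
    return {"id": user_id}

get_user(1)
get_user(1)                            # served from cache; backend hit once
print(calls)  # [1]
```

Production services would typically move this store into Redis or Memcached so the cache survives restarts and is shared across instances.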

 

Nice to Have

  • Experience with Spring Boot, Flask, or FastAPI
  • Familiarity with Docker, Kubernetes, and CI/CD pipelines
  • Exposure to Kafka, Spark, or other big data tools

 

Skills

Java, Python, API Development, Data Processing, AWS Backend

 


 

Must-Haves

Java (8+ years), Python (8+ years), API Development (8+ years), Cloud Services (AWS/GCP/Azure), Database & Microservices

8+ years of experience in backend software development

Strong proficiency in Java and Python

Proven experience building scalable APIs and data-driven applications

Hands-on experience with cloud services and distributed systems

Solid understanding of databases, microservices, and API performance optimization

Mandatory Skills: Java API AND AWS

 

******

Notice period - 0 to 15 days only

Job stability is mandatory

Location: Pune, Trivandrum

Read more
NonStop io Technologies Pvt Ltd
Kalyani Wadnere
Posted by Kalyani Wadnere
Pune
3 - 5 yrs
Best in industry
skill iconReact.js
skill iconAngular (2+)
skill iconVue.js
skill iconPython
skill iconJava
+11 more

About NonStop io Technologies:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

We are looking for a passionate and experienced Full Stack Engineer to join our engineering team. The ideal candidate will have strong experience in both frontend and backend development, with the ability to design, build, and scale high-quality applications. You will collaborate with cross-functional teams to deliver robust and user-centric solutions.


Roles and Responsibilities:

● Design, develop, and maintain scalable web applications

● Build responsive and high-performance user interfaces

● Develop secure and efficient backend services and APIs

● Collaborate with product managers, designers, and QA teams to deliver features

● Write clean, maintainable, and testable code

● Participate in code reviews and contribute to engineering best practices

● Optimize applications for speed, performance, and scalability

● Troubleshoot and resolve production issues

● Contribute to architectural decisions and technical improvements.


Requirements:

● 3 to 5 years of experience in full-stack development

● Strong proficiency in frontend technologies such as React, Angular, or Vue

● Solid experience with backend technologies such as Node.js, .NET, Java, or Python

● Experience in building RESTful APIs and microservices

● Strong understanding of databases such as PostgreSQL, MySQL, MongoDB, or SQL Server

● Experience with version control systems like Git

● Familiarity with CI/CD pipelines

● Good understanding of cloud platforms such as AWS, Azure, or GCP

● Strong understanding of software design principles and data structures

● Experience with containerization tools such as Docker

● Knowledge of automated testing frameworks

● Experience working in Agile environments
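RESTful APIs over large collections typically page their responses; the offset/limit arithmetic is the same in any framework. A small sketch (field names follow common convention, not a specific API):

```python
def paginate(items, page, page_size):
    """Slice a collection for a REST-style paged response with navigation metadata."""
    total = len(items)
    start = (page - 1) * page_size
    return {
        "data": items[start:start + page_size],
        "page": page,
        "total_pages": -(-total // page_size),   # ceiling division
        "has_next": start + page_size < total,
    }

print(paginate(list(range(1, 8)), page=2, page_size=3))
# {'data': [4, 5, 6], 'page': 2, 'total_pages': 3, 'has_next': True}
```

Against a real database the slice would become a `LIMIT`/`OFFSET` (or keyset) query rather than an in-memory list slice.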


Why Join Us?

● Opportunity to work on a cutting-edge healthcare product

● A collaborative and learning-driven environment

● Exposure to AI and software engineering innovations

● Excellent work ethic and culture


If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

Read more
NonStop io Technologies Pvt Ltd
Kalyani Wadnere
Posted by Kalyani Wadnere
Pune
4 - 7 yrs
Best in industry
DevOps
Reliability engineering
skill iconAmazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
+5 more

About NonStop io Technologies:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

We are looking for a skilled and proactive DevOps Engineer to join our growing engineering team. The ideal candidate will have hands-on experience in building, automating, and managing scalable infrastructure and CI/CD pipelines. You will work closely with development, QA, and product teams to ensure reliable deployments, performance, and system security.


Roles and Responsibilities:

● Design, implement, and manage CI/CD pipelines for multiple environments

● Automate infrastructure provisioning using Infrastructure as Code tools

● Manage and optimize cloud infrastructure on AWS, Azure, or GCP

● Monitor system performance, availability, and security

● Implement logging, monitoring, and alerting solutions

● Collaborate with development teams to streamline release processes

● Troubleshoot production issues and ensure high availability

● Implement containerization and orchestration solutions such as Docker and Kubernetes

● Enforce DevOps best practices across the engineering lifecycle

● Ensure security compliance and data protection standards are maintained


Requirements:

● 4 to 7 years of experience in DevOps or Site Reliability Engineering

● Strong experience with cloud platforms such as AWS, Azure, or GCP - Relevant Certifications will be a great advantage

● Hands-on experience with CI/CD tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps

● Experience working in microservices architecture

● Exposure to DevSecOps practices

● Experience in cost optimization and performance tuning in cloud environments

● Experience with Infrastructure as Code tools such as Terraform, CloudFormation, or ARM

● Strong knowledge of containerization using Docker

● Experience with Kubernetes in production environments

● Good understanding of Linux systems and shell scripting

● Experience with monitoring tools such as Prometheus, Grafana, ELK, or Datadog

● Strong troubleshooting and debugging skills

● Understanding of networking concepts and security best practices
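The troubleshooting and high-availability work above leans on retry-with-backoff health probes, the same pattern monitoring agents and Kubernetes liveness checks use. A minimal sketch; the probe function and delay values are illustrative:

```python
import time

def probe_with_backoff(probe, max_attempts=4, base_delay=0.5, sleep=time.sleep):
    """Retry a health probe with exponential backoff; True on first success."""
    for attempt in range(max_attempts):
        if probe():
            return True
        if attempt < max_attempts - 1:
            sleep(base_delay * (2 ** attempt))   # 0.5 s, 1 s, 2 s, ...
    return False

# Simulated flaky service: fails twice, then recovers.
responses = iter([False, False, True])
delays = []
ok = probe_with_backoff(lambda: next(responses), sleep=delays.append)
print(ok, delays)  # True [0.5, 1.0]
```

Injecting `sleep` keeps the backoff testable without real waits, which is the same trick used when unit-testing alerting and retry logic in CI.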


Why Join Us?

● Opportunity to work on a cutting-edge healthcare product

● A collaborative and learning-driven environment

● Exposure to AI and software engineering innovations

● Excellent work ethic and culture


If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

Read more
House Of Shipping
Chennai
10 - 14 yrs
₹10L - ₹15L / yr
MuleSoft
Warehouse Management System (WMS)
API
Google Cloud Platform (GCP)
JSON

Key Responsibilities 

  • Lead the design and development of MuleSoft APIs following API-led connectivity principles (System, Process, Experience layers). 
  • Architect and implement complex integrations between OMS/WMS platforms (Manhattan preferred) and external systems including ERP, TMS, marketplaces, and shopping carts. 
  • Drive e-commerce integration initiatives for order ingestion, inventory synchronization, returns processing, and shipment tracking. 
  • Deploy and integrate OMS solutions hosted on Google Cloud Platform, leveraging services such as Cloud Run, Pub/Sub, and Cloud Storage. 
  • Manage Apigee API Gateway configurations, including proxies, policies, authentication, and analytics. 
  • Develop and maintain DataWeave transformations for multi-format data (JSON, XML, CSV, EDI). 
  • Mentor junior MuleSoft developers and enforce best practices for integration design, coding standards, and performance optimization. 
  • Participate in CI/CD pipeline setup and manage automated deployments for MuleSoft applications. 
  • Collaborate with product, architecture, and QA teams to ensure solutions meet business, performance, and security requirements. 
  • Monitor and troubleshoot integration flows to ensure high availability, scalability, and reliability
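The multi-format DataWeave transformations above are declarative mappings between representations such as JSON and CSV. The same shape of mapping, sketched in Python purely for illustration; the order fields (`orderId`, `items`, `sku`, `qty`) are hypothetical, not from Manhattan or any specific OMS:

```python
import csv
import io
import json

def order_to_csv(order_json):
    """Flatten a JSON order into one CSV line per item, analogous to a
    DataWeave JSON-to-CSV transformation in a Mule flow."""
    order = json.loads(order_json)
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["order_id", "sku", "qty"])
    for item in order["items"]:
        writer.writerow([order["orderId"], item["sku"], item["qty"]])
    return buf.getvalue().strip()

payload = '{"orderId": "SO-1001", "items": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 1}]}'
print(order_to_csv(payload))
```

In MuleSoft the equivalent would be a few lines of DataWeave with `output application/csv`; the point here is only the header-plus-line-per-item flattening.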


 

Required Qualifications 

  • Bachelor’s degree in Computer Science, Information Systems, or related field. 
  • 5+ years of experience in MuleSoft Anypoint Platform development (Mule 4). 
  • Proven experience with OMS/WMS integrations (Manhattan preferred) in supply chain or logistics domains. 
  • Strong experience integrating shopping carts and marketplaces (Shopify, Magento, BigCommerce, Amazon, Walmart). 
  • Proficiency in Apigee API Gateway (proxy design, security, analytics). 
  • Experience with Google Cloud Platform services for integration deployments. 
  • Strong DataWeave transformation skills for JSON, XML, CSV, and EDI data mapping. 
  • Expertise in REST/SOAP API design and integration best practices. 
  • Familiarity with B2B EDI transactions (888, 840, 850, 856, 810). 


Read more
Global Digital Transformation Solutions Provider

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Trivandrum, Thiruvananthapuram
9 - 12 yrs
₹21L - ₹27L / yr
skill iconJava
Spring
Apache Kafka
SQL
skill iconPostgreSQL
+16 more

JOB DETAILS:

Job Title: Java Lead-Java, MS, Kafka-TVM - Java (Core & Enterprise), Spring/Micronaut, Kafka

Industry: Global Digital Transformation Solutions Provider

Salary: Best in Industry

Experience: 9 to 12 years

Location: Trivandrum, Thiruvananthapuram

 

Job Description

Experience

  • 9+ years of experience in Java-based backend application development
  • Proven experience building and maintaining enterprise-grade, scalable applications
  • Hands-on experience working with microservices and event-driven architectures
  • Experience working in Agile and DevOps-driven development environments

 

Mandatory Skills

  • Advanced proficiency in core Java and enterprise Java concepts
  • Strong hands-on experience with Spring Framework and/or Micronaut for building scalable backend applications
  • Strong expertise in SQL, including database design, query optimization, and performance tuning
  • Hands-on experience with PostgreSQL or other relational database management systems
  • Strong experience with Kafka or similar event-driven messaging and streaming platforms
  • Practical knowledge of CI/CD pipelines using GitLab
  • Experience with Jenkins for build automation and deployment processes
  • Strong understanding of GitLab for source code management and DevOps workflows

 

Responsibilities

  • Design, develop, and maintain robust, scalable, and high-performance backend solutions
  • Develop and deploy microservices using Spring or Micronaut frameworks
  • Implement and integrate event-driven systems using Kafka
  • Optimize SQL queries and manage PostgreSQL databases for performance and reliability
  • Build, implement, and maintain CI/CD pipelines using GitLab and Jenkins
  • Collaborate with cross-functional teams including product, QA, and DevOps to deliver high-quality software solutions
  • Ensure code quality through best practices, reviews, and automated testing

 

Good-to-Have Skills

  • Strong problem-solving and analytical abilities
  • Experience working with Agile development methodologies such as Scrum or Kanban
  • Exposure to cloud platforms such as AWS, Azure, or GCP
  • Familiarity with containerization and orchestration tools such as Docker or Kubernetes

 

Skills: Java, Spring Boot, Kafka, CI/CD, PostgreSQL, GitLab

 

Must-Haves

Java Backend (9+ years), Spring Framework/Micronaut, SQL/PostgreSQL, Kafka, CI/CD (GitLab/Jenkins)

Advanced proficiency in core Java and enterprise Java concepts

Strong hands-on experience with Spring Framework and/or Micronaut for building scalable backend applications

Strong expertise in SQL, including database design, query optimization, and performance tuning

Hands-on experience with PostgreSQL or other relational database management systems

Strong experience with Kafka or similar event-driven messaging and streaming platforms

Practical knowledge of CI/CD pipelines using GitLab

Experience with Jenkins for build automation and deployment processes

Strong understanding of GitLab for source code management and DevOps workflows

 

 

*******

Notice period - 0 to 15 days only

Job stability is mandatory

Location: only Trivandrum

F2F Interview on 21st Feb 2026

 

Read more
Deqode

at Deqode

1 recruiter
Samiksha Agrawal
Posted by Samiksha Agrawal
Remote only
4 - 6 yrs
₹4L - ₹18L / yr
Google Cloud Platform (GCP)
databricks
Apache Spark

Job Title: Data Engineer – GCP (Fullstack)

Location: Remote (Chennai Preferred)

Shift: Day Shift

Experience: 4+ Years


Role Overview

We are seeking a skilled Data Engineer / Platform Engineer to drive value delivery within cross-functional squads by leveraging strong technical expertise. The role involves designing, building, and supporting scalable data and application solutions using GCP, Databricks, Apache Spark, and cloud-native services, while following Agile and engineering best practices.


Key Responsibilities

  • Design, build, and maintain backend services and APIs using C#, deployed on GCP Cloud Run.
  • Develop and support scalable data and application solutions using Databricks.
  • Implement and manage data governance, security, and lineage using Unity Catalog.
  • Utilize Apache Spark for large-scale data processing and performance optimization.
  • Build, optimize, and maintain robust data pipelines and transformations.
  • Work closely with cross-functional teams in Agile squads for solution delivery.
  • Implement CI/CD pipelines (preferably using Azure DevOps).
  • Manage Infrastructure as Code (IaC) using Terraform on GCP.
  • Work with Firestore (NoSQL) and relational databases like PostgreSQL/MySQL.
  • Perform debugging, troubleshooting, and performance tuning of applications and data workloads.
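The Spark work above is, at its core, keyed map/reduce over partitioned data. A local sketch of the same logic in plain Python; Spark's `reduceByKey` performs this fold per key across distributed executors, and the records here are made up:

```python
from collections import defaultdict
from functools import reduce

def reduce_by_key(records, key_fn, value_fn, reducer):
    """Local analogue of Spark's map + reduceByKey: extract key/value per record,
    group values by key, then fold each group with the reducer."""
    grouped = defaultdict(list)
    for rec in records:
        grouped[key_fn(rec)].append(value_fn(rec))
    return {k: reduce(reducer, vs) for k, vs in grouped.items()}

sales = [
    {"region": "south", "amount": 10},
    {"region": "north", "amount": 5},
    {"region": "south", "amount": 7},
]
totals = reduce_by_key(sales, lambda r: r["region"], lambda r: r["amount"], lambda a, b: a + b)
print(totals)  # {'south': 17, 'north': 5}
```

The Spark version distributes `grouped` across partitions and shuffles by key, but the transformation a pipeline author writes is this same key/value/reducer triple.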


Required Skills & Expertise

  • 4+ years of experience in Data Engineering / Platform Engineering.
  • Strong hands-on experience with Databricks and Apache Spark.
  • Experience with Unity Catalog for governance and access control.
  • Strong knowledge of GCP services, especially Cloud Run.
  • Proficiency in building REST APIs using C#.
  • Experience with CI/CD pipelines (Azure DevOps preferred).
  • Experience with Terraform (IaC on GCP).
  • Hands-on experience with Firestore and relational databases.
  • Strong analytical, problem-solving, and debugging skills.
  • Experience working in Agile environments.




Read more
Well established Fintech Co.

Well established Fintech Co.

Agency job
via Infinium Associate by Toshi Srivastava
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
8 - 12 yrs
₹30L - ₹35L / yr
skill iconData Science
skill iconPython
Artificial Intelligence (AI)
Google Vertex AI
Google Cloud Platform (GCP)

We are looking for a visionary and hands-on Head of Data Science and AI with at least 6 years of experience to lead our data strategy and analytics initiatives. In this pivotal role, you will take full ownership of the end-to-end technology stack, driving a data-analytics-driven business roadmap that delivers tangible ROI. You will not only guide high-level strategy but also remain hands-on in model design and deployment, ensuring our data capabilities directly empower executive decision-making.

If you are passionate about leveraging AI and Data to transform financial services, we invite you to lead our data transformation journey.

Key Responsibilities

Strategic Leadership & Roadmap

  • End-to-End Tech Stack Ownership: Define, own, and evolve the complete data science and analytics technology stack to ensure scalability and performance.
  • Business Roadmap & ROI: Develop and execute a data analytics-driven business roadmap, ensuring every initiative is aligned with organizational goals and delivers measurable Return on Investment (ROI).
  • Executive Decision Support: Create and present high-impact executive decision packs, providing actionable insights that drive key business strategies.

Model Design & Deployment (Hands-on)

  • Hands-on Development: Lead by example with hands-on involvement in AI modeling, machine learning model design, and algorithm development using Python.
  • Deployment & Ops: Oversee and execute the deployment of models into production environments, ensuring reliability, scalability, and seamless integration with existing systems.
  • Leverage expert-level knowledge of Google Cloud Agentic AI, Vertex AI and BigQuery to build advanced predictive models and data pipelines.
  • Develop business dashboards for various sales channels and drive data-driven decision-making to improve sales and reduce costs.

Governance & Quality

  • Data Governance: Establish and enforce robust data governance frameworks, ensuring data accuracy, security, consistency, and compliance across the organization.
  • Best Practices: Champion best practices in coding, testing, and documentation to build a world-class data engineering culture.

Collaboration & Innovation

  • Work closely with Product, Engineering, and Business leadership to identify opportunities for AI/ML intervention.
  • Stay ahead of industry trends in AI, Generative AI, and financial modeling to keep Bajaj Capital at the forefront of innovation.

Must-Have Skills & Experience

Experience:

  • At least 7 years of industry experience in Data Science, Machine Learning, or a related field.
  • Proven track record of applying AI and leading data science teams or initiatives that resulted in significant business impact.

Technical Proficiency:

  • Core Languages: Proficiency in Python is mandatory, with strong capabilities in libraries such as Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch.
  • Cloud Data Stack: Expert-level command of Google Cloud Platform (GCP), specifically Agentic AI, Vertex AI and BigQuery.
  • AI & Analytics Stack: Deep understanding of the modern AI and Data Analytics stack, including data warehousing, ETL/ELT pipelines, and MLOps.
  • Visualization: PowerBI in combination with custom web/mobile applications.

Leadership & Soft Skills:

  • Ability to translate complex technical concepts into clear business value for stakeholders.
  • Strong ownership mindset with the ability to manage end-to-end project lifecycles.
  • Experience in creating governance structures and executive-level reporting.

Good-to-Have / Plus

  • Domain Expertise: Prior experience in the BFSI domain (Wealth Management, Insurance, Mutual Funds, or Fintech).
  • Certifications: Google Professional Data Engineer or Google Professional Machine Learning Engineer certifications.
  • Advanced AI: Experience with Generative AI (LLMs), RAG architectures, and real-time analytics.


Read more
Healthcare Industry

Healthcare Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹25L - ₹30L / yr
MLOps
Generative AI
skill iconPython
Natural Language Processing (NLP)
skill iconMachine Learning (ML)
+22 more

JOB DETAILS:

* Job Title: Principal Data Scientist

* Industry: Healthcare

* Salary: Best in Industry

* Experience: 6-10 years

* Location: Bengaluru

 

Preferred Skills: Generative AI, NLP & ASR, Transformer Models, Cloud Deployment, MLOps

 

Criteria:

  1. Candidate must have 7+ years of experience in ML, Generative AI, NLP, ASR, and LLMs (preferably healthcare).
  2. Candidate must have strong Python skills with hands-on experience in PyTorch/TensorFlow and transformer model fine-tuning.
  3. Candidate must have experience deploying scalable AI solutions on AWS/Azure/GCP with MLOps, Docker, and Kubernetes.
  4. Candidate must have hands-on experience with LangChain, OpenAI APIs, vector databases, and RAG architectures.
  5. Candidate must have experience integrating AI with EHR/EMR systems, ensuring HIPAA/HL7/FHIR compliance, and leading AI initiatives.

 

Job Description

Principal Data Scientist

(Healthcare AI | ASR | LLM | NLP | Cloud | Agentic AI)

 

Job Details

  • Designation: Principal Data Scientist (Healthcare AI, ASR, LLM, NLP, Cloud, Agentic AI)
  • Location: Hebbal Ring Road, Bengaluru
  • Work Mode: Work from Office
  • Shift: Day Shift
  • Reporting To: SVP
  • Compensation: Best in the industry (for suitable candidates)

 

Educational Qualifications

  • Ph.D. or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field
  • Technical certifications in AI/ML, NLP, or Cloud Computing are an added advantage

 

Experience Required

  • 7+ years of experience solving real-world problems using Natural Language Processing (NLP), Automatic Speech Recognition (ASR), Large Language Models (LLMs), and Machine Learning (ML), preferably within the healthcare domain
  • Experience in Agentic AI, cloud deployments, and fine-tuning transformer-based models is highly desirable

Role Overview

This position sits within a healthcare division of Focus Group specializing in medical coding and scribing.

We are building a suite of AI-powered, state-of-the-art web and mobile solutions designed to:

  • Reduce administrative burden in EMR data entry
  • Improve provider satisfaction and productivity
  • Enhance quality of care and patient outcomes

Our solutions combine cutting-edge AI technologies with live scribing services to streamline clinical workflows and strengthen clinical decision-making.

The Principal Data Scientist will lead the design, development, and deployment of cognitive AI solutions, including advanced speech and text analytics for healthcare applications. The role demands deep expertise in generative AI, classical ML, deep learning, cloud deployments, and agentic AI frameworks.

 

Key Responsibilities

AI Strategy & Solution Development

  • Define and develop AI-driven solutions for speech recognition, text processing, and conversational AI
  • Research and implement transformer-based models (Whisper, LLaMA, GPT, T5, BERT, etc.) for speech-to-text, medical summarization, and clinical documentation
  • Develop and integrate Agentic AI frameworks enabling multi-agent collaboration
  • Design scalable, reusable, and production-ready AI frameworks for speech and text analytics

Model Development & Optimization

  • Fine-tune, train, and optimize large-scale NLP and ASR models
  • Develop and optimize ML algorithms for speech, text, and structured healthcare data
  • Conduct rigorous testing and validation to ensure high clinical accuracy and performance
  • Continuously evaluate and enhance model efficiency and reliability

Cloud & MLOps Implementation

  • Architect and deploy AI models on AWS, Azure, or GCP
  • Deploy and manage models using containerization, Kubernetes, and serverless architectures
  • Design and implement robust MLOps strategies for lifecycle management

Integration & Compliance

  • Ensure compliance with healthcare standards such as HIPAA, HL7, and FHIR
  • Integrate AI systems with EHR/EMR platforms
  • Implement ethical AI practices, regulatory compliance, and bias mitigation techniques

Collaboration & Leadership

  • Work closely with business analysts, healthcare professionals, software engineers, and ML engineers
  • Implement LangChain, OpenAI APIs, vector databases (Pinecone, FAISS, Weaviate), and RAG architectures
  • Mentor and lead junior data scientists and engineers
  • Contribute to AI research, publications, patents, and long-term AI strategy
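The vector-database and RAG work above hinges on one core operation: ranking stored embeddings by similarity to a query embedding, usually cosine similarity. A toy sketch with hand-made three-dimensional vectors standing in for real embedding output:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, index, k=1):
    """Rank (doc_id, embedding) pairs by similarity, as FAISS/Pinecone/Weaviate
    would, just without the approximate-nearest-neighbor acceleration."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

index = [("discharge-note", [0.9, 0.1, 0.0]),
         ("lab-report",     [0.0, 0.2, 0.9])]
print(top_k([1.0, 0.0, 0.1], index))  # ['discharge-note']
```

Real embeddings have hundreds or thousands of dimensions and come from a model such as a sentence transformer; the vector DB's job is to make this ranking fast at scale.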

 

Required Skills & Competencies

  • Expertise in Machine Learning, Deep Learning, and Generative AI
  • Strong Python programming skills
  • Hands-on experience with PyTorch and TensorFlow
  • Experience fine-tuning transformer-based LLMs (GPT, BERT, T5, LLaMA, etc.)
  • Familiarity with ASR models (Whisper, Canary, wav2vec, DeepSpeech)
  • Experience with text embeddings and vector databases
  • Proficiency in cloud platforms (AWS, Azure, GCP)
  • Experience with LangChain, OpenAI APIs, and RAG architectures
  • Knowledge of agentic AI frameworks and reinforcement learning
  • Familiarity with Docker, Kubernetes, and MLOps best practices
  • Understanding of FHIR, HL7, HIPAA, and healthcare system integrations
  • Strong communication, collaboration, and mentoring skills

 

 

Read more
CLOUDSUFI
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 12 yrs
₹25L - ₹45L / yr
Artificial Intelligence (AI)
Generative AI
Large Language Models (LLM) tuning
Retrieval Augmented Generation (RAG)
Vertex
+2 more

About Us :


CLOUDSUFI, a Google Cloud Premier Partner, is a data science and product engineering organization building products and solutions for the technology and enterprise industries. We firmly believe in the power of data to transform businesses and drive better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.


Our Values :


We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.


Equal Opportunity Statement :


CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace.


Role : Lead AI/Senior Engineer-AI


Location : Noida, Delhi/NCR


Experience : 5- 12 years


Education : BTech / BE / MCA / MSc Computer Science


Must Haves :


Conversational AI & NLU :


- Advanced proficiency with Dialogflow CX


- Intent classification, entity extraction, conversation flow design


- Experience building structured dialogue flows with routing logic


- CCAI platform familiarity


Agentic AI & Multi-Step Reasoning :


- Production experience with Google ADK (or LangChain/LangGraph equivalent)


- Multi-step reasoning and tool orchestration capability


- Tool-use patterns and function calling implementation


RAG Systems & Knowledge Management :


- Hands-on Vertex AI RAG Engine experience (or equivalent)


- Semantic search, chunking strategies, retrieval optimization


- Document processing pipelines (PDF parsing, chunking)


LLM/GenAI & Prompt Engineering :


- Production experience with Gemini models


- Advanced prompt engineering for customer support


- Langfuse experience for prompt management


Google Cloud Platform & Vertex AI :


- Advanced Vertex AI proficiency (Generative AI APIs, Agent Engine)


- Cloud Functions and Cloud Run deployment experience


- BigQuery for conversation analytics


API Integration :


- Genesys Cloud CX integration experience


- REST API design and webhook implementation


- Enterprise authentication patterns (OAuth 2.0)


Good To Have :


Conversational AI & NLU :


- Multi-language support implementation (Spanish/English)


- Telephony integration (speech recognition, TTS, DTMF)


- Barge-in handling and voice optimization


Agentic AI :


- Agent state management and session persistence


- Advanced fallback strategies and error recovery


- Dynamic tool selection and evaluation


RAG Systems :


- Re-ranking and advanced retrieval quality metrics


- Query expansion and context-aware retrieval


- Corpus organization strategies


LLM/GenAI :


- Prompt versioning, A/B testing, iterative refinement


- Prompt injection mitigation strategies


- In-context learning, few-shot, chain-of-thought techniques


LLMOps & Observability :


- Vertex AI Evaluation Service experience


- Groundedness, relevance, coherence, safety metrics


- Trace-level debugging with Cloud Trace


- Centralized logging strategies


Google Cloud :


- Application Integration connectors


- VPC Service Controls and enterprise security


- Cloud Pub/Sub for event-driven systems


Enterprise Integration :


- Third-party AI agent orchestration (SAP Joule, ServiceNow AI, Agentforce)


- Salesforce, SAP, ServiceNow integration patterns


- Context passage strategies for escalations


Architecture & System Design :


- Configuration-driven systems (Meta-Agent patterns)


- Microservices and containerization


- Scalable, multi-tenant system design


- Disaster recovery and failover strategies


Product Quality & KPIs :


- Customer support metrics expertise (CSAT, SSR, escalation rate)


- A/B testing and experimentation frameworks


- User feedback loop implementation


Deliverables :


- Architecture Design : End-to-end platform architecture, data flow diagrams, Dialogflow CX vs. ADK routing decisions


- Conversational Flows : 15+ dialogue flows covering billing, networking, appointments, troubleshooting, and escalations


- ADK Agent Implementation : Complex reasoning agents for technical support, account analysis, and context preparation


- RAG Pipeline : Document processing, chunking configuration, corpus organization (product docs, support articles, policies, promotions)


- Prompt Management : System prompts, Langfuse setup, playbook governance, version control


- Quality Framework : Evaluation pipeline, metrics dashboards, automated assessment, continuous improvement recommendations


- Integration Layer : Genesys handoff, webhook integrations, Application Integration setup, session management


- Testing & Validation : Conversation flow tests, performance testing (latency, throughput, 1000 concurrent users), security validation


- Response time <2 seconds (p95), 99.9% uptime, 1000 concurrent conversations


- Data encryption (TLS 1.2+, AES-256 at rest), PII redaction, 1-year data retention


- Graceful degradation and fallback mechanisms
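A p95 target like the one above is checked against measured latencies; a small sketch of the nearest-rank percentile over a sample window (latency values are illustrative):

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: the value at or below which p% of samples fall."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

latencies_ms = [120, 180, 250, 300, 410, 95, 170, 220, 1900, 140]  # illustrative samples
p95 = percentile(latencies_ms, 95)
```

Note how a single slow outlier dominates p95 even when the median looks healthy — which is why SLOs are stated at high percentiles rather than averages.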

Digital transformation excellence provider


Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai
12 - 20 yrs
₹30L - ₹40L / yr
Product Management
Business-to-business
Analytics
Product engineering
Procurement management
+26 more

 JOB DETAILS:

* Job Title: Head of Engineering/Senior Product Manager

* Industry: Digital transformation excellence provider

* Salary: Best in Industry

* Experience: 12-20 years

* Location: Mumbai

 

Job Description

Role Overview

The VP / Head of Technology will lead company’s technology function across engineering, product development, cloud infrastructure, security, and AI-led initiatives. This role focuses on delivering scalable, high-quality technology solutions across company’s core verticals including eCommerce, Procurement & e-Sourcing, ERP integrations, Sustainability/ESG, and Business Services.

This leader will drive execution, ensure technical excellence, modernize platforms, and collaborate closely with business and delivery teams.

 

Roles and Responsibilities:

Technology Execution & Architecture Leadership

·        Own and execute the technology roadmap aligned with business goals.

·        Build and maintain scalable architecture supporting multiple verticals.

·        Enforce engineering best practices, code quality, performance, and security.

·        Lead platform modernization including microservices, cloud-native architecture, API-first systems, and integration frameworks.

 

Product & Engineering Delivery

·        Manage multi-product engineering teams across eCommerce platforms, procurement systems, ERP integrations, analytics, and ESG solutions.

·        Own the full SDLC — requirements, design, development, testing, deployment, support.

·        Implement Agile, DevOps, CI/CD for faster releases and improved reliability.

·        Oversee product/platform interoperability across all company systems.

 

Vertical-Specific Technology Leadership

Procurement Tech:

·        Lead architecture and enhancements of procurement and indirect spend platforms.

·        Ensure interoperability with SAP Ariba, Coupa, Oracle, MS Dynamics, etc.

 

eCommerce:

·        Drive development of scalable B2B/B2C commerce platforms, headless commerce, marketplace integrations, and personalization capabilities.

 

Sustainability/ESG:

·        Support development of GHG tracking, reporting systems, and sustainability analytics platforms.

 

Business Services:

·        Enhance operational platforms with automation, workflow management, dashboards, and AI-driven efficiency tools.

 

Data, Cloud, Security & Infrastructure

·        Own cloud infrastructure strategy (Azure/AWS/GCP).

·        Ensure adherence to compliance standards (SOC2, ISO 27001, GDPR).

·        Lead cybersecurity policies, monitoring, threat detection, and recovery planning.

·        Drive observability, cost optimization, and system scalability.

 

AI, Automation & Innovation

·        Integrate AI/ML, analytics, and automation into product platforms and service delivery.

·        Build frameworks for workflow automation, supplier analytics, personalization, and operational efficiency.

·        Lead R&D for emerging tech aligned to business needs.

 

Leadership & Team Management

·        Lead and mentor engineering managers, architects, developers, QA, and DevOps.

·        Drive a culture of ownership, innovation, continuous learning, and performance accountability.

·        Build capability development frameworks and internal talent pipelines.

 

Stakeholder Collaboration

·        Partner with Sales, Delivery, Product, and Business Teams to align technology outcomes with customer needs.

·        Ensure transparent reporting on project status, risks, and technology KPIs.

·        Manage vendor relationships, technology partnerships, and external consultants.

 

Education, Training, Skills, and Experience Requirements:

Experience & Background

·        16+ years in technology execution roles, including 5–7 years in senior leadership.

·        Strong background in multi-product engineering for B2B platforms or enterprise systems.

·        Proven delivery experience across: eCommerce, ERP integrations, procurement platforms, ESG solutions, and automation.

 

Technical Skills

·        Expertise in cloud platforms (Azure/AWS/GCP), microservices architecture, API frameworks.

·        Strong grasp of procurement tech, ERP integrations, eCommerce platforms, and enterprise-scale systems.

·        Hands-on exposure to AI/ML, automation tools, data engineering, and analytics stacks.

·        Strong understanding of security, compliance, scalability, performance engineering.

 

Leadership Competencies

·        Execution-focused technology leadership.

·        Strong communication and stakeholder management skills.

·        Ability to lead distributed teams, manage complexity, and drive measurable outcomes.

·        Innovation mindset with practical implementation capability.

 

Education

·        Bachelor’s or Master’s in Computer Science/Engineering or equivalent.

·        Additional leadership education (MBA or similar) is a plus, not mandatory.

 

Travel Requirements

·        Occasional travel for client meetings, technology reviews, or global delivery coordination.

 

Must-Haves

·        10+ years of technology experience, with at least 6 years leading large (50-100+) multi-product engineering teams.

·        Must have worked on B2B Platforms. Experience in Procurement Tech or Supply Chain

·        Min. 10+ Years of Expertise in Cloud-Native Architecture, Expert-level design in Azure, AWS, or GCP using Microservices, Kubernetes (K8s), and Docker.

·        Min. 8+ Years of Expertise in Modern Engineering Practices, Advanced DevOps, CI/CD pipelines, and automated testing frameworks (Selenium, Cypress, etc.).

·        Hands-on leadership experience in Security & Compliance.

·        Min. 3+ Years of Expertise in AI & Data Engineering, Practical implementation of LLMs, Predictive Analytics, or AI-driven automation

·        Strong technology execution leadership, with ownership of end-to-end technology roadmaps aligned to business outcomes.

·        Min. 6+ Years of Expertise in B2B eCommerce logic: architecture of headless commerce, marketplace integrations, and complex B2B catalog management.

·        Strong product management exposure

·        Proven experience in leading end-to-end team operations

·        Relevant experience in product-driven organizations or platforms

·        Strong Subject Matter Expertise (SME)

 

Education: Master's degree.

 

**************

Joining time / Notice Period: Immediate to 45 days.

Location: Andheri

5 days working (hybrid: 3 days in office, 2 days from home)

Virtana

Posted by Krutika Devadiga
Pune
5 - 10 yrs
Best in industry
Python
Kubernetes
Docker
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+3 more

Role Overview:

Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.


We are seeking an individual with knowledge in Systems Management and/or Systems Monitoring Software and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.


Work Location: Pune/ Chennai


Job Type: Hybrid


Role Responsibilities:

  • The engineer will be primarily responsible for design and development of software solutions for the Virtana Platform
  • Partner and work closely with team leads, architects and engineering managers to design and implement new integrations and solutions for the Virtana Platform.
  • Communicate effectively with people having differing levels of technical knowledge.
  • Work closely with Quality Assurance and DevOps teams assisting with functional and system testing design and deployment
  • Provide customers with complex application support, problem diagnosis and problem resolution

 

Required Qualifications:

  • Minimum of 4 years of experience in a web-application-centric client-server development environment focused on Systems Management, Systems Monitoring, and Performance Management software.
  • Able to understand integrated infrastructure platforms, with experience working with one or more data collection technologies such as SNMP, REST, OTEL, WMI, or WBEM.
  • Minimum of 4 years of development experience with a high-level language such as Python, Java, or Go is required.
  • Bachelor's (B.E, B.Tech) or Master's degree (M.E, M.Tech, MCA) in Computer Science, Computer Engineering, or equivalent
  • 2 years of development experience in a public cloud environment (Google Cloud and/or AWS) using Kubernetes etc.

 

Desired Qualifications:

  • Prior experience with other virtualization platforms like OpenShift is a plus
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
  • Demonstrated ability as a strong technical engineer who can design and code with strong communication skills
  • Firsthand development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
  • Ability to use a variety of debugging tools, simulators and test harnesses is a plus

 

About Virtana:

Virtana delivers the industry's broadest and deepest observability platform, which allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana's software solutions for over a decade.

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) software market is ripe for disruption, and Virtana is uniquely positioned for success.

Trential Technologies

Posted by Garima Jangid
Gurugram
5 - 8 yrs
₹30L - ₹45L / yr
NodeJS (Node.js)
JavaScript
RabbitMQ
Apache Kafka
Redis
+14 more

About us:

Trential is engineering the future of digital identity with W3C Verifiable Credentials—secure, decentralized, privacy-first. We make identity and credentials verifiable anywhere, instantly.


We are looking for a Team lead to architect, build, and scale high-performance web applications that power our core products. You will lead the full development lifecycle—from system design to deployment—while mentoring the team and driving best engineering practices across frontend and backend stacks.


 Design & Implement: Lead the design, implementation and management of Trential products.

 Lead by example: Be the most senior and impactful engineer on the team, setting the technical bar through your direct contributions.

 Code Quality & Best Practices: Enforce high standards for code quality, security, and performance through rigorous code reviews, automated testing, and continuous delivery pipelines.

 Standards Adherence: Ensure all solutions comply with relevant open standards like W3C Verifiable Credentials (VCs), Decentralized Identifiers (DIDs) & Privacy Laws, maintaining global interoperability.

 Continuous Improvement: Lead the charge to continuously evaluate and improve the products & processes. Instill a culture of metrics-driven process improvement to boost team efficiency and product quality.

 Cross-Functional Collaboration: Work closely with the Co-Founders & Product Team to translate business requirements and market needs into clear, actionable technical specifications and stories. Represent Trential in interactions with external stakeholders for integrations.


What we're looking for:

 Experience: 5+ years of experience in software development, with at least 2 years as a Technical Lead.

 Technical Depth: Deep proficiency in JavaScript and experience in building and operating distributed, fault-tolerant systems.

 Cloud & Infrastructure: Hands-on experience with cloud platforms (AWS & GCP) and modern DevOps practices (e.g., CI/CD, Infrastructure as Code, Docker).

 Databases: Strong knowledge of SQL/NoSQL databases and data modeling for high-throughput, secure applications.


Preferred Qualifications (Nice to Have)

 Identity & Credentials: Knowledge of decentralized identity principles, Verifiable Credentials (W3C VCs), DIDs, and relevant protocols (e.g., OpenID4VC, DIDComm)

 Familiarity with data privacy and security standards (GDPR, SOC 2, ISO 27001) and designing systems complying to these laws.

 Experience integrating AI/ML models into verification or data extraction workflows

Remote only
3 - 8 yrs
₹20L - ₹30L / yr
ETL
Google Cloud Platform (GCP)
Python
Pipeline management
BigQuery

About Us:


CLOUDSUFI, a Google Cloud Premier Partner, is a global leading provider of data-driven digital transformation across cloud-based enterprises. With a global presence and focus on Software & Platforms, Life sciences and Healthcare, Retail, CPG, financial services, and supply chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.


Job Summary:


We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Google Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.


Key Responsibilities:


  • ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Cloud Platform services (Cloud Run, Dataflow) and Python.
  • Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.
  • Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.
  • Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards. 
  • API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.
  • Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.
  • Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.
  • Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.
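The entity resolution and ID generation responsibility above is often handled with a deterministic hash-based scheme, so re-ingesting the same entity always mints the same identifier. A sketch — the normalization rules and `ent` prefix are illustrative, not the project's actual convention:

```python
import hashlib

def entity_id(name: str, entity_type: str, prefix: str = "ent") -> str:
    """Mint a stable ID from normalized entity attributes, so that the same
    entity seen twice (with whitespace/case noise) yields the same identifier."""
    normalized = f"{entity_type}|{name.strip().lower()}"
    digest = hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:12]
    return f"{prefix}/{entity_type}/{digest}"

a = entity_id("  Acme Corp ", "organization")
b = entity_id("acme corp", "organization")
```

Matching against existing IDs would then be a lookup on the normalized key before minting a new one.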

Qualifications and Skills:


  • Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.
  • Experience: 3+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.
  • Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.
  • Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.


Core Competencies:


  • Must Have - SQL, Python, BigQuery, (GCP DataFlow / Apache Beam), Google Cloud Storage (GCS)
  • Must Have - Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)
  • Secondary Skills - SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling
  • Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).
  • Experience with data validation techniques and tools.
  • Familiarity with CI/CD practices and the ability to work in an Agile framework.
  • Strong problem-solving skills and keen attention to detail.
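The Schema.org and JSON-LD modeling mentioned above typically surfaces as small typed records; a minimal example built and serialized in Python (the dataset name and variable are illustrative placeholders):

```python
import json

# A minimal Schema.org Dataset record expressed as JSON-LD.
record = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Illustrative population statistics",
    "variableMeasured": "Count_Person",  # statistical-variable-style name, illustrative
}
doc = json.dumps(record, indent=2)
```

The `@context`/`@type` keys are what let downstream knowledge-graph tooling interpret the record against the schema.org vocabulary.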


Preferred Qualifications:


  • Experience with LLM-based tools or concepts for data automation (e.g., auto-schematization).
  • Familiarity with similar large-scale public dataset integration initiatives.
  • Experience with multilingual data integration.
Remote only
6 - 10 yrs
₹15L - ₹25L / yr
Cloud Architect
data Architect
Data Analytics
Google Cloud Platform (GCP)
Apache Kafka

Role Summary

Provide architectural leadership across a large-scale, multi-cloud data ecosystem, helping design and guide scalable, future-ready data platforms.


Key Responsibilities

• Design and review data architectures across Google Cloud and Microsoft Azure (multi-cloud).

• Guide decisions around data platforms, pipelines, streaming, and integration patterns.

• Advise on abstraction layers, APIs, messaging/streaming (Kafka, MQ), and system interoperability.

• Partner with engineering teams to ensure designs are practical and executable.


Key Skills

• Deep experience with large-scale data platforms and distributed systems.

• Strong background in multi-cloud architectures (GCP + Azure).

• Expertise in data pipelines, streaming, and enterprise integration.

Technology Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 8 yrs
₹38L - ₹50L / yr
Java
Spring Boot
CI/CD
Spring
Microservices
+16 more

Job Details

- Job Title: SDE-3

Industry: Technology

Domain - Information technology (IT)

Experience Required: 5-8 years

Employment Type: Full Time

Job Location: Bengaluru

CTC Range: Best in Industry

 

Role & Responsibilities

As a Software Development Engineer - 3, Backend Engineer at company, you will play a critical role in architecting, designing, and delivering robust backend systems that power our platform. You will lead by example, driving technical excellence and mentoring peers while solving complex engineering problems. This position offers the opportunity to work with a highly motivated team in a fast-paced and innovative environment.

 

Key Responsibilities:

Technical Leadership-

  • Design and develop highly scalable, fault-tolerant, and maintainable backend systems using Java and related frameworks.
  • Provide technical guidance and mentorship to junior developers, fostering a culture of learning and growth.
  • Review code and ensure adherence to best practices, coding standards, and security guidelines.

System Architecture and Design-

  • Collaborate with cross-functional teams, including product managers and frontend engineers, to translate business requirements into efficient technical solutions.
  • Own the architecture of core modules and contribute to overall platform scalability and reliability.
  • Advocate for and implement microservices architecture, ensuring modularity and reusability.

Problem Solving and Optimization-

  • Analyze and resolve complex system issues, ensuring high availability and performance of the platform.
  • Optimize database queries and design scalable data storage solutions.
  • Implement robust logging, monitoring, and alerting systems to proactively identify and mitigate issues.

Innovation and Continuous Improvement-

  • Stay updated on emerging backend technologies and incorporate relevant advancements into our systems.
  • Identify and drive initiatives to improve codebase quality, deployment processes, and team productivity.
  • Contribute to and advocate for a DevOps culture, supporting CI/CD pipelines and automated testing.

Collaboration and Communication-

  • Act as a liaison between the backend team and other technical and non-technical teams, ensuring smooth communication and alignment.
  • Document system designs, APIs, and workflows to maintain clarity and knowledge transfer across the team.

 

Ideal Candidate

  • Strong Java Backend Engineer.
  • Must have 5+ years of backend development with strong focus on Java (Spring / Spring Boot)
  • Must have been SDE-2 for at least 2.5 years
  • Hands-on experience with RESTful APIs and microservices architecture
  • Strong understanding of distributed systems, multithreading, and async programming
  • Experience with relational and NoSQL databases
  • Exposure to Kafka/RabbitMQ and Redis/Memcached
  • Experience with AWS / GCP / Azure, Docker, and Kubernetes
  • Familiar with CI/CD pipelines and modern DevOps practices
  • Product companies (B2B SAAS preferred)
  • Must have stayed for at least 2 years with each of the previous companies
  • (Education): B.Tech in computer science from Tier 1, Tier 2 colleges


Technology Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
9 - 12 yrs
₹50L - ₹70L / yr
Java
Microservices
CI/CD
MySQL
MySQL DBA
+9 more

Job Details

- Job Title: Staff Engineer

Industry: Technology

Domain - Information technology (IT)

Experience Required: 9-12 years

Employment Type: Full Time

Job Location: Bengaluru

CTC Range: Best in Industry

 

Role & Responsibilities

As a Staff Engineer at company, you will play a critical role in defining and driving our backend architecture as we scale globally. You’ll own key systems that handle high volumes of data and transactions, ensuring performance, reliability, and maintainability across distributed environments.

 

Key Responsibilities-

  • Own one or more core applications end-to-end, ensuring reliability, performance, and scalability.
  • Lead the design, architecture, and development of complex, distributed systems, frameworks, and libraries aligned with company’s technical strategy.
  • Drive engineering operational excellence by defining robust roadmaps for system reliability, observability, and performance improvements.
  • Analyze and optimize existing systems for latency, throughput, and efficiency, ensuring they perform at scale.
  • Collaborate cross-functionally with Product, Data, and Infrastructure teams to translate business requirements into technical deliverables.
  • Mentor and guide engineers, fostering a culture of technical excellence, ownership, and continuous learning.
  • Establish and uphold coding standards, conduct design and code reviews, and promote best practices across teams.
  • Stay ahead of the curve on emerging technologies, frameworks, and patterns to strengthen company’s technology foundation.
  • Contribute to hiring by identifying and attracting top-tier engineering talent.

 

Ideal Candidate

  • Strong staff engineer profile
  • Must have 9+ years in backend engineering with Java, Spring/Spring Boot, and microservices, building large, scalable systems
  • Must have been SDE-3 / Tech Lead / Lead SE for at least 2.5 years
  • Strong in DSA, system design, design patterns, and problem-solving
  • Proven experience building scalable, reliable, high-performance distributed systems
  • Hands-on with SQL/NoSQL databases, REST/gRPC APIs, concurrency & async processing
  • Experience in AWS/GCP, CI/CD pipelines, and observability/monitoring
  • Excellent ability to explain complex technical concepts to varied stakeholders
  • Product companies (B2B SAAS preferred)
  • Must have stayed for at least 2 years with each of the previous companies
  • (Education): B.Tech in computer science from Tier 1, Tier 2 colleges


Global digital transformation solutions provider


Agency job
via Peak Hire Solutions by Dhara Thakkar
Trivandrum, Kochi (Cochin), Chennai, Thiruvananthapuram
5 - 7 yrs
₹19L - ₹28L / yr
Java
Spring Boot
Microservices
Architecture
Google Cloud Platform (GCP)
+22 more

Job Details

- Job Title: Lead I - Software Engineering - Java, Spring Boot, Microservices

- Industry: Global digital transformation solutions provider

- Domain - Information technology (IT)

- Experience Required: 5-7 years

- Employment Type: Full Time

- Job Location: Trivandrum, Chennai, Kochi, Thiruvananthapuram

- CTC Range: Best in Industry

 

Job Description

Job Title: Senior Java Developer Experience: 5+ years

Job Summary:

We are looking for a Senior Java Developer with strong experience in Spring Boot and Microservices to work on high-performance applications for a leading financial services client. The ideal candidate will have deep expertise in Java backend development, cloud (preferably GCP), and strong problem-solving abilities.

 

Key Responsibilities:

• Develop and maintain Java-based microservices using Spring Boot

• Collaborate with Product Owners and teams to gather and review requirements

• Participate in design reviews, code reviews, and unit testing

• Ensure application performance, scalability, and security

• Contribute to solution architecture and design documentation

• Support Agile development processes including daily stand-ups and sprint planning

• Mentor junior developers and lead small modules or features

 

Required Skills:

• Java, Spring Boot, Microservices architecture

• GCP (or other cloud platforms like AWS)

• REST/SOAP APIs, Hibernate, SQL, Tomcat

• CI/CD tools: Jenkins, Bitbucket

• Agile methodologies (Scrum/Kanban)

• Unit testing (JUnit), debugging and troubleshooting

• Good communication and team leadership skills

 

Preferred Skills:

• Frontend familiarity (Angular, AJAX)

• Experience with API documentation tools (Swagger)

• Understanding of design patterns and UML

• Exposure to Confluence, Jira

 

Mandatory Skills Required:

Strong proficiency in Java, Spring Boot, Microservices, GCP/AWS.

Experience Required: Minimum 5+ years of relevant experience

Java/J2EE (5+ years), Spring/Spring Boot (5+ years), Microservices (5+ years), AWS/GCP/Azure (mandatory), CI/CD (Jenkins, SonarQube, Git)

Java, Spring Boot, Microservices architecture

GCP (or other cloud platforms like AWS)

REST/SOAP APIs, Hibernate, SQL, Tomcat

CI/CD tools: Jenkins, Bitbucket

Agile methodologies (Scrum/Kanban)

Unit testing (JUnit), debugging and troubleshooting

Good communication and team leadership skills

 

******

Notice period - 0 to 15 days only (Immediate and who can join by Feb)

Job stability is mandatory

Location: Trivandrum, Kochi, Chennai

Virtual Interview - 14th Feb 2026

Global digital transformation solutions provider


Agency job
via Peak Hire Solutions by Dhara Thakkar
Kochi (Cochin), Trivandrum
4 - 6 yrs
₹11L - ₹17L / yr
Amazon Web Services (AWS)
Python
Data engineering
SQL
ETL
+22 more

JOB DETAILS:

* Job Title: Associate III - Data Engineering

* Industry: Global digital transformation solutions provider

* Salary: Best in Industry

* Experience: 4-6 years

* Location: Trivandrum, Kochi

Job Description

Job Title:

Data Services Engineer – AWS & Snowflake

 

Job Summary:

As a Data Services Engineer, you will be responsible for designing, developing, and maintaining robust data solutions using AWS cloud services and Snowflake.

You will work closely with cross-functional teams to ensure data is accessible, secure, and optimized for performance.

Your role will involve implementing scalable data pipelines, managing data integration, and supporting analytics initiatives.

 

Responsibilities:

• Design and implement scalable and secure data pipelines on AWS and Snowflake (Star/Snowflake schema).

• Optimize query performance using clustering keys, materialized views, and caching.

• Develop and maintain Snowflake data warehouses and data marts.

• Build and maintain ETL/ELT workflows using Snowflake-native features (Snowpipe, Streams, Tasks).

• Integrate Snowflake with cloud platforms (AWS, Azure, GCP) and third-party tools (Airflow, dbt, Informatica).

• Utilize Snowpark and Python/Java for complex transformations.

• Implement RBAC, data masking, and row-level security.

• Optimize data storage and retrieval for performance and cost-efficiency.

• Collaborate with stakeholders to gather data requirements and deliver solutions.

• Ensure data quality, governance, and compliance with industry standards.

• Monitor, troubleshoot, and resolve data pipeline and performance issues.

• Document data architecture, processes, and best practices.

• Support data migration and integration from various sources.
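
The ETL/ELT responsibilities above can be sketched in miniature. This is only an illustrative sketch, not the role's actual stack: `sqlite3` stands in for the Snowflake warehouse, and the table and column names (`stg_orders`, `mart_order_totals`) are invented for the example. In a real deployment, ingestion would be handled by Snowpipe and incremental processing by Streams and Tasks.

```python
# Toy stand-in for the transform/validate stage of an ELT pipeline.
# sqlite3 substitutes for the warehouse; in a real Snowflake setup,
# loading would come from Snowpipe and change capture from Streams/Tasks.
import sqlite3

def run_elt(rows):
    """Load raw rows into a staging table, enforce a simple
    data-quality rule, and transform into a mart table."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE stg_orders (order_id INTEGER, amount REAL)")
    con.executemany("INSERT INTO stg_orders VALUES (?, ?)", rows)

    # Data-quality gate: fail fast if staging data violates a rule.
    bad = con.execute(
        "SELECT COUNT(*) FROM stg_orders WHERE amount < 0"
    ).fetchone()[0]
    if bad:
        raise ValueError(f"{bad} rows failed the non-negative amount check")

    # ELT-style transform: aggregate inside the database, not in Python.
    con.execute(
        "CREATE TABLE mart_order_totals AS "
        "SELECT order_id, SUM(amount) AS total "
        "FROM stg_orders GROUP BY order_id"
    )
    return dict(con.execute("SELECT order_id, total FROM mart_order_totals"))

totals = run_elt([(1, 10.0), (1, 5.0), (2, 7.5)])
print(totals)  # {1: 15.0, 2: 7.5}
```

Pushing the aggregation into SQL rather than Python mirrors the ELT pattern the listing describes: the warehouse does the heavy lifting, and application code only orchestrates and validates.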

 

Qualifications:

• Bachelor’s degree in Computer Science, Information Technology, or a related field.

• 3 to 4 years of hands-on experience in data engineering or data services.

• Proven experience with AWS data services (e.g., S3, Glue, Redshift, Lambda).

• Strong expertise in Snowflake architecture, development, and optimization.

• Proficiency in SQL and Python for data manipulation and scripting.

• Solid understanding of ETL/ELT processes and data modeling.

• Experience with data integration tools and orchestration frameworks.

• Excellent analytical, problem-solving, and communication skills.

 

Preferred Skills:

• AWS Glue, AWS Lambda, Amazon Redshift

• Snowflake Data Warehouse

• SQL & Python

 

Skills: AWS Lambda, AWS Glue, Amazon Redshift, Snowflake Data Warehouse

 

Must-Haves

AWS data services (4-6 years), Snowflake architecture (4-6 years), SQL (proficient), Python (proficient), ETL/ELT processes (solid understanding)

Skills: AWS, AWS Lambda, Snowflake, Data engineering, Snowpipe, Data integration tools, orchestration frameworks

Relevant experience: 4 - 6 years

Python is mandatory

 

******

Notice period - 0 to 15 days only (February joiners’ profiles only)

Location: Kochi

F2F Interview 7th Feb

 

 

MNC


Agency job
via rekha by Rekja Gorle
Mumbai
5 - 10 yrs
₹10L - ₹25L / yr
Windows Azure
DevOps
Microsoft Windows Azure
skill iconKubernetes
Google Cloud Platform (GCP)
+6 more

We are hiring a Senior DevOps Engineer (5–10 years experience) with strong hands-on expertise in AWS, CI/CD, Docker, Kubernetes, and Linux. The role involves designing, automating, and managing scalable cloud infrastructure and deployment pipelines. Experience with Terraform/Ansible, monitoring tools, and security best practices is required. Immediate joiners preferred.
