
50+ Google Cloud Platform (GCP) Jobs in India

Apply to 50+ Google Cloud Platform (GCP) Jobs on CutShort.io. Find your next job, effortlessly. Browse Google Cloud Platform (GCP) Jobs and apply today!

Service Co

Agency job
via Vikash Technologies by Rishika Teja
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 8 yrs
₹20L - ₹30L / yr
Python
Java
Scala
ETL
Google Cloud Platform (GCP)
+3 more

Hiring for Data Engineer


Exp: 3 - 8 yrs

Edu: Any graduate

Work Location: Noida (WFO)


Skills :


3+ years of experience building data platforms, data engineering solutions, and data architecture.


Strong programming skills in Python, Java, or Scala. Experience with cloud platforms (GCP preferred) and big data technologies (Hadoop, BigQuery, etc.).


Proven experience in designing and building data migration platforms, including planning, execution, and validation of data migrations.


Proficiency in SQL and experience with data modeling, ETL processes, and data warehousing solutions.


Knowledge of popular data migration tools, ETL technologies, and frameworks (Airflow, Apache Beam, KNIME, etc.).
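The extract-transform-load work this role describes can be sketched in miniature. The records, field names, and cleaning rules below are illustrative assumptions, not part of any listed project:

```python
# Minimal ETL sketch: extract raw records, normalize and de-duplicate
# them, then load into a stand-in "warehouse" list.

def extract():
    # In a real pipeline this would read from a database, API, or file.
    return [
        {"id": 1, "name": " Alice ", "amount": "120.50"},
        {"id": 2, "name": "Bob", "amount": "80.00"},
        {"id": 2, "name": "Bob", "amount": "80.00"},  # duplicate row
    ]

def transform(rows):
    # Normalize strings, cast types, and de-duplicate on primary key.
    seen, clean = set(), []
    for row in rows:
        if row["id"] in seen:
            continue
        seen.add(row["id"])
        clean.append({"id": row["id"],
                      "name": row["name"].strip(),
                      "amount": float(row["amount"])})
    return clean

def load(rows, warehouse):
    # Stand-in for an INSERT into a warehouse table.
    warehouse.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
```

Orchestrators such as Airflow or Beam schedule and scale exactly this extract/transform/load shape across tasks and workers.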




Remote only
3 - 15 yrs
₹8L - ₹12L / yr
FastAPI
Python
RESTful APIs
SQL
NoSQL Databases
+5 more

Summary:

We are seeking a highly skilled Python Backend Developer with proven expertise in FastAPI to join our team as a full-time contractor for 12 months. The ideal candidate will have 5+ years of experience in backend development, a strong understanding of API design, and the ability to deliver scalable, secure solutions. Knowledge of front-end technologies is an added advantage. Immediate joiners are preferred. This role requires full-time commitment—please apply only if you are not engaged in other projects.

Job Type:

Full-Time Contractor (12 months)

Location:

Remote

Experience:

3+ years in backend development

Key Responsibilities:

  • Design, develop, and maintain robust backend services using Python and FastAPI.
  •  Implement and manage Prisma ORM for database operations.
  • Build scalable APIs and integrate with SQL databases and third-party services.
  • Deploy and manage backend services using Azure Function Apps and Microsoft Azure Cloud.
  • Collaborate with front-end developers and other team members to deliver high-quality web applications.
  • Ensure application performance, security, and reliability.
  • Participate in code reviews, testing, and deployment processes.
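The RESTful API design this role centers on follows one pattern: each (HTTP method, resource path) pair maps to a handler. A framework-agnostic sketch of that mapping, which FastAPI expresses as decorators; the "items" resource and its handlers are hypothetical:

```python
# Dispatch table sketch of RESTful routing: method + path -> handler.
# FastAPI builds this table from @app.get(...) decorators.

ITEMS = {1: {"id": 1, "name": "widget"}}

def list_items():
    return 200, list(ITEMS.values())

def get_item(item_id):
    item = ITEMS.get(item_id)
    return (200, item) if item else (404, {"error": "not found"})

ROUTES = {
    ("GET", "/items"): list_items,
    ("GET", "/items/{id}"): get_item,
}

def dispatch(method, path):
    # Match a concrete path such as /items/1 against the route table.
    if (method, path) in ROUTES:
        return ROUTES[(method, path)]()
    head, _, tail = path.rpartition("/")
    if tail.isdigit() and (method, head + "/{id}") in ROUTES:
        return ROUTES[(method, head + "/{id}")](int(tail))
    return 405, {"error": "no route"}
```

FastAPI adds type-validated parameters, async handlers, and OpenAPI docs on top of this same structure.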

Required Skills:

  • Expertise in Python backend development with strong experience in FastAPI.
  • Solid understanding of RESTful API design and implementation.
  • Proficiency in SQL databases and ORM tools (preferably Prisma).
  • Hands-on experience with Microsoft Azure Cloud and Azure Function Apps.
  • Familiarity with CI/CD pipelines and containerization (Docker).
  • Knowledge of cloud architecture best practices.

Added Advantage:

  • Front-end development knowledge (React, Angular, or similar frameworks).
  • Exposure to AWS/GCP cloud platforms.
  • Experience with NoSQL databases.

Eligibility:

  • Minimum 3 years of professional experience in backend development.
  • Available for full-time engagement.
  • Please do not apply if you are currently engaged in other projects; we require dedicated availability.


Techjays

Posted by Sri Krishna Thangamani
Coimbatore
3 - 5 yrs
₹7L - ₹11L / yr
Large Language Models (LLM)
Retrieval Augmented Generation (RAG)
Vector database
Design patterns
Python
+9 more

Expected Date of Joining: Immediate / 30 days


What makes Techjays an inspiring place to work

At Techjays, we are helping companies reimagine how they build, operate, and scale with AI at the core.

We operate as part of the 1% of companies globally that can truly leverage AI the right way: not just as experimentation, but as secure, scalable, production-grade systems that drive measurable business outcomes.

Our strength lies in combining deep backend engineering with AI system design, building AI-native platforms, intelligent workflows, and cloud architectures that are reliable, observable, and enterprise-ready. Our team includes engineers and leaders who have built and scaled products at global technology organizations such as Google, Akamai, NetApp, ADP, Cognizant Consulting, and Capgemini. Today, we function as a high-agency, execution-focused team building advanced AI systems for global clients.

We are looking for a strong backend engineer who can design and build secure, scalable Python systems that power AI-native applications. You will work on AI-enabled platforms, production systems, and scalable backend services that support LLM integrations, RAG pipelines, and intelligent workflows.

Years of Experience: 3 - 5 years

Location: Coimbatore

Key Skills:

● Backend Development (familiar): Python, Django/Flask, RESTful APIs, WebSockets
● Cloud Technologies (familiar): AWS (EC2, S3, Lambda), GCP (Compute Engine, Cloud Storage, Cloud Functions), CI/CD pipelines with Jenkins, GitLab CI, or GitHub Actions
● Databases (familiar): PostgreSQL, MySQL, MongoDB
● AI/ML (familiar): basic understanding of machine learning concepts; assist in building and integrating agentic AI workflows; familiarity with RAG and vector databases (Pinecone, ChromaDB, or others)
● Tools: Git, Docker, Linux

Roles and Responsibilities:

● Develop and maintain backend services using Python and Django/Flask under guidance.
● Assist in building scalable and secure APIs and backend systems for AI-driven applications.
● Write clean, efficient, and maintainable code following best practices.
● Collaborate with cross-functional teams including frontend developers, data scientists, and product teams.
● Participate in code reviews, debugging, and performance optimization.
● Support integration of AI/ML components such as LLMs and RAG pipelines.
● Continuously learn and improve technical skills in backend and AI technologies.
What We’re Looking for Beyond Skills:

● Builder mindset — you think in systems, not just tickets
● Ownership — you take features from idea to production
● Structured thinking in ambiguous environments
● Clear communication and collaborative approach
● Ability to work in a fast-paced, evolving startup environment

What We Offer:

● Competitive compensation
● Paid holidays & flexible time off
● Medical insurance (self & family, up to ₹4 lakhs per person)
● Opportunity to work on production-grade AI systems
● Exposure to global clients and high-impact projects
● A culture that values clarity, integrity, and continuous growth

If you want to build AI-native systems that are used in the real world, not just prototypes, Techjays is the place to do it.
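The RAG pipelines and vector databases named in this listing share one retrieval step: embed documents, rank them against an embedded query, and hand the top matches to an LLM. A toy sketch using bag-of-words counts and cosine similarity; real systems substitute learned embeddings and a vector store such as Pinecone or ChromaDB, and the documents below are invented:

```python
# Toy retrieval step of a RAG pipeline: bag-of-words "embeddings" and
# cosine similarity stand in for learned embeddings and a vector DB.
import math
from collections import Counter

DOCS = [
    "GCP Cloud Functions run event-driven code",
    "PostgreSQL is a relational database",
    "Pinecone is a managed vector database",
]

def embed(text):
    # A Counter of lowercase tokens acts as a sparse vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# The retrieved passages would then be prepended to the LLM prompt.
top = retrieve("which vector database is managed")
```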

Redtring

Posted by Keshav Senthil
Hyderabad
1 - 3 yrs
₹8L - ₹12L / yr
Kotlin
Java
Spring Boot
React.js
Amazon Web Services (AWS)
+6 more

Software Engineer (Backend) – Kotlin & React

About Us

We are a high-agency startup building elegant technological solutions to real-world problems.

Our mission is to build world-class systems from scratch that are lean, fast, and intelligent. We are currently operating in stealth mode, developing deeply technical products involving Kotlin, React, Azure, AWS, GCP, Google Maps integrations, and algorithmically intensive backends.

We are building a team of builders — not ticket takers. If you want to design systems, make real decisions, and own your work end-to-end, this is the place for you.

Role Overview

As a Software Engineer, you will take full ownership of building and scaling critical product systems. You will work directly with the founding team to transform complex real-world problems into scalable technical solutions.

This role is ideal for engineers who enjoy thinking deeply about systems, writing clean code, and building products from 0 → 1.

Key Responsibilities

System Development & Architecture

  • Design, develop, and maintain scalable backend services, primarily using Kotlin or JVM-based languages (Java/Scala).
  • Architect systems that are robust, high-performance, and production-ready.
  • Apply strong data structures, algorithms, and system design principles to solve complex engineering challenges.

Full Stack Development

  • Build fast, maintainable front-end applications using React.
  • Ensure seamless integration between frontend systems and backend services.

Cloud Infrastructure

  • Design and manage cloud architecture using AWS, Azure, and/or Google Cloud Platform (GCP).
  • Implement scalable deployment pipelines, monitoring, and infrastructure optimization.

Product & Technical Collaboration

  • Work closely with founders and product stakeholders to translate business problems into technical solutions.
  • Contribute actively to product and engineering roadmap decisions.

Performance Optimization

  • Continuously improve system performance, scalability, and reliability.
  • Implement efficient algorithms and system optimizations to gain a technical advantage.

Engineering Excellence

  • Write clean, well-tested, and maintainable code.
  • Maintain strong engineering standards across the codebase.

Required Skills & Qualifications

We value capability and ownership over years of experience. Whether you have 10 years of experience or none, what matters is your ability to build and solve hard problems.

Core Requirements

  • Strong computer science fundamentals (Data Structures, Algorithms, System Design).
  • Experience with Kotlin or JVM languages such as Java or Scala.
  • Experience building modern React applications.
  • Hands-on experience with cloud platforms (AWS / Azure / GCP).
  • Experience designing and deploying scalable distributed systems.
  • Strong problem-solving and analytical thinking.

Preferred / Bonus Skills

  • Experience with Google Maps APIs or geospatial integrations.
  • Prior startup experience.
  • Contributions to open-source projects.
  • Personal side projects demonstrating strong engineering ability.
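Geospatial integrations of the kind listed above usually start from great-circle distance between two coordinates. A standard haversine implementation; the sample coordinates (roughly Hyderabad and Bengaluru) are illustrative:

```python
# Great-circle distance via the haversine formula, a common building
# block for proximity search and routing features.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hyderabad to Bengaluru, roughly 500 km as the crow flies.
d = haversine_km(17.385, 78.4867, 12.9716, 77.5946)
```

The Google Maps Distance Matrix APIs return road distance instead; haversine is the cheap first approximation used for filtering candidates before calling them.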

Ideal Candidate

You will thrive in this role if you:

  • Take ownership of problems, not just tasks.
  • Are comfortable working in high-ambiguity environments.
  • Have a builder mindset and enjoy creating systems from scratch.
  • Learn quickly and execute with speed and precision.

This Role May Not Be For You If

  • You prefer strict task assignments and detailed specifications before starting work.
  • You want to focus only on coding tickets without product involvement.
  • You prefer large teams with multiple layers of management.

Why Join Us

  • Build 0 → 1 products with massive ownership.
  • Work in a flat organization with no unnecessary hierarchy.
  • Collaborate directly with founders and core product builders.
  • Your contributions will have immediate and visible impact.
  • Flexible remote work environment.
  • Opportunity to shape the technology, culture, and future of the company.

If you are passionate about building powerful systems, solving complex problems, and owning your work, we would love to hear from you.


https://loopx.redstring.co.in/form/69e4c676e8ea0f6af4ed066e

Quantiphi

Posted by Nikita Sinha
Bengaluru (Bangalore)
7 - 12 yrs
Best in industry
Google Cloud Platform (GCP)
Leadership
DevOps
CI/CD
Kubernetes

Must have skills:

● Experience: 6+ years of hands-on experience in Cloud Platform Engineering, DevOps, or Site Reliability Engineering (SRE).

● Multi-Cloud Infrastructure: Proficiency in architecting, deploying, and maintaining cloud infrastructure across GCP and Azure (VPC, IAM, Cloud Storage/Blob, Cloud Run/Functions, Pub/Sub, GKE/AKS, Cloud SQL).

● Container Orchestration: Extensive experience with Kubernetes (GKE or AKS) and Docker for managing and scaling containerized applications.

● Infrastructure as Code (IaC) & Automation: Strong proficiency using Terraform along with Python and Bash/Shell scripting for infrastructure automation.

● CI/CD Automation: Experience building and managing CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI, or ArgoCD.

● Observability & Monitoring: Experience using tools such as Datadog, Prometheus, Grafana, or Splunk for monitoring, logging, and alerting.

● Secrets & Security Management: Experience managing sensitive credentials using HashiCorp Vault, GCP Secret Manager, or Azure Key Vault.

● Architecture & Networking: Understanding of microservices architecture, service-oriented architecture, event-driven systems (Pub/Sub), and cloud networking principles.


Good to have skills:

● AI/ML Infrastructure: Familiarity with infrastructure for ML workloads such as Vertex AI, Azure Machine Learning, GPU node pools, or Vector Databases.

● Advanced Kubernetes: Working knowledge of Kyverno for policy management, Karpenter for cluster autoscaling, or building Kubernetes operators using Go.

● Multi-Cloud Management: Familiarity with Crossplane for managing multi-cloud environments and building cloud-native platforms.

● Cloud Reliability & FinOps: Understanding of disaster recovery, fault tolerance, and cost allocation practices through resource tagging.

● Domain & Compliance: Experience working in regulated environments such as BFSI or Insurance.

Quantiphi

Posted by Nikita Sinha
Bengaluru (Bangalore)
10 - 15 yrs
Best in industry
MLOps
Google Cloud Platform (GCP)
Data Warehousing
ETL
Artificial Intelligence (AI)


We are looking for a highly experienced and technically strong AI Engineering Manager to lead and mentor a team of Machine Learning and AI Engineers. This role will drive the execution, delivery, and operational excellence of enterprise AI/ML products and platforms within the Life Insurance and Financial Services sector.

The role operates primarily in a GCP cloud environment and requires bridging architectural design with hands-on engineering execution. The candidate will manage engineering workloads, guide technical decisions, ensure high code quality, and drive project timelines for enterprise-grade AI solutions.

Key Responsibilities

Team Leadership & Project Management

  • Own end-to-end delivery of AI/ML projects including sprint planning, backlog grooming, workload management, and resource allocation.
  • Provide technical guidance and mentorship to AI/ML engineers, including code reviews and best practices for MLOps, software engineering, and cloud infrastructure.
  • Act as the primary technical point of contact for product managers, architects, and business stakeholders.
  • Define and enforce engineering standards for development, testing, CI/CD, and monitoring of AI services.
  • Recruit, onboard, mentor, and conduct performance reviews for the AI engineering team.
  • Collaborate with Data Engineering, DevOps, and other teams to integrate AI models into enterprise systems and data pipelines.
  • Identify and manage technical risks, dependencies, and delivery blockers to maintain project velocity.

Technical Delivery

  • Implement and operationalize MLOps pipelines for model training, versioning, deployment, monitoring, and explainability.
  • Guide teams in leveraging AI platform capabilities such as RAG pipelines, LLM gateways, and vector databases to build business use cases.
  • Ensure security, scalability, and performance of production AI services and underlying cloud infrastructure.
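At the core of the MLOps pipelines described above is model versioning and stage promotion. A minimal sketch of that bookkeeping; the registry layout, stage names, and model entries are assumptions, and platforms like Vertex AI provide this as a managed service:

```python
# Sketch of model versioning/promotion, the bookkeeping core of an
# MLOps pipeline: register versions, promote exactly one to production.

class ModelRegistry:
    def __init__(self):
        self._versions = {}  # model name -> list of version records

    def register(self, name, artifact_uri, metrics):
        versions = self._versions.setdefault(name, [])
        version = {"version": len(versions) + 1,
                   "uri": artifact_uri,
                   "metrics": metrics,
                   "stage": "staging"}
        versions.append(version)
        return version["version"]

    def promote(self, name, version):
        # Exactly one production version per model name.
        for v in self._versions[name]:
            v["stage"] = "production" if v["version"] == version else "archived"

    def production(self, name):
        return next(v for v in self._versions[name]
                    if v["stage"] == "production")

reg = ModelRegistry()
reg.register("churn", "gs://models/churn/1", {"auc": 0.81})
v2 = reg.register("churn", "gs://models/churn/2", {"auc": 0.84})
reg.promote("churn", v2)
```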

Must-Have Skills & Requirements

  • 8+ years of experience in Software Engineering, Machine Learning Engineering, or Data Science.
  • Minimum 3+ years in a team management or leadership role.
  • Proven experience managing and mentoring engineering teams.
  • Strong experience with project management methodologies (Scrum/Kanban) and tools such as Jira.
  • Hands-on experience with production-scale MLOps and GenAIOps implementations.
  • Deep expertise in cloud platforms, preferably GCP.
  • Strong understanding of modern data architecture including vector databases, data warehousing, and ETL/ELT pipelines.
  • Solid software engineering fundamentals including API design, system architecture, Git, and CI/CD or DevSecOps pipelines.
  • Experience with GenAI technologies such as LLM orchestration frameworks, prompt engineering, and RAG architectures.
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.

Good-to-Have / Preferred Skills

  • Experience managing distributed or multi-geographical engineering teams.
  • Knowledge of regulatory requirements within the BFSI or insurance domain (data privacy, Responsible AI).
  • Azure cloud or multi-cloud project experience.
  • Experience with streaming data platforms and real-time AI processing.
  • Cloud certifications such as GCP Professional Cloud Architect or ML Engineer.


Deqode

Posted by Purvisha Bhavsar
Bengaluru (Bangalore)
5 - 7 yrs
₹4L - ₹12L / yr
Java
Python
NodeJS (Node.js)
Windows Azure
Google Cloud Platform (GCP)
+3 more

👉 Job Title: Backend Developer

🌟 Experience: 5-7 Years

💡Location: Bangalore

👉 Notice Period: Immediate joiners

💡 Work Mode - 5 Days work from Office

(Candidates serving notice period are preferred)


Role Summary

We are looking for a Backend Engineer to join the Platform Implementation Team, responsible for building scalable, secure, and high-performance backend systems for a multi-cloud Data & AI platform. You will design microservices, develop REST APIs, and enable seamless data integration across enterprise systems like CRM and ERP.


💫 Key Responsibilities

✅ Design and develop scalable microservices and RESTful APIs

✅ Build event-driven architectures for asynchronous processing

✅ Integrate backend systems with cloud platforms (GCP/Azure)

✅ Ensure secure, reliable, and optimized data handling

✅ Collaborate with cross-functional teams (UI, Data, Platform)

✅ Follow best practices in coding, testing, CI/CD, and containerization


💫 Mandatory Skills (Top 3)

✅ Strong backend programming experience (Python / Node.js / Java)

✅ Expertise in API development & Microservices architecture

✅ Hands-on experience with Cloud platforms (GCP or Azure)





Bengaluru (Bangalore)
5 - 9 yrs
Up to ₹25L / yr (varies)
Java
Apache Kafka
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Microservices
+1 more

Role Overview

We are seeking a highly skilled Senior Java Developer to join our engineering team. In this role, you will be responsible for designing and developing high-performance, scalable, and resilient microservices. You will work at the intersection of complex backend logic, real-time data streaming, and cloud infrastructure to deliver seamless user experiences. 


Key Responsibilities

  • System Design: Architect and develop robust, scalable, and maintainable backend services using Java and Spring Boot.
  • Scalability: Build distributed systems capable of handling high traffic and large datasets with low latency.
  • Database Management: Design and optimize complex schemas in both Relational (SQL) and NoSQL databases, ensuring data integrity and performance.
  • Event-Driven Architecture: Implement real-time messaging and data pipelines using Apache Kafka.
  • Cloud Infrastructure: Deploy and manage services on cloud platforms (AWS or GCP), leveraging managed services to improve system reliability.
  • Collaboration: Work closely with cross-functional teams to define requirements, participate in code reviews, and mentor junior developers.  


Technical Requirements

  • Core Java: Deep expertise in Java (8 or higher), including concurrency, multithreading, and JVM tuning.
  • Frameworks: Strong experience with Spring Boot, Spring Cloud, and Hibernate/JPA.
  • Messaging: Proven experience with Apache Kafka for event streaming and asynchronous processing.
  • Cloud: Proficiency in AWS (EC2, S3, RDS, Lambda) or GCP (GCE, GCS, Cloud SQL, Pub/Sub).
  • Databases: Solid knowledge of PostgreSQL, MySQL, or Oracle, alongside NoSQL experience (e.g., MongoDB, Cassandra, or Redis).
  • DevOps & Tools: Familiarity with Docker, Kubernetes, and CI/CD pipelines (Jenkins, GitLab CI, or GitHub Actions).  
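The Kafka-based event streaming and asynchronous processing required above reduce to one shape: producers append events to a topic, and a consumer drains it in its own loop. A miniature of that pattern, shown in Python for brevity with a thread-safe queue standing in for a Kafka topic; the event names are illustrative:

```python
# The produce/consume pattern in miniature: queue.Queue stands in for
# a topic partition, a thread stands in for a consumer poll loop.
import queue
import threading

topic = queue.Queue()          # stand-in for a Kafka topic
processed = []

def consumer():
    while True:
        event = topic.get()    # blocks, like a Kafka poll
        if event is None:      # sentinel: shut down cleanly
            break
        processed.append(f"handled:{event}")
        topic.task_done()

worker = threading.Thread(target=consumer)
worker.start()

for event in ["order-created", "payment-settled"]:
    topic.put(event)           # the "producer" side
topic.put(None)
worker.join()
```

Real Kafka adds durable partitioned logs, consumer groups, and replayable offsets on top of this decoupling, which is what makes the architecture resilient under high traffic.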


Preferred Qualifications

  • Experience with Microservices Architecture and Domain-Driven Design (DDD).
  • Understanding of distributed caching strategies and load balancing.
  • Strong problem-solving skills and a "clean code" mentality.  
Improving
Remote only
4 - 8 yrs
Best in industry
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Python
Jenkins
Kubernetes

What are we looking for??

  1. You have a good understanding and work experience in AKS, Kubernetes, and EKS.
  2. You are able to manage multi region clusters for disaster recovery.
  3. You have a good understanding of AWS stack.
  4. You have production-level experience with Kubernetes.
  5. You are comfortable coding/programming and can do so whenever required. 
  6. You have worked with programmable infrastructure in some way - Built a CI/CD pipeline, Provisioned infrastructure programmatically or Provisioned monitoring and logging infrastructure for large sets of machines.
  7. You love automating things, even what seems impossible to automate; one of our engineers, for example, used Ansible to set up his Ubuntu workstation and runs a playbook whenever something has to be installed.
  8. You don’t throw around words such as “high availability” or “resilient systems” without understanding at least their basics. Because you know that words are easy to talk about but there is a fair amount of work to build such a system in practice.
  9. You love coaching people - about the 12-factor apps or the latest tool that reduced your time of doing a task by X times and so on. You lead by example when it comes to technical work and community.
  10. You understand the areas you have worked on very well but, you are curious about many systems that you may not have worked on and want to fiddle with them.
  11. You know that understanding applications and the runtime technologies gives you a better perspective - you never looked at them as two different things.
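The 12-factor apps mentioned above treat configuration as environment variables, so the same build runs unchanged across dev, staging, and production. A minimal sketch; the variable names and defaults are illustrative, not from any real deployment:

```python
# Twelve-factor config: settings come from the environment, with
# explicit defaults so the app boots in a bare dev environment.
import os

def load_config(env=os.environ):
    return {
        "db_url": env.get("DATABASE_URL", "postgres://localhost/dev"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
        "max_workers": int(env.get("MAX_WORKERS", "4")),
    }

# Passing a dict lets tests inject an environment without mutating os.environ.
cfg = load_config({"LOG_LEVEL": "DEBUG"})
```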

What you will be learning and doing?

  1. You will be working with customers trying to transform their applications and adopt cloud-native technologies. The technologies used will be Kubernetes, Prometheus, Service Mesh, Distributed tracing and public cloud technologies or on-premise infrastructure.
  2. The problems and solutions in this space are continuously evolving, but fundamentally you will be solving them with the simplest, most scalable automation.
  3. You will be building open source tools for problems that you think are common across customers and industry. No one ever benefited from re-inventing the wheel, did they? 
  4. You will be hacking around open source projects, understand their capabilities, limitations and apply the right tool for the right job.
  5. You will be educating the customers - from their operations engineers to developers on scalable ways to build and operate applications in modern cloud-native infrastructure.


Quantiphi

Posted by Nikita Sinha
Bengaluru (Bangalore), Mumbai, Trivandrum
3 - 6 yrs
Up to ₹30L / yr (varies)
Google Cloud Platform (GCP)
DevOps
CI/CD
Kubernetes
GitHub
+2 more

Role & Responsibilities

  • Develop and deliver automation software to build and improve platform functionality
  • Ensure reliability, availability, and manageability of applications and cloud platforms
  • Champion adoption of Infrastructure as Code (IaC) practices
  • Design and build self-service, self-healing, monitoring, and alerting platforms
  • Automate development and testing workflows through CI/CD pipelines (Git, Jenkins, SonarQube, Artifactory, Docker containers)
  • Build and manage container hosting platforms using Kubernetes
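The self-healing and alerting platforms in the responsibilities above share a simple control loop: probe a service, restart it after consecutive failures, and page on-call when that happens. A sketch of that loop; the probe results, threshold, and actions are illustrative assumptions (Kubernetes liveness probes implement the same idea declaratively):

```python
# Self-healing loop sketch: N consecutive failed health checks trigger
# a restart plus an alert, then the failure counter resets.

FAIL_THRESHOLD = 3

def supervise(probes):
    """probes: iterable of booleans from successive health checks."""
    failures, actions = 0, []
    for healthy in probes:
        if healthy:
            failures = 0
            continue
        failures += 1
        if failures == FAIL_THRESHOLD:
            actions.append("restart")  # self-heal
            actions.append("alert")    # page the on-call engineer
            failures = 0
    return actions

actions = supervise([True, False, False, False, True, False])
```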

Requirements

  • Strong experience deploying and maintaining GCP cloud infrastructure
  • Well-versed in service-oriented and cloud-based architecture design patterns
  • Knowledge of cloud services including compute, storage, networking, messaging, and automation tools (e.g., CloudFormation/Terraform equivalents)
  • Experience with relational and NoSQL databases (Postgres, Cassandra)
  • Hands-on experience with automation/configuration tools (Puppet, Chef, Ansible, Terraform)

Additional Skills

  • Strong Linux system administration and troubleshooting skills
  • Programming/scripting exposure (Bash, Python, Core Java, or Scala)
  • CI/CD pipeline experience (Jenkins, Git, Maven, etc.)
  • Experience integrating solutions in multi-region environments
  • Familiarity with Agile/Scrum/DevOps methodologies
Searce Inc

Posted by Karthika Senthilkumar
Coimbatore
7 - 12 yrs
Best in industry
Machine Learning (ML)
Artificial Intelligence (AI)
Google Cloud Platform (GCP)
Large Language Models (LLM) tuning
Retrieval Augmented Generation (RAG)
+3 more

Your Responsibilities

What you will wake up to solve:

  • Principal Technical Expert (Process-First AI Strategy): Act as a hands-on leader and the core technical authority tasked with "futurifying" client businesses through advanced AI. Take full ownership of the AI Engineering squad, transforming ambitious concepts into high-impact, tangible realities.
  • Engineering & Intelligent Deployment: Execute the full-lifecycle development of innovative AI/ML solutions, including hands-on design, coding, testing, and deployment of robust, scalable systems that prioritize technical excellence and business relevance.
  • Scalability & Architectural Optimization: Directly build and optimize high-performance AI architectures and core system components to ensure solutions are reliable, production-ready, and optimized for long-term operational success.
  • Impact-Driven Technical Expertise: Deliver intelligent client outcomes through direct technical contribution, maintaining an "Always Beta" mindset and a relentless focus on solving complex engineering challenges.
  • Leadership through Action: Lead by example rather than control, coaching and mentoring a high-performing squad of "happier Do-ers" to foster a vibrant culture of continuous innovation and technical excellence.
  • Strategic Integration & Collaboration: Partner across internal teams to translate chaotic business challenges into precise technical requirements, ensuring seamless solution integration and adoption for global clients.
  • The "Agentic" Shift: You will lead the transition from simple predictive models to Agentic Workflows. You will build systems where AI agents can plan, reason, and execute complex tasks autonomously to solve intricate business problems.
  • Talent & Culture: You will mentor a high-performance squad of AI Engineers and Data Scientists. You will teach them to look beyond the algorithm and understand the business outcome.


Functional Skills

Scaling Intelligent Workforce through Delivery Excellence

  • Deep Technical Acumen: Operates at the cutting edge of AI, applying advanced technical knowledge to engineer and implement groundbreaking solutions, and guide the squad in developing future capabilities.
  • Client Advocacy & Revenue Growth: Skill in cultivating and maintaining trusted client partnerships. Drives strategic engagement that results in repeat business and expanded client portfolios within the region.
  • Contract & Risk Governance: High proficiency in reviewing and managing complex project agreements (SoW), mitigating delivery risks, and navigating commercial negotiations to safeguard BU profitability.
  • Structured Problem-Solving: Simplifies chaotic technical challenges for the squad by breaking them into solvable chunks using first-principles thinking.
  • Squad Delivery Ownership: Follows through on the squad's solution execution—owning technical outcomes from ideation to deployment with rigor, precision, and pride, ensuring tangible, real-world business value.


Technical Oversight & Execution Charter

  • Technical Troubleshooting & Crisis Resolution: Actively manages technical roadblocks within the squad, personally intervening to troubleshoot ML or MLOps constraints. You ensure the protection of sprint timelines and the guaranteed performance of deployed models through hands-on problem-solving.
  • Cloud-Native Technical Command: Maintains deep, functional knowledge of modern AI system design (e.g., RAG Frameworks, Agentic Workflows, and Inference Optimization) across GCP and AWS. You hold the responsibility to validate squad-level technical roadmaps, ensuring they are technically feasible and production-hardened.
  • End-to-End Project Management: Expertly manage all aspects of a project, including scope, budget, timelines, and stakeholder communication. Accountable for the entire delivery, not just the technical parts.
  • Talent Strategy & Mentorship: Drive the hiring and development of specialized talent. You will be responsible for defining and optimizing effective team structures while proactively fostering an environment that champions creative problem-solving and technical agility.


Tech Superpowers

  • Deep AI Engineering Mastery & Guidance: Possesses profound, hands-on expertise in engineering, optimizing, and deploying foundational models, custom AI solutions, and complex multi-modal systems. You'll also guide your squad in understanding model architectures, training methodologies, and ethical AI development from the ground up, ensuring their collective proficiency.
  • Intelligent Systems Architecture & Oversight: You'll directly contribute to and oversee the coding and implementation of robust, scalable, and production-grade AI platforms and MLOps components for your squad's projects. You'll translate abstract technical requirements into high-performance, maintainable AI system designs, always considering reliability, security, and future extensibility across the squad's work.
  • Cloud-Native AI capability: More than cloud-certified, you are deeply cloud-capable in applied AI engineering. You proficiently leverage and guide your team in utilizing leading cloud AI/ML ecosystems to build, deploy, and manage AI solutions.
  • Technical Integrity & Ethical Governance: Establishes and audits mandatory technical quality benchmarks, ensuring strict adherence to rigorous policies regarding model validation, automated testing coverage, and ethical governance.


Experience & Relevance

  • A value-driven AI/ML Engineering Manager with 8+ years of experience in building and scaling end-to-end AI engineering and solution delivery.
  • Leadership Track Record: Proven track record as a hands-on builder, and lead, contributing to the design, development, and deployment of complex, enterprise-grade AI/ML platforms and solutions. Expert in leveraging Google Cloud's AI/ML ecosystem (Vertex AI, BigQuery ML, GKE for MLOps) to deliver highly performant, scalable, and impactful AI transformations.
  • Delivery & Advisory Record: Experience in building and optimizing intelligent systems and personally driving the technical execution from conception to scalable deployment.
  • Applied AI & Domain Expertise: Hands-On AI Deployment: Extensive hands-on experience deploying AI-powered workflows, copilots, and automation solutions in production environments.
  • Client-Facing Lead: Demonstrated hands-on experience as an AI/ML Product Manager, Data Science Manager, or Technical Architect in client-facing capacities. This involves directly building, implementing, and advising on complex AI solutions, consistently acting as the trusted technical authority for strategic clients.


Bonus Points (you will thrive if you have)

  • Founder’s Energy: Bias for action, thrive in ambiguity, relentless focus on outcomes.
  • Low-Code/No-Code Fluency: Experience with AI integrations via Power Platform or similar.
  • AI Copilots & Extensions: Built plugins, copilots, or agentic automation frameworks.
  • Thought Leadership DNA: Industry content creation, technical blogs, public speaking.
  • Ethical Compass: Strong commitment to responsible AI practices.
  • Engineer at Heart: Background in product development or engineering before moving into architecture.
Read more
Redtring
Keshav Senthil
Posted by Keshav Senthil
Hyderabad
3 - 6 yrs
₹15L - ₹20L / yr
skill iconJava
skill iconKotlin
skill iconAmazon Web Services (AWS)
skill iconRedis
Apache Kafka
+7 more

About Us:


We are hiring for a pre-seed funded startup called ZeroMoblt (https://zeromoblt.com/), a high-agency Hyderabad-based startup revolutionizing student transportation with lean, intelligent tech stacks.


Our mission: architect world-class systems from scratch—fast, scalable, and algorithmically sharp—using Kotlin, React, AWS (EC2, IoT, IAM), Google Maps, and multi-cloud setups. Stealth mode operations mean you're building 0→1 products with founders, not fixing tickets.


What You'll Do

  • Lead end-to-end ownership of complex systems: design, build, deploy, monitor, and iterate at scale.
  • Architect high-performance backends in Kotlin (or JVM langs) that handle real-time routing and IoT data.
  • Craft scalable React UIs that power ops dashboards and parent-facing apps.
  • Drive cloud decisions across AWS, Azure/GCP—optimising costs for our bootstrap runway.
  • Apply DSA/system design to solve hard problems like dynamic route optimization and predictive scaling.
  • Shape the engineering roadmap: propose, prioritise, and ship features with founders.
  • Mentor juniors while executing solo on high-impact bets—no layers, just results.
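
The "dynamic route optimization" problems mentioned above typically start from shortest-path primitives. A minimal sketch with Dijkstra's algorithm — the toy stop network and travel times below are invented for illustration, not ZeroMoblt's actual data model (their production stack is Kotlin):

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over an adjacency dict: node -> list of (neighbor, minutes)."""
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    # Rebuild the stop sequence by walking predecessors back to the start.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# Toy network of pickup stops with travel times in minutes (illustrative).
stops = {
    "depot":  [("stop_a", 7), ("stop_b", 12)],
    "stop_a": [("stop_b", 3), ("school", 15)],
    "stop_b": [("school", 8)],
}
route, minutes = shortest_path(stops, "depot", "school")
print(route, minutes)  # → ['depot', 'stop_a', 'stop_b', 'school'] 18
```

Real routing adds time windows, capacity constraints, and live traffic on top of this primitive, but the graph-plus-priority-queue core stays the same.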


We're Looking For

  • 3-6 years of hands-on engineering where you've owned and shipped production systems (prove it with code/stories).
  • Elite CS fundamentals: advanced DSA, system design (distributed systems a must), design patterns.
  • Mastery of Kotlin/Java + modern React; real AWS experience (EC2, IAM, CLI—you know our stack).
  • Proven "leap-taker": startup grit, side projects, or open-source that screams hunger.
  • Figure-it-out velocity: you thrive in chaos, learn our domain overnight, and deliver 10x faster than peers.


This Role Is Not For You If…

  • You need structured roadmaps, PM hand-holding, or big-tech process.
  • Comfort > impact: stable salary over equity upside and chaos.
  • You've never worn all hats (dev, ops, product) in a resource-constrained environment.


Why Join Us

  • Massive ownership: lead tech for 10k+ students, direct founder access, shape ZeroMoblt's scale.
  • Flat, high-trust team: flexible Hyderabad/remote, no bureaucracy.
  • Hungry culture: we hire hustlers scaling from 700 to 10k students—your wins are visible daily.
  • Hungry to Leap? Apply now!
Read more
BigThinkCode Technologies
Divya Mohandass
Posted by Divya Mohandass
Chennai
3 - 5 yrs
₹7L - ₹16L / yr
dagster
SQL
Data engineering
Google BigQuery
Google Cloud Platform (GCP)
+2 more

About the role:

We are looking for a skilled Data Engineer with hands-on expertise in Dagster orchestration, or in GCP with BigQuery and Apache Airflow, along with modern data pipeline development and architecture implementation. The ideal candidate will design, build, and optimize scalable data pipelines, bringing strong SQL proficiency and data modelling expertise.


Key Responsibilities

• Design, develop, and maintain scalable data pipelines using Dagster.

• Build and manage Dagster components such as:
  o Ops / Assets
  o Schedules
  o Sensors
  o Jobs
  o Resource definitions

• Implement and maintain Medallion Architecture (Bronze, Silver, Gold layers).

• Write optimized and production-grade SQL scripts for transformations and data validation.

• GCP, BigQuery, and Apache Airflow expertise is a must if you are not familiar with Dagster and orchestration.
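
The relationship between the components listed above (ops feeding jobs that a schedule would trigger) can be sketched without the Dagster dependency. This is an illustrative stand-in only — Dagster's real API uses decorators like @op/@asset plus ScheduleDefinition rather than hand-rolled classes:

```python
# Dependency-free sketch of orchestration concepts: ops with upstream
# dependencies, grouped into a job that runs them in dependency order.
class Op:
    def __init__(self, name, fn, upstream=()):
        self.name, self.fn, self.upstream = name, fn, tuple(upstream)

class Job:
    """Runs ops in dependency order, passing upstream results as inputs."""
    def __init__(self, ops):
        self.ops = {op.name: op for op in ops}

    def run(self):
        results = {}
        remaining = dict(self.ops)
        while remaining:
            for name, op in list(remaining.items()):
                if all(u in results for u in op.upstream):
                    results[name] = op.fn(*(results[u] for u in op.upstream))
                    del remaining[name]
        return results

# A toy extract -> transform -> load chain.
extract   = Op("extract",   lambda: [3, 1, 2])
transform = Op("transform", lambda rows: sorted(rows), upstream=["extract"])
load      = Op("load",      lambda rows: len(rows),    upstream=["transform"])

job = Job([extract, transform, load])
print(job.run())  # {'extract': [3, 1, 2], 'transform': [1, 2, 3], 'load': 3}
```

A schedule or sensor, in this picture, is simply something that decides when to call `job.run()`.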


Must Have

• 3+ years of experience in Data Engineering.

• Strong hands-on experience with Dagster and workflow orchestration.

• Strong hands-on experience with GCP, BigQuery and Apache Airflow.

• Solid understanding of data pipeline design patterns.

• Experience implementing Medallion Architecture.

• Advanced SQL skills (complex joins, CTEs, performance tuning).

• Experience working with GCP cloud data platform.
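
The "advanced SQL" requirement above (complex joins, CTEs) can be exercised with nothing more than the standard library. The table and column names here are invented for illustration — a CTE computes per-region totals, then a join filters orders against them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, region TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, 'south', 100.0), (2, 'south', 250.0), (3, 'north', 80.0);
""")

# The CTE aggregates once; the outer query joins each order back to its
# region's total and keeps only orders in high-volume regions.
rows = conn.execute("""
    WITH region_totals AS (
        SELECT region, SUM(amount) AS total
        FROM orders
        GROUP BY region
    )
    SELECT o.id, o.region, t.total
    FROM orders o
    JOIN region_totals t ON t.region = o.region
    WHERE t.total > 200
    ORDER BY o.id
""").fetchall()

print(rows)  # [(1, 'south', 350.0), (2, 'south', 350.0)]
```

The same shape — aggregate in a CTE, join back, filter — appears constantly in BigQuery transformation layers.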


Why Join Us:

• Collaborative work environment.

• Exposure to modern tools and scalable application architectures.

• Medical cover for employee and eligible dependents.

• Tax beneficial salary structure.

• Comprehensive leave policy

• Competency development training programs.

Read more
Planview

at Planview

3 candid answers
3 recruiters
Bisman Gill
Posted by Bisman Gill
Bengaluru (Bangalore)
10yrs+
Upto ₹57L / yr (Varies)
Site survey
Google Cloud Platform (GCP)
Linux administration
Microsoft Windows Server administration
CI/CD
+11 more

 Company Overview:

Planview has one mission: to build the future of connected work with market-leading portfolio management and work management solutions. Planview is a recognized innovator and industry leader; our solutions enable organizations to connect the business from ideas to impact, empowering companies to accelerate the achievement of what matters most. Our solutions span every class of work, resource, and organization to address the varying needs of diverse and distributed teams, departments, and enterprises.


As a Sr CloudOps Engineer II, you will oversee teams of Engineers and be a champion for configuration management, technologies in the cloud, and continuous improvement. You will work closely with global leaders to ensure that our applications, infrastructure, and processes are scalable, secure, and supportable. By leveraging your production experience and development skills you will work hand in hand with Engineers (Dev, DevOps, DBOps) to design and implement solutions that improve delivery of value to customers, reduce costs, and eliminate toil.


     Responsibilities (What you will do):

  •  Guide the professional development of Engineers and support the teams to accomplish business goals
  • Work closely with leaders in Israel to align on priorities and architect, deliver, and manage our products
  • Build systems that are secure, scalable, and self-healing.
  • Manage and improve deployment pipelines.
  • Triage and remediate production issues.
  • Participate in on-call rotations for escalations.


Qualifications (What you will bring):

  • Bachelor's degree in CS or equivalent experience in a related field.
  • 2+ years managing Engineering teams.
  • 8+ years of experience as a site reliability or platform engineer, preferably in a fast-scaling environment
  • 5+ years administering Linux and Windows environments.
  • 3+ years programming / scripting experience (e.g., Python, JavaScript, PowerShell)
  • Strong technical knowledge of operating systems (Linux and Windows), virtualization, storage systems, networking, and firewall implementations
  • Maintaining production environments both on-premises (90%) and in the cloud (10%) (e.g., AWS, Google Cloud, Azure)
  • Solid understanding of networking principles and how it applies to data flow and security.
  • Automating deployments of cloud based available services (e.g., AWS EC2 / RDS, Docker, Kubernetes)
  • Experience managing CI/CD infrastructures, with strong proficiency in platforms like Bitbucket and Jenkins to streamline deployment pipelines and ensure efficient software delivery.
  • Management of resources using Infrastructure as Code tools (e.g., CloudFormation, Terraform, Chef)
  • Knowledge of observability tools such as LogicMonitor, New Relic, Prometheus, and Coralogix, as well as their implementation.
  • Worked within Agile and Lean software development teams.
  • Experience working in globally distributed teams.
  • Ability to look at the big picture and manage risks.
Read more
Searce Inc

at Searce Inc

3 recruiters
Vaivashhya VN
Posted by Vaivashhya VN
Coimbatore
7 - 15 yrs
Best in industry
skill iconMachine Learning (ML)
Artificial Intelligence (AI)
Retrieval Augmented Generation (RAG)
Large Language Models (LLM) tuning
Generative AI (GenAI)
+5 more

Your Responsibilities

what you will wake up to solve.

  • Process-First AI Strategy: Principal Technical Expert: Act as a hands-on leader and the core technical authority tasked with "futurifying" client businesses through advanced AI. Take full ownership of the AI Engineering squad, transforming ambitious concepts into high-impact, tangible realities.
  • Engineering & Intelligent Deployment: Execute the full-lifecycle development of innovative AI/ML solutions, including hands-on design, coding, testing, and deployment of robust, scalable systems that prioritize technical excellence and business relevance.
  • Scalability & Architectural Optimization: Directly build and optimize high-performance AI architectures and core system components to ensure solutions are reliable, production-ready, and optimized for long-term operational success.
  • Impact-Driven Technical Expertise: Deliver intelligent client outcomes through direct technical contribution, maintaining an "Always Beta" mindset and a relentless focus on solving complex engineering challenges.
  • Leadership through Action: Lead by example rather than control, coaching and mentoring a high-performing squad of "happier Do-ers" to foster a vibrant culture of continuous innovation and technical excellence.
  • Strategic Integration & Collaboration: Partner across internal teams to translate chaotic business challenges into precise technical requirements, ensuring seamless solution integration and adoption for global clients.
  • The "Agentic" Shift: You will lead the transition from simple predictive models to Agentic Workflows. You will build systems where AI agents can plan, reason, and execute complex tasks autonomously to solve intricate business problems.
  • Talent & Culture: You will mentor a high-performance squad of AI Engineers and Data Scientists. You will teach them to look beyond the algorithm and understand the business outcome.


Functional Skills


1. Scaling Intelligent Workforce through Delivery Excellence

  • Deep Technical Acumen: Operates at the cutting edge of AI, applying advanced technical knowledge to engineer and implement groundbreaking solutions, and guide the squad in developing future capabilities.
  • Client Advocacy & Revenue Growth: Skill in cultivating and maintaining trusted client partnerships. Drives strategic engagement that results in repeat business and expanded client portfolios within the region.
  • Contract & Risk Governance: High proficiency in reviewing and managing complex project agreements (SoW), mitigating delivery risks, and navigating commercial negotiations to safeguard BU profitability.
  • Structured Problem-Solving: Simplifies chaotic technical challenges for the squad by breaking them into solvable chunks using first-principles thinking.
  • Squad Delivery Ownership: Follows through on the squad's solution execution—owning technical outcomes from ideation to deployment with rigor, precision, and pride, ensuring tangible, real-world business value.



2. Technical Oversight & Execution Charter

  • Technical Troubleshooting & Crisis Resolution: Actively manages technical roadblocks within the squad, personally intervening to troubleshoot ML or MLOps constraints. You ensure the protection of sprint timelines and the guaranteed performance of deployed models through hands-on problem-solving.
  • Cloud-Native Technical Command: Maintains deep, functional knowledge of modern AI system design (e.g., RAG Frameworks, Agentic Workflows, and Inference Optimization) across GCP and AWS. You hold the responsibility to validate squad-level technical roadmaps, ensuring they are technically feasible and production-hardened.
  • End-to-End Project Management: Expertly manage all aspects of a project, including scope, budget, timelines, and stakeholder communication. Accountable for the entire delivery, not just the technical parts.
  • Talent Strategy & Mentorship: Drive the hiring and development of specialized talent. You will be responsible for defining and optimizing effective team structures while proactively fostering an environment that champions creative problem-solving and technical agility.


Tech Superpowers

  • Deep AI Engineering Mastery & Guidance: Possesses profound, hands-on expertise in engineering, optimizing, and deploying foundational models, custom AI solutions, and complex multi-modal systems. You'll also guide your squad in understanding model architectures, training methodologies, and ethical AI development from the ground up, ensuring their collective proficiency.
  • Intelligent Systems Architecture & Oversight: You'll directly contribute to and oversee the coding and implementation of robust, scalable, and production-grade AI platforms and MLOps components for your squad's projects. You'll translate abstract technical requirements into high-performance, maintainable AI system designs, always considering reliability, security, and future extensibility across the squad's work.
  • Cloud-Native AI capability: More than cloud-certified, you are deeply cloud-capable in applied AI engineering. You proficiently leverage and guide your team in utilizing leading cloud AI/ML ecosystems to build, deploy, and manage AI solutions.
  • Technical Integrity & Ethical Governance: Establishes and audits mandatory technical quality benchmarks, ensuring strict adherence to rigorous policies regarding model validation, automated testing coverage, and ethical governance.


Experience & Relevance

  • A value-driven AI/ML Engineering Manager with 8+ years of experience in building and scaling end-to-end AI engineering and solution delivery.
  • Leadership Track Record: Proven track record as a hands-on builder, and lead, contributing to the design, development, and deployment of complex, enterprise-grade AI/ML platforms and solutions. Expert in leveraging Google Cloud's AI/ML ecosystem (Vertex AI, BigQuery ML, GKE for MLOps) to deliver highly performant, scalable, and impactful AI transformations.
  • Delivery & Advisory Record: Experience in building and optimizing intelligent systems and personally driving the technical execution from conception to scalable deployment.
  • Applied AI & Domain Expertise: Hands-On AI Deployment: Extensive hands-on experience deploying AI-powered workflows, copilots, and automation solutions in production environments.
  • Client-Facing Lead: Demonstrated hands-on experience as an AI/ML Product Manager, Data Science Manager, or Technical Architect in client-facing capacities. This involves directly building, implementing, and advising on complex AI solutions, consistently acting as the trusted technical authority for strategic clients.


Bonus Points (you will thrive if you have)

  • Founder’s Energy: Bias for action, thrive in ambiguity, relentless focus on outcomes.
  • Low-Code/No-Code Fluency: Experience with AI integrations via Power Platform or similar.
  • AI Copilots & Extensions: Built plugins, copilots, or agentic automation frameworks.
  • Thought Leadership DNA: Industry content creation, technical blogs, public speaking.
  • Ethical Compass: Strong commitment to responsible AI practices.
  • Engineer at Heart: Background in product development or engineering before moving into architecture.


Why you’ll love being a ‘Searcian’

NOT your ‘usual’ management consultancy; we ‘solve differently’.

  • We are happier. No really, ‘happier’: A vibrant, inclusive, and supportive work environment. We even have a dedicated role for ‘Better Living’.
  • The Company You Keep (Says Everything): solvers. engineers. tinkerers. improvers. futurists operating across 12 countries.
  • No room for CAVEers (Constantly Against Virtually Everything people). Instead, we make room for a meditation room in our offices.
  • No bloat: 27 people meeting with 23 clueless people. Not happening here. We also don't do the meetings to plan for pre-meetings.
  • No bureaucracy. Zero entropy. Real decision-making velocity: We’re large enough to solve the world’s most complex business challenges, yet small and agile enough to value individual humans. With us, you’re a name, not an employee ID number lost in a sea of 37,000 people where it takes a year just to decide ‘who will decide’.
  • Ideas over Hierarchy: We reject HiPPOs (Highest Paid Person’s Opinion). The most well-reasoned ideas win - regardless of whose name is on them. That dangerous phrase, "We’ve always done it this way," dies here.
  • Own-the-outcome: The buck stops with you. Doesn’t matter if you are an intern. (Psst: A ‘real intern’ actually drafted this JD.)
  • Expert ‘wholesome generalists’, Not ‘one-nut-tighteners’. At Searce, you see the whole picture — how the car is designed, built, and driven — not just how to tighten the third nut on a red 1962 Ford Falcon owned by Vinny’s cousin. Real impact comes from knowing why that nut matters to the person behind the wheel.
  • You ‘do stuff’ that matters. Not just “follow up on the deck we shared.”
  • Gain more years in your Searce-perience: We operate at a 3.65x experience velocity—yes, we measured it. (and charted it to the scale too)


Join the ‘real solvers’

ready to futurify?

If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.

Read more
Quantiphi

at Quantiphi

3 candid answers
1 video
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore), Mumbai, Trivandrum
4 - 7 yrs
Upto ₹35L / yr (Varies)
Large Language Models (LLM) tuning
skill iconDeep Learning
Google Cloud Platform (GCP)
Google Vertex AI
Windows Azure
+2 more

Build, deploy, and maintain production-grade AI/ML solutions for Fortune 500 enterprise clients on Google Cloud Platform. Hands-on role focused on shipping scalable AI systems across GenAI, agentic workflows, traditional ML, and computer vision.


Key Responsibilities:


Generative AI & Agentic Systems

  • Design and build GenAI applications (RAG, agentic workflows, multi-agent systems)
  • Develop intelligent systems with memory, planning, and reasoning capabilities
  • Implement prompt engineering, context optimization, and evaluation frameworks
  • Build observable and reliable multi-agent architectures

Traditional ML & Computer Vision

  • Develop ML pipelines (forecasting, recommendation, classification, regression)
  • Build production-grade computer vision solutions (document AI, image analysis)
  • Perform feature engineering, model optimization, and benchmarking

MLOps & Production Engineering

  • Own end-to-end ML lifecycle (CI/CD, testing, versioning, deployment)
  • Build scalable APIs, microservices, and data pipelines
  • Monitor models, detect drift, and implement A/B testing frameworks
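
Drift monitoring, as in the last bullet, usually reduces to comparing a live feature distribution against the training baseline. A minimal dependency-free sketch in the style of a Population Stability Index (PSI) score — the bin count, smoothing, and alarm thresholds below are illustrative choices, not a standard:

```python
import math

def psi(expected, actual, bins=4):
    """PSI-style divergence between two samples of one numeric feature."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        # Laplace-smooth so an empty bin doesn't blow up the log term.
        return [(c + 1) / (len(sample) + bins) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline     = [10, 11, 12, 11, 10, 12, 11, 10]   # training-time feature values
live_ok      = [11, 10, 12, 11, 10, 11, 12, 10]   # same distribution
live_shifted = [20, 21, 22, 21, 20, 22, 21, 20]   # mean has doubled

print(psi(baseline, live_ok) < 0.1)        # stable -> no alarm
print(psi(baseline, live_shifted) > 0.25)  # large shift -> drift alarm
```

A common rule of thumb treats PSI below 0.1 as stable and above 0.25 as actionable drift; production systems compute this per feature on a schedule and page when the alarm fires.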

Knowledge Solutions

  • Architect knowledge graphs and semantic search systems
  • Implement hybrid retrieval (vector + keyword search)
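
Hybrid retrieval, as listed above, blends a dense (vector) score with a sparse (keyword) score. A dependency-free sketch — the 2-d "embeddings" and the fixed blend weight are hand-made stand-ins for a real embedding model and a tuned alpha:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def keyword_score(query, text):
    """Fraction of query terms that appear in the document."""
    q = set(query.lower().split())
    d = set(text.lower().split())
    return len(q & d) / len(q)

def hybrid_search(query, query_vec, docs, alpha=0.5):
    """Rank docs by alpha * dense score + (1 - alpha) * sparse score."""
    scored = [
        (alpha * cosine(query_vec, vec)
         + (1 - alpha) * keyword_score(query, text), text)
        for text, vec in docs
    ]
    return [text for _, text in sorted(scored, reverse=True)]

# Toy corpus: each doc carries a tiny made-up embedding.
docs = [
    ("reset your account password", (0.9, 0.1)),
    ("quarterly revenue report",    (0.1, 0.9)),
]
print(hybrid_search("password reset help", (0.8, 0.2), docs)[0])
# → reset your account password
```

Production systems replace the keyword half with BM25 and the dense half with an embedding index, and often fuse ranks (e.g., reciprocal rank fusion) instead of raw scores, but the blend idea is the same.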

Client Collaboration

  • Present technical solutions to enterprise clients
  • Collaborate with architects, data engineers, and business teams

Required Skills & Experience

  • 3–6 years of hands-on ML Engineering experience
  • Strong Python and software engineering fundamentals
  • Experience shipping production ML systems on cloud (GCP preferred)
  • Experience across GenAI, Traditional ML, Computer Vision
  • MLOps experience and RAG-based systems

Preferred

  • GCP Professional ML Engineer certification
  • Knowledge graphs / semantic search experience
  • Experience in regulated industries (Healthcare / BFSI)
  • Open-source or technical publications
Read more
Superclaims
Akshith Daithala
Posted by Akshith Daithala
Hyderabad
1 - 3 yrs
₹5L - ₹7.5L / yr
skill iconPython
FastAPI
skill iconPostgreSQL
SQLAlchemy
LangGraph
+11 more

About Superclaims

Superclaims modernizes health insurance claims adjudication with intelligent automation. We help insurers and TPAs replace manual, document-heavy workflows with faster, more accurate decisions at scale.


Role: Python Backend Developer

We are looking for a Python Backend Developer who is excited to build AI-powered automation products in a fast-paced startup environment.


What you'll do

- Build and maintain scalable backend systems and APIs

- Develop intelligent data extraction pipelines using AI/ML

- Design and implement agentic workflows with LangGraph

- Design efficient database schemas and optimize queries in PostgreSQL

- Integrate and work with LLMs (OpenAI, Gemini, or similar)

- Collaborate with product, frontend, and data teams to deliver end-to-end features

- Write clean, tested, and well-documented code
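
The LangGraph-style agentic workflow mentioned above is, at its core, a state machine whose nodes mutate shared state and decide the next step. A dependency-free sketch — the node names and routing rule are invented for illustration; LangGraph's real API builds a StateGraph of nodes and conditional edges instead:

```python
# Minimal plan -> act -> finish loop modeled as a hand-rolled state machine.
def plan(state):
    state["steps"] = ["extract_fields", "validate_claim"]
    return "act"

def act(state):
    step = state["steps"].pop(0)
    state["done"].append(step)          # pretend we executed the step
    return "act" if state["steps"] else "finish"

def finish(state):
    state["result"] = "approved" if "validate_claim" in state["done"] else "review"
    return None                         # terminal node

NODES = {"plan": plan, "act": act, "finish": finish}

def run_workflow():
    state = {"done": []}
    node = "plan"
    while node is not None:             # each node returns the next node's name
        node = NODES[node](state)
    return state

print(run_workflow()["result"])  # → approved
```

In a real claims pipeline each node would call an LLM or a tool, and the router would branch on model output rather than a hard-coded list.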


Must-have skills

- Strong proficiency in Python and a modern web framework (FastAPI or similar)

- Experience with PostgreSQL and an ORM (SQLAlchemy preferred)

- Solid understanding of RESTful API design and best practices

- Hands-on experience or strong familiarity with LangGraph

- Experience working with LLMs (OpenAI, Gemini, or similar providers)

- Comfort with Git/version control and collaborative development workflows


Nice-to-have skills

- Experience with Docker and containerized deployments

- Knowledge of Redis for caching or background tasks

- Exposure to cloud platforms (GCP, AWS, or Azure)

- Experience with vector databases and retrieval-augmented generation

- Basic prompt engineering skills

- Experience with object storage (S3/MinIO)


What we're looking for

- 1+ years of Python backend development experience (open to exceptional freshers)

- Fast learner with genuine curiosity about AI/ML and automation

- Prior startup experience preferred

- Ownership mindset, bias for action, and comfort with ambiguity

- Ready to relocate to Hyderabad (work location)


How to apply

Please share:

- Your resume

- GitHub/Portfolio link

- A brief note on why you're interested in AI-powered automation and Superclaims

Read more
Searce Inc

at Searce Inc

3 recruiters
Karthika Senthilkumar
Posted by Karthika Senthilkumar
Coimbatore
7 - 10 yrs
Best in industry
Data engineering
skill iconPython
SQL
Google Cloud Platform (GCP)

Who are we ?


Searce means ‘a fine sieve’ & indicates ‘to refine, to analyze, to improve’. It signifies our way of working: To improve to the finest degree of excellence, ‘solving for better’ every time. Searcians are passionate improvers & solvers who love to question the status quo.


The primary purpose of all of us, at Searce, is driving intelligent, impactful & futuristic business outcomes using new-age technology. This purpose is driven passionately by HAPPIER people who aim to become better, everyday.


Tech Superpowers


End-to-End Ecosystem Thinker: You build modular, reusable data products across ingestion, transformation (ETL/ELT), and consumption layers. You ensure the entire data lifecycle is governed, scalable, and optimized for high-velocity delivery.


The MDS Architect: You reimagine business with the Modern Data Stack (MDS) to deliver Data Mesh implementations and real value. You treat every dataset as a measurable "Data Product" with a clear focus on ROI and time-to-insight.


Distributed Compute & Scale Savant: You craft resilient architectures that survive petabyte-scale volume and data skew without "breaking the bank." You prove your designs with cost-performance benchmarks, not just slideware.


AI-Ready Orchestrator: You engineer the bridge between structured data and Unstructured/Vector stores. By mastering pipelines for RAG models and GenAI, you turn raw data into the fuel for intelligent, automated workflows.


The Quality Craftsman (Builder @ Heart): You are an outcome-focused leader who lives in the code. From embedding GDPR/PII privacy-by-design to optimizing SQL, Python, and Spark daily, you ensure integrity is baked into every table.


Experience & Relevance


Engineering Depth: 7-10 years of professional experience in end-to-end data product development. You have a portfolio that proves your ability to build complex, high-velocity pipelines for both Batch and Streaming workloads.


Cloud-Native Fluency: Deep, hands-on experience designing and deploying scalable data solutions on at least one major cloud platform (AWS, GCP, or Azure). You are comfortable navigating the nuances of EMR, BigQuery, or Synapse at scale.


AI-Native Workflow: You don't just build for AI, you build with AI. You must be proficient in using AI coding assistants (e.g., GitHub Copilot) to accelerate your delivery and have a track record of building the data foundations required for Generative AI.


Architectural Portfolio: Evidence of leading 2-3 large-scale transformations, including platform migrations, data lakehouse builds, or real-time analytics architectures.


Foster a culture of technical excellence by mentoring and inspiring a team of data analysts and engineers. Lead deep-dive code reviews, promote best-practice data modeling, and ensure the squad adopts modern engineering standards like CI/CD for data.


Client-Facing Acumen: You have direct experience in a consultative, client-facing role. You can confidently translate a CEO's business vision into a Lead Engineer's technical specification without losing anything in translation.


The "Solver" Mindset: A track record of solving 'impossible' data problems, whether it's fixing massive data skew, optimizing spiraling cloud costs, or architecting 99.9% available data services.



Read more
Searce Inc

at Searce Inc

3 recruiters
Vaivashhya VN
Posted by Vaivashhya VN
Coimbatore
7 - 10 yrs
Best in industry
Data engineering
Data migration
Datawarehousing
ETL
SQL
+6 more

Who are we ?


Searce means ‘a fine sieve’ & indicates ‘to refine, to analyze, to improve’. It signifies our way of working: To improve to the finest degree of excellence, ‘solving for better’ every time. Searcians are passionate improvers & solvers who love to question the status quo.


The primary purpose of all of us, at Searce, is driving intelligent, impactful & futuristic business outcomes using new-age technology. This purpose is driven passionately by HAPPIER people who aim to become better, everyday.


Tech Superpowers


End-to-End Ecosystem Thinker: You build modular, reusable data products across ingestion, transformation (ETL/ELT), and consumption layers. You ensure the entire data lifecycle is governed, scalable, and optimized for high-velocity delivery.


The MDS Architect: You reimagine business with the Modern Data Stack (MDS) to deliver Data Mesh implementations and real value. You treat every dataset as a measurable "Data Product" with a clear focus on ROI and time-to-insight.


Distributed Compute & Scale Savant: You craft resilient architectures that survive petabyte-scale volume and data skew without "breaking the bank." You prove your designs with cost-performance benchmarks, not just slideware.


AI-Ready Orchestrator: You engineer the bridge between structured data and Unstructured/Vector stores. By mastering pipelines for RAG models and GenAI, you turn raw data into the fuel for intelligent, automated workflows.


The Quality Craftsman (Builder @ Heart): You are an outcome-focused leader who lives in the code. From embedding GDPR/PII privacy-by-design to optimizing SQL, Python, and Spark daily, you ensure integrity is baked into every table.


Experience & Relevance


Engineering Depth: 7-10 years of professional experience in end-to-end data product development. You have a portfolio that proves your ability to build complex, high-velocity pipelines for both Batch and Streaming workloads.


Cloud-Native Fluency: Deep, hands-on experience designing and deploying scalable data solutions on at least one major cloud platform (AWS, GCP, or Azure). You are comfortable navigating the nuances of EMR, BigQuery, or Synapse at scale.


AI-Native Workflow: You don't just build for AI, you build with AI. You must be proficient in using AI coding assistants (e.g., GitHub Copilot) to accelerate your delivery and have a track record of building the data foundations required for Generative AI.


Architectural Portfolio: Evidence of leading 2-3 large-scale transformations, including platform migrations, data lakehouse builds, or real-time analytics architectures.


Client-Facing Acumen: You have direct experience in a consultative, client-facing role. You can confidently translate a CEO's business vision into a Lead Engineer's technical specification without losing anything in translation.


The "Solver" Mindset: A track record of solving 'impossible' data problems, whether it's fixing massive data skew, optimizing spiraling cloud costs, or architecting 99.9% available data services.

Read more
Blitzy

at Blitzy

2 candid answers
1 product
Bisman Gill
Posted by Bisman Gill
Pune
5yrs+
Upto ₹50L / yr (Varies)
skill iconPython
skill iconKubernetes
Terraform
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)
+2 more

About the role

We are looking for talented Senior Backend Engineers (5+ years of experience) to join our team and take ownership of different parts of our stack. You will be working alongside a team of Engineers locally and directly with the U.S. Engineering team on all aspects of product/application development. You will leverage your experiences and abilities to inform decisions across product development and technology. You will help us build the foundation of our 2nd Headquarters in Pune: its culture, its processes, and its practices. There are a ton of interesting problems to solve, so come hungry. If your colleagues describe you as curious, driven, kind, and creative, you are a culture fit.

What Success Looks Like

  • You write, review and ship code in production. Your employer or client's success depends on the software you build
  • You use Generative AI tools on a daily basis to enhance the quality and efficacy of your software and non-software deliverables
  • You are a self-starter and enjoy working with minimal supervision
  • You evaluate and make technical architecture decisions with a long-term view, optimizing for speed, quality, and safety
  • You take pride in the product you create and the code that you write
  • Your team can rely on you to get them out of a sticky situation in production
  • You can work well on a team of sales executives, designers and engineers in an in-person environment
  • You are passionate about the enterprise software development lifecycle and feel strongly about improving it
  • You are a first principles engineer who exercises curiosity about the technologies you work with
  • You can learn quickly about technologies, software and code that you are not familiar with, often from rudimentary documentation
  • You take ownership of the code that you write, and you help the team operate with everything that you build, throughout its lifecycle
  • You communicate openly and solicit feedback on important decisions, keeping the team aligned on your rationale
  • You exercise an optimistic mindset and are willing to go the extra mile to make things work

Areas of Ownership

Our hiring process is designed for you to demonstrate a generalist set of capabilities, with a specialization in Backend Technologies.

Required Technical Experience (MUST HAVE):

  • Expertise in Python
  • Deep hands-on experience with Terraform
  • Proficiency in Kubernetes
  • Experience with cloud platforms (GCP strongly preferred, AWS/Azure acceptable)

Additional experience with some of the following:

  • Backend Frameworks and Technologies (Node.js, NuxtJS, Express.js)
  • Programming languages (JavaScript, TypeScript, Java, C++, Go)
  • RPCs (REST, gRPC or GraphQL)
  • Databases (SQL, NoSQL, Postgres, MongoDB, or Firebase)
  • CI/CD (Jenkins, CircleCI, GitLab or similar)
  • Source code versioning tools such as Git or Perforce
  • Microservices architecture

Ways to stand out

  • Familiarity with AI Platforms
  • Extensive experience with building enterprise-scale applications with >99% SLAs
  • Deep expertise across the full required stack: Python, Terraform, Kubernetes, and GCP

You'll Get...

  • Competitive Salary
  • Medical Insurance Benefits
  • Employer Provident Fund contributions with Gratuity after 5 years of service
  • Company-sponsored US onsite trips for high performers, based on business requirements
  • Potential international transfer support for top performers, based on business requirements
  • Technology (hardware, software, trainings, etc.) equipment and/or allowance
  • The opportunity to re-shape an entire industry
  • Beautiful office environment
  • Meal allowance and/or food provision on site

Culture

Who we are: Our Co-Founder and CTO is a serial Gen AI inventor who grew up in Pune, India, is a BITS Pilani graduate, and worked at NVIDIA's Pune office for 6 years. There, he was promoted 5 times in 6 years and was transferred to the NVIDIA headquarters in Santa Clara, California. After making significant contributions to NVIDIA, he went on to Harvard for a dual Master's in Engineering and an MBA from HBS. Our other Co-Founder/CEO is a successful serial entrepreneur who has built multiple companies. As a team, we work very hard, have a curious mindset, and believe in a low-ego, high-output approach.


Read more
Arcis India
Posted by Sarita Jena
Mumbai
6 - 8 yrs
₹12L - ₹20L / yr
Java
Spring Boot
Quarkus
Microservices
Webservices
+17 more

6+ years of hands-on development experience and in-depth knowledge of Java, Spring, Spring Boot, and Quarkus; front-end technologies such as Angular and React JS are nice to have

● Excellent engineering skills in designing and implementing scalable solutions

● Good knowledge of CI/CD pipelines with a strong focus on TDD

● Strong communication skills and ownership

● Exposure to Cloud, Kubernetes, Docker, and Microservices is highly desired

● Experience working in public cloud environments such as AWS, Azure, and GCP, covering solution development, deployment, and adoption of cloud-based technology components such as IaaS/PaaS offerings

● Proficiency in PL/SQL and database development

● Strong in J2EE and OOP design patterns

Read more
Vikgol
Posted by Madhuri D R
Remote only
3 - 6 yrs
₹8L - ₹15L / yr
Linux/Unix
TCP/IP
DNS
Voice Over IP (VoIP)
Amazon Web Services (AWS)
+16 more

Job role: Systems Engineer (L2)

Location: Remote/Bengaluru

Experience: 3-6 years


About the Role:

We are looking for a Systems Engineer (L2) to join our growing infrastructure team. You will be responsible for managing, optimizing, and scaling our cloud communication platform that handles billions of messages and voice calls annually.


Key Responsibilities:

 — Design, deploy, and maintain scalable cloud infrastructure — AWS/GCP/Azure.

 — Manage and optimize networking components — routers, switches, firewalls, load balancers.

 — Handle incident response — monitor systems, identify issues, resolve production problems.

 — Implement DevOps best practices — CI/CD pipelines, automation, containerization.

 — Collaborate with backend and product teams on system architecture.

 — Performance tuning — ensure high availability and reliability of the platform.

 — Security management — implement security protocols and compliance standards.


Required Skills:

Technical:

  • Linux/Unix administration — strong fundamentals
  • Networking — TCP/IP, DNS, BGP, VoIP protocols
  • Cloud platforms — AWS/GCP/Azure — minimum 2 years
  • DevOps tools — Docker, Kubernetes, Jenkins, CI/CD
  • Monitoring tools — Grafana, Prometheus, Kibana, Datadog
  • Scripting — Python, Bash, Shell
  • Databases — MySQL, PostgreSQL, Redis
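Much of the scripting work above comes down to small, reliable automation primitives. As a minimal illustrative sketch (the function and probe names here are invented for the example, not tied to any specific tooling), a retry-with-exponential-backoff wrapper for a flaky health probe might look like:

```python
import time

def check_with_retries(probe, attempts=3, base_delay=0.01):
    """Run a health probe, retrying with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            return probe()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, ...

# Simulate a flaky service that fails twice, then recovers.
calls = {"n": 0}
def flaky_probe():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service unavailable")
    return "ok"

status = check_with_retries(flaky_probe)
print(status, calls["n"])  # succeeds on the third attempt
```

In production the same pattern usually adds jitter and a cap on the delay so many retrying clients don't synchronize against a recovering service.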


Soft skills:

  • Strong problem-solving under pressure
  • Good communication — English written and verbal
  • Team player — collaborative mindset


Good to Have:

  • Experience in telecom/CPaaS/cloud communications industry
  • Knowledge of VoIP, SIP, RTP protocols
  • AI/ML operations experience
  • CCNA/AWS certifications


Read more
Wissen Technology

Posted by Robin Silverster
Bengaluru (Bangalore)
7 - 11 yrs
₹10L - ₹36L / yr
SRE
Reliability engineering
Google Cloud Platform (GCP)

Location - Bangalore

Skill/Experience Expectations:

1. Total experience of 7-11 years
2. 3-4 years managing a scalable production environment
3. 2-4 years managing Google Cloud infrastructure
4. Proficient in Terraform and at least one programming language
5. Expert in designing and managing observability solutions
6. 5 years of experience with DevOps and SRE practices and troubleshooting critical incidents

Read more
Mid Size Product Engineering Services Company


Agency job
via Vidpro Consultancy Services by Vidyadhar Reddy
Remote, Bengaluru (Bangalore), Chennai, Hyderabad
20 - 26 yrs
₹65L - ₹120L / yr
Vue.js
AngularJS (1.x)
Angular (2+)
React.js
JavaScript
+18 more

This role will report to the Chief Technology Officer


You Will Be Responsible For


* Driving decision-making on enterprise architecture and component-level software design to ensure the timely build and delivery of our software platforms.

* Leading a team in building a high-performing and scalable SaaS product.

* Conducting code reviews to maintain code quality and follow best practices

* Developing DevOps practices that promote automation, including asset creation, enterprise strategy definition, and team training

* Developing and building microservices leveraging cloud services

* Working on application security aspects

* Driving innovation within the engineering team, translating product roadmaps into clear development priorities, architectures, and timely release plans to drive business growth.

* Creating a culture of innovation that enables the continued growth of individuals and the company

* Working closely with Product and Business teams to build winning solutions

* Leading talent management, including hiring, developing, and retaining a world-class team


Ideal Profile


* You possess a Degree in Engineering or a related field and have 20+ years of experience as a Software Engineer, including 10+ years leading teams and at least 4 years building a SaaS / Fintech platform.

* Proficiency in MERN / Java / Full Stack.

* Led a team in optimizing the performance and scalability of a product

* You have extensive experience with DevOps environments and CI/CD practices and can train teams.

* You're a hands-on leader, visionary, and problem solver with a passion for excellence.

* You can work in fast-paced environments and communicate asynchronously with geographically distributed teams.


What's on Offer?


* Exciting opportunity to drive the Engineering efforts of a reputed organisation

* Work alongside & learn from best in class talent

* Competitive compensation + ESOPs

Read more
Searce Inc

Posted by Srishti Dani
Mumbai, Pune, Bengaluru (Bangalore)
7 - 10 yrs
Best in industry
Data migration
Data warehousing
ETL
SQL
Google Cloud Platform (GCP)
+7 more

Lead Data Engineer


What are we looking for

real solver?

Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.


Your Responsibilities

What you will wake up to solve.

  • Lead Technical Design & Data Architecture: Architect and lead the end-to-end development of scalable, cloud-native data platforms. You’ll guide the squad on critical architectural decisions—choosing between Batch vs. Streaming or ETL vs. ELT—while remaining 100% hands-on, contributing high-quality, production-grade code.
  • Build High-Velocity Data Pipelines: Drive the implementation of robust data transports and ingestion frameworks using Python, SQL, and Spark. You will build integration layers that connect heterogeneous sources (SaaS, RDBMS, NoSQL) into unified, high-availability environments like BigQuery, Snowflake, or Redshift.
  • Mentor & Elevate the Squad: Foster a culture of technical excellence by mentoring and inspiring a team of data analysts and engineers. Lead deep-dive code reviews, promote best-practice data modeling (Star/Snowflake schema), and ensure the squad adopts modern engineering standards like CI/CD for data.
  • Drive AI-Ready Data Strategy: Be the expert in designing data foundations optimized for AI and Machine Learning. You will champion the use of GCP (Dataflow, Pub/Sub, BigQuery) and AWS (Lambda, Glue, EMR) to create "clean room" environments that fuel advanced analytics and generative AI models.
  • Partner with Clients as a Technical DRI: Act as the Directly Responsible Individual for client success. Translate ambiguous business questions into elegant data services, manage project deliverables using Agile methodologies, and ensure that the data provided is accurate, consistent, and mission-critical.
  • Troubleshoot & Optimize for Scale: Own the reliability of the reporting layer. You will proactively monitor pipelines, troubleshoot complex transformation bottlenecks, and propose ways to improve platform performance and cost-efficiency.
  • Innovate and Build Reusable IP: Spearhead the creation of reusable data frameworks, custom operators, and transformation libraries that accelerate future projects and establish Searce’s unique technical advantage in the market.
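The pipeline responsibilities above follow a familiar extract-transform-load shape. As a toy sketch in plain Python (the record fields and function names are invented for illustration; a real pipeline would use Spark, Airflow operators, or a warehouse bulk loader rather than in-memory lists):

```python
# Minimal batch "extract -> transform -> load" step.

raw_events = [  # extract: pretend these came from a SaaS API or RDBMS dump
    {"user": "a@x.com", "amount": "10.50", "ts": "2024-01-01"},
    {"user": "b@x.com", "amount": "oops", "ts": "2024-01-01"},   # bad row
    {"user": "a@x.com", "amount": "4.50", "ts": "2024-01-02"},
]

def transform(rows):
    """Validate and normalize rows; route unparseable rows to a dead-letter list."""
    clean, dead_letter = [], []
    for row in rows:
        try:
            clean.append({"user": row["user"].lower(),
                          "amount": float(row["amount"]),
                          "ts": row["ts"]})
        except (KeyError, ValueError):
            dead_letter.append(row)
    return clean, dead_letter

clean, rejected = transform(raw_events)

warehouse = {}  # load: aggregate per user into a toy "reporting layer"
for row in clean:
    warehouse[row["user"]] = warehouse.get(row["user"], 0.0) + row["amount"]

print(warehouse)       # per-user revenue totals
print(len(rejected))   # rows quarantined for inspection
```

The dead-letter list is the important design choice: bad records are quarantined and counted rather than silently dropped, which is what keeps the reporting layer auditable.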


Welcome to Searce


The AI-Native tech consultancy that's rewriting the rules.

Searce is an AI-native, engineering-led, modern tech consultancy that empowers clients to futurify their business by delivering intelligent, impactful, real business outcomes. Searce solvers co-innovate with clients as their trusted transformational partners ensuring sustained competitive advantage. Searce clients realize smarter, faster, better business outcomes delivered by AI-native Searce solver squads. 


Functional Skills 

the solver personas.

  • The Data Architect: This persona deconstructs ambiguous business goals into scalable, elegant data blueprints. They don't just move data; they design the foundation—from schema design to partitioning strategies—that allows data scientists and analysts to thrive, foreseeing technical bottlenecks and making pragmatic trade-offs.
  • The Player-Coach: As a hands-on leader, this persona leads from the front by writing exemplary, production-grade SQL and Python while simultaneously mentoring and elevating the skills of the squad. Their success is measured by the team's ability to deliver high-quality, maintainable code and their growth as engineers.
  • The Pragmatic Innovator: This individual balances a passion for modern data tech (like Generative AI and Real-time Streaming) with a sharp focus on business outcomes. They champion new tools where they add real value but are disciplined enough to choose stable, cost-effective solutions to meet deadlines and deliver robust products.
  • The Client-Facing Technologist: This persona acts as the crucial technical bridge between the data squad and the client. They build trust by listening actively, explaining complex data concepts (like data latency or idempotency) in simple terms, and demonstrating how engineering decisions align with the client’s strategic goals.
  • The Quality Craftsman: This individual possesses an unwavering commitment to data integrity and treats data engineering as a craft. They are the guardian of the reporting layer, advocating for robust testing, data validation frameworks, and clean, modular code to ensure the long-term reliability of the data platform.


Experience & Relevance 

  • Engineering Depth: 7-10 years of professional experience in end-to-end data product development. You have a portfolio that proves your ability to build complex, high-velocity pipelines for both Batch and Streaming workloads.
  • Cloud-Native Fluency: Deep, hands-on experience designing and deploying scalable data solutions on at least one major cloud platform (AWS, GCP, or Azure). You are comfortable navigating the nuances of EMR, BigQuery, or Synapse at scale.
  • AI-Native Workflow: You don’t just build for AI; you build with AI. You must be proficient in using AI coding assistants (e.g., GitHub Copilot) to accelerate your delivery and have a track record of building the data foundations required for Generative AI.
  • Architectural Portfolio: Evidence of leading 2-3 large-scale transformations—including platform migrations, data lakehouse builds, or real-time analytics architectures.
  • Client-Facing Acumen: You have direct experience in a consultative, client-facing role. You can confidently translate a CEO’s business vision into a Lead Engineer’s technical specification without losing anything in translation.


Join the ‘real solvers’

ready to futurify?

If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.



Read more
Searce Inc

Posted by Jatin Gereja
Bengaluru (Bangalore), Mumbai, Pune
10 - 18 yrs
Best in industry
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Enterprise Data Warehouse (EDW)
Data modeling
Big Data
+9 more

Director - Data engineering


What are we looking for

real solver?

Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.


Your Responsibilities

what you will wake up to solve.

1. Delivery & Tactical Rigor

  • Methodology Implementation: Implement and manage a unified, 'DataOps-First' methodology for data engineering delivery (ETL/ELT pipelines, Data Modeling, MLOps, Data Governance) within assigned business units. This ensures predictable outcomes and trusted data integrity by reducing architecture variability at the project level.
  • Operational Stewardship: Drive initiatives to optimize team utilization and enhance operational efficiency within the practice. You manage the commercial success of your squads, ensuring data delivery models (from migration to modern data stack implementation) are executed profitably, scalably, and cost-effectively.
  • Execution & Technical Resolution: Serve as the primary escalation point for delivery issues, personally leading the resolution of complex data integration bottlenecks and pipeline failures to protect client timelines and data reliability standards.
  • Quality Enforcement: Execute and monitor technical data quality standards, ensuring engineering teams adhere to strict policies regarding data lineage, automated quality checks (observability), security/privacy compliance (GDPR/CCPA/PII), and active catalog management.

2. Strategic Growth & Practice Scaling

  • Talent & Scaling Execution: Execute the strategy for data engineering talent acquisition and development within your business units. Implement objective metrics to assess and grow the 'Data-Native' DNA of your teams, ensuring squads are consistently equipped to handle petabyte-scale environments and high-impact delivery.
  • Offerings Alignment: Drive the adoption of standardized regional offerings (e.g., Modern Data Platform, Data Mesh, Lakehouse Implementation). Ensure your teams leverage the profitable frameworks defined by the practice to accelerate time-to-insight and eliminate architectural fragmentation in client environments.
  • Innovation & IP Development: Lead the practical integration of Vector Databases and LLM-ready architectures into project delivery. Champion the hands-on development of IP and reusable accelerators (e.g., automated ingestion engines) that improve delivery speed and enhance data availability across your portfolio.

3. Leadership & Unit Management

  • Unit Leadership: Directly lead, mentor, and manage the Engineering Managers and Lead Architects within your business unit. Hold your teams accountable for project-level operational consistency, technical talent development, and strict adherence to the practice's data governance standards.
  • Stakeholder Communication: Clearly articulate the business unit’s operational performance, technical quality metrics, and delivery progress to the C-suite Stakeholders and regional client leadership, bridging the gap between technical execution and business value.
  • Ecosystem Alignment: Maintain strong technical relationships with key partner contacts (Snowflake, Databricks, AWS/GCP). Align team delivery capabilities with current product roadmaps and ensure squad-level participation in training, certifications, and partner-led enablement opportunities.


Welcome to Searce

The ‘process-first’, AI-native modern tech consultancy that's rewriting the rules.

We don’t do traditional.

As an engineering-led consultancy, we are dedicated to relentlessly improving real business outcomes. Our solvers co-innovate with clients to futurify operations and make processes smarter, faster & better.


Functional Skills

1. Delivery Management & Operational Excellence

  • Methodology Execution: Expert capability in implementing and enforcing a unified delivery methodology (DataOps, Agile, Mesh Principles) within specific business units. Proven track record of auditing squad-level adherence to ensure consistency across the project lifecycle.
  • Operational Performance: High proficiency in managing day-to-day operational metrics, including squad utilization, resource forecasting, and productivity tracking. Skilled at optimizing team performance to meet profitability and efficiency targets.
  • SOW & Risk Mitigation: Proven experience in operationalizing Statement of Work (SOW) requirements and identifying technical delivery risks early. Expert at mitigating scope creep and data-specific bottlenecks (e.g., latency, ingestion gaps) before they impact client outcomes.
  • Technical Escalation Leadership: Demonstrated ability to lead "war room" efforts to resolve complex pipeline failures or data integrity issues. Skilled at providing clear, rapid remediation plans and communicating technical status directly to regional stakeholders.

2. Architectural Implementation & Technical Oversight

  • Modern Stack Proficiency: Deep, hands-on expertise in implementing Cloud-Native architectures (Lakehouse, Data Mesh, MPP) on Snowflake, Databricks, or hyperscalers. Ability to conduct deep-dive architectural reviews and course-correct design decisions at the squad level to ensure scalability.
  • Operationalizing Governance: Proven experience in embedding data quality and observability (completeness, freshness, accuracy) directly into the CI/CD pipeline. Responsible for technical enforcement of regulatory compliance (GDPR/PII) and maintaining the integrity of data catalogs across active projects.
  • Applied Domain Expertise: Practical experience leading the delivery of high-growth solutions, specifically Generative AI infrastructure (RAG, Vector DBs), Real-Time Streaming, and large-scale platform migrations with a focus on zero-downtime execution.
  • DataOps & Engineering Standards: Expert-level mastery of DataOps, including the setup and management of orchestration frameworks (Airflow, Dagster) and Infrastructure as Code (IaC). You ensure that automation is a baseline requirement, not an afterthought, for all delivery teams.

3. Unit Management & Commercial Execution

  • Unit & Team Management: Proven success in leading and mentoring Engineering Managers and Lead Architects. Responsible for the operational metrics, technical output, and career development of the business unit's talent pool.
  • Offerings Implementation & Scoping: Expertise in translating service offerings (e.g., Data Maturity Assessments, Lakehouse Builds) into accurate project scopes, technical estimates, and resource plans to ensure delivery is both profitable and competitive.
  • Talent Growth & Mentorship: Functional ability to implement growth frameworks for data engineering roles. Focus on hands-on coaching and scaling high-performance technical talent to meet the demands of complex, petabyte-scale environments.
  • Partner Enablement: Functional competence in managing regional technical relationships with major partners (Snowflake, Databricks, GCP/AWS). Drives squad-level certifications, joint technical enablement, and alignment with partner product roadmaps.

Tech Superpowers

  • Modern Data Architect – Reimagines business with the Modern Data Stack (MDS) to deliver data mesh implementations, insights, & real value to clients.
  • End-to-End Ecosystem Thinker – Builds modular, reusable data products across ingestion, transformation (ETL/ELT), governance, and consumption layers.
  • Distributed Compute Savant – Crafts resilient, high-throughput architectures that survive petabyte-scale volume and data skew without breaking the bank.
  • Governance & Integrity Guardian – Embeds data quality, complete lineage, and privacy-by-design (GDPR/PII) into every table, view, and pipeline.
  • AI-Ready Orchestrator – Engineers pipelines that bridge structured data with Unstructured/Vector stores, powering RAG models and Generative AI workflows.
  • Product-Minded Strategist – Balances architectural purity with time-to-insight; treats every dataset as a measurable "Data Product" with clear ROI.
  • Pragmatic Stack Curator – Chooses the simplest tools that compound reliability; fluent in SQL, Python, Spark, dbt, and Cloud Warehouses.
  • Builder @ Heart – Writes, reviews, and optimizes queries daily; proves architectures with cost-performance benchmarks, not slideware. Business-first, data-second, outcome focused technology leader.

Experience & Relevance

  • Executive Experience: Minimum 10+ years of progressive experience in data engineering and analytics, with at least 3 years in a Senior Manager or Director -level role managing multiple technical teams and owning significant operational and efficiency metrics for a large data service line.
  • Delivery Standardization: Demonstrated success in defining and implementing globally consistent, repeatable delivery methodologies (DataOps/Agile Data Warehousing) across diverse teams.
  • Architectural Depth: Must retain deep, current expertise in Modern Data Stack architectures (Lakehouse, MPP, Mesh) and maintain the ability to personally validate high-level architectural and data pipeline design decisions.
  • Operational Leadership: Proven expertise in managing and scaling large professional services organizations, demonstrated ability to optimize utilization, resource allocation, and operational expense.
  • Domain Expertise: Strong background in Enterprise Data Platforms, Applied AI/ML, Generative AI integration, or large-scale Cloud Data Migration.
  • Communication: Exceptional executive-level presentation and negotiation skills, particularly in communicating complex operational, data quality, and governance metrics to C-level stakeholders.

Join the ‘real solvers’

ready to futurify?

If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.

Read more
Searce Inc

Posted by Tejashree Kokare
Bengaluru (Bangalore), Pune, Mumbai
6 - 15 yrs
Best in industry
Google Cloud Platform (GCP)
Data engineering
Data warehouse architecture
Data architecture
Data modeling
+6 more

Solutions Architect - Data Engineering


Modern tech solutions advisory & 'futurify' consulting as a Searce lead fds (‘forward deployed solver’) architecting scalable data platforms and robust data engineering solutions that power intelligent insights and fuel AI innovation.

If you’re a tech-savvy, consultative seller with the brain of a strategist, the heart of a builder, and the charisma of a storyteller — we’ve got a seat for you at the front of the table.

You're not a sales lead. You're the transformation driver.


What are we looking for

real solver?

Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.

  • Improver. Solver. Futurist.
  • Great sense of humor.
  • ‘Possible. It is.’ Mindset.
  • Compassionate collaborator. Bold experimenter. Tireless iterator.
  • Natural creativity that doesn’t just challenge the norm, but solves to design what’s better.
  • Thinks in systems. Solves at scale.


This Isn’t for Everyone. But if you’re the kind who questions why things are done a certain way— and then identifies 3 better ways to do it — we’d love to chat with you.


Your Responsibilities

what you will wake up to solve.


You are not just a Solutions Architect; you are a futurifier of our data universe and the primary enabler of our AI ambitions. With a deep-seated passion for data engineering, you will architect and build the foundational data infrastructure that powers the customers entire data intelligence ecosystem.

As the Directly Responsible Individual (DRI) for our enterprise-grade data platforms, you own the outcome, end-to-end. You are the definitive solver for our customer's most complex data challenges, leveraging a powerful tech stack including Snowflake, Databricks, and core GCP & AWS services (BigQuery, Spanner, Airflow, Kafka). This is a hands-on-keys role where you won't just design solutions—you'll build them, break them, and perfect them.


  • Solution Design & Pre-sales Excellence: Collaborate with cross-functional teams, including sales, engineering, and operations, to ensure successful project delivery.
  • Design Core Data Engineering: Master data modeling, architecting high-performance data ingestion pipelines and ensuring data quality and governance throughout the data lifecycle.
  • Enable Cloud & AI: Design and implement solutions utilizing core GCP data services, building foundational data platforms that efficiently support advanced analytics and AI/ML initiatives.
  • Optimize Performance & Cost: Continuously optimize data architectures and implementations for performance, efficiency, and cost-effectiveness within the cloud environment.
  • Bridge Business & Tech: Translate complex business requirements into clear technical designs, providing technical leadership and guidance to data engineering teams.
  • Stay Ahead of the Curve: Continuously research and evaluate new data technologies, architectural patterns, and industry trends to keep our data platforms at the cutting edge.


Functional Skills:


  • Enterprise Data Architecture Design: Expert ability to design holistic, scalable, and resilient data architectures for complex enterprise environments.
  • Cloud Data Platform Strategy: Proven capability to strategize, design, and implement cloud-native data platforms.
  • Pre-Sales & Technical Storyteller: Crafts compelling, client-ready proposals, architectural decks, and technical demonstrations. Doesn't just present; shapes the strategic technical narrative behind every proposed solution.
  • Advanced Data Modelling: Mastery in designing various data models for analytical, operational, and transactional use cases.
  • Data Ingestion & Pipeline Orchestration: Strong expertise in designing and optimizing robust data ingestion and transformation pipelines.
  • Stakeholder Communication: Exceptional skills in articulating complex technical concepts and architectural decisions to both technical and non-technical stakeholders.
  • Performance & Cost Optimization: Adept at optimizing data solutions for performance, efficiency, and cost within a cloud environment.


Tech Superpowers:


  • Cloud Data Mastery: You're a wizard at leveraging public cloud data services, with deep expertise in GCP (BigQuery, Spanner, etc.) and expert proficiency in modern data warehouse solutions like Snowflake.
  • Data Engineering Core: Highly skilled in designing, implementing, and managing data workflows using tools like Apache Airflow and Apache Kafka. You're also an authority on advanced data modeling and ETL/ELT patterns.
  • AI/ML Data Foundation: You instinctively design data pipelines and structures that efficiently feed and empower Machine Learning and Artificial Intelligence applications.
  • Programming for Data: You have a strong command over key programming languages (Python, SQL) for scripting, automation, and building data processing applications.
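As a toy illustration of the data-modeling skills above, the classic analytical pattern (join a fact table to a dimension, aggregate by a dimension attribute) can be sketched with Python's built-in sqlite3. The table and column names are invented for the example; on a real platform the same query shape would run in BigQuery or Snowflake:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A tiny star schema: one fact table keyed to one dimension table.
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT)")
cur.execute("CREATE TABLE fact_sales (product_id INTEGER, revenue REAL)")
cur.executemany("INSERT INTO dim_product VALUES (?, ?)",
                [(1, "books"), (2, "toys")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?)",
                [(1, 10.0), (1, 5.0), (2, 7.5)])

# Join facts to the dimension, then aggregate revenue per category.
cur.execute("""
    SELECT d.category, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_product d ON d.product_id = f.product_id
    GROUP BY d.category
    ORDER BY d.category
""")
rows = cur.fetchall()
print(rows)
```

Keeping descriptive attributes in dimensions and measures in facts is what lets a warehouse serve many such aggregations from one model.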


Experience & Relevance:


  • Architectural Leadership (8+ Years): You bring extensive experience (8+ years) specifically in a Solutions Architect role, focused on data engineering and platform building.
  • Cloud Data Expertise: You have a proven track record of designing and implementing production-grade data solutions leveraging major public cloud platforms, with significant experience in Google Cloud Platform (GCP).
  • Data Warehousing & Data Platform: Demonstrated hands-on experience in the end-to-end design, implementation, and optimization of modern data warehouses and comprehensive data platforms.
  • Databricks & BigQuery Mastery: You possess significant practical experience with Databricks as a core data warehouse and GCP BigQuery for analytical workloads.
  • Data Ingestion & Orchestration: Proven experience designing and implementing complex data ingestion pipelines and workflow orchestration using tools like Airflow and real-time streaming technologies like Kafka.
  • AI/ML Data Enablement: Experience in building data foundations specifically geared towards supporting Machine Learning and Artificial Intelligence initiatives.


Join the ‘real solvers’

ready to futurify?

If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’.


Don’t Just Send a Resume. Send a Statement.


So, if you are passionate about tech, the future & what you read above (we really are!), apply here to experience the ‘Art of the possible’

Read more
TalentXO
Posted by tabbasum shaikh
Bengaluru (Bangalore)
12 - 20 yrs
₹35L - ₹40L / yr
Java
Docker
Microservices
CI/CD
Data modeling
+9 more

Responsibilities

  • Technical Leadership & Architecture: Serve as a technical authority, define cloud-native standards, and own end-to-end system design for complex, distributed, and high-scale solutions.
  • Engineering Excellence: Lead modernization initiatives, transition legacy systems to microservices, and ensure performance, scalability, and security.
  • Platform & Cloud Enablement: Champion GCP-native services (GKE, Cloud Run, Pub/Sub, BigQuery, Cloud SQL, Spanner), influence CI/CD, and drive FinOps strategies.
  • Mentorship & Organizational Impact: Mentor Lead and Senior Engineers, foster engineering rigor, and collaborate with Product Management to translate business strategy into technical solutions.

Required Skills, Experience & Background

  • 12+ years of professional experience building enterprise-grade software systems.
  • Proven experience operating at a Lead or Principal Engineer level.
  • Strong hands-on experience with Java or another major programming language (e.g., Go, Python, C#).
  • Deep expertise in microservices architecture, RESTful APIs, and distributed systems.
  • Strong experience with SQL and NoSQL databases and data modeling.
  • Expertise in cloud-native design principles (stateless services, scalability, resiliency).
  • Strong experience building and operating systems on Google Cloud Platform (GCP).
  • Proficiency with containers and orchestration (Docker, Kubernetes, GKE).
  • Experience with CI/CD pipelines, Git-based workflows, and observability.
  • Experience with Big Data and data engineering technologies (e.g., BigQuery, Spark).
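One of the cloud-native design principles listed above, resiliency, is often implemented as retry with exponential backoff and full jitter when calling downstream microservices. A hedged sketch of that pattern (the base, cap, and attempt values are illustrative defaults, not from any spec):

```python
import random

def backoff_delays(base=0.5, cap=30.0, attempts=5, seed=None):
    """Exponential backoff with full jitter: each delay is drawn
    uniformly from [0, min(cap, base * 2**n)] for attempt n.

    Jitter spreads out retries from many clients so a recovering
    downstream service is not hit by a synchronized thundering herd.
    """
    rng = random.Random(seed)
    delays = []
    for n in range(attempts):
        delays.append(rng.uniform(0, min(cap, base * 2 ** n)))
    return delays

print(backoff_delays(seed=42))
```

In a real service each delay would be slept between retries of the failing call, with a circuit breaker cutting off retries entirely after repeated failures.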

Qualifications

  • Bachelor’s degree in computer science or related field (required).
  • Master’s degree in computer science or related field (preferred).


Read more
Noida
10 - 16 yrs
₹25L - ₹40L / yr
Java
Microservices
Multithreading
Hibernate (Java)
Java Servlets
+3 more

About Us


CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building Products and Solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.


Our Values


We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.


Equal Opportunity Statement


CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.


What we are looking for:


Experience: 10+ years

Education: BTech / BE / ME /MTech/ MCA / MSc Computer Science

Industry: Product Engineering Services or Enterprise Software Companies


Job Responsibilities:


  • Handle sprint development tasks and code reviews; define detailed tasks for the connector based on design and timelines; ensure documentation maturity; perform release reviews and sanity checks; write design specifications and user stories for the functionalities assigned.
  • Develop assigned components / classes and assist QA team in writing the test cases
  • Create and maintain coding best practices and do peer code / solution reviews
  • Participate in Daily Scrum calls, Scrum Planning, Retro and Demos meetings
  • Bring out technical/design/architectural challenges/risks during execution, develop action plan for mitigation and aversion of identified risks
  • Comply with development processes, documentation templates and tools prescribed by CLOUDSUFI and its clients
  • Work with other teams and Architects in the organization and assist them on technical Issues/Demos/POCs and proposal writing for prospective clients
  • Contribute towards the creation of knowledge repository, reusable assets/solution accelerators and IPs
  • Provide feedback to junior developers and be a coach and mentor for them
  • Provide training sessions on the latest technologies and topics to others employees in the organization
  • Participate in organization development activities from time to time - interviews, CSR/employee engagement activities, participation in business events/conferences, and implementation of new policies, systems and procedures as decided by the Management team


Certifications (Optional): OCPJP (Oracle Certified Professional Java Programmer)


Required Experience:


  • Strong programming skills in Java.
  • Hands-on experience in Core Java and Microservices.
  • Understanding of Identity Management using users, groups, and entitlements.
  • Hands-on experience developing connectivity for Identity Management using SCIM, REST, and LDAP.
  • Thorough experience with triggers, webhooks, and event receiver implementations for connectors.
  • Excellent command of the code review process and of assessing developer productivity.
  • Excellent analytical and problem-solving skills.
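The SCIM connectivity mentioned above revolves around exchanging standard User resources. A minimal, hedged sketch of building a SCIM 2.0 User payload (the core schema URN is from RFC 7643; the helper function and its parameters are invented for illustration):

```python
import json

def scim_user(user_name, given, family, emails, active=True):
    """Build a minimal SCIM 2.0 User payload (RFC 7643 core schema).

    Additional attributes such as groups or entitlements would be
    added alongside these; this sketch covers only common fields.
    """
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,
        "name": {"givenName": given, "familyName": family},
        # The first address is marked primary by convention here.
        "emails": [{"value": e, "primary": i == 0} for i, e in enumerate(emails)],
        "active": active,
    }

payload = scim_user("jdoe", "Jane", "Doe", ["jdoe@example.com"])
print(json.dumps(payload, indent=2))
```

A connector would POST this body to the identity provider's `/Users` endpoint and PATCH it on updates.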


Good to Have:


  • Experience of developing 3-4 integration adapters/connectors for enterprise applications (ERP, CRM, HCM, SCM, Billing etc.) using industry standard frameworks and methodologies following Agile/Scrum
  • Experience with IAM products.
  • Experience on Implementation of Message Brokers using JMS.
  • Experience on ETL processes



Non-Technical/ Behavioral competencies required:

  • Must have worked with US/Europe based clients in onsite/offshore delivery model
  • Should have very good verbal and written communication, technical articulation, listening and presentation skills
  • Should have proven analytical and problem-solving skills
  • Should have demonstrated effective task prioritization, time management and internal/external stakeholder management skills
  • Should be a quick learner, self-starter, go-getter and team player
  • Should have experience of working under stringent deadlines in a Matrix organization structure
  • Should have demonstrated appreciable Organizational Citizenship Behavior in past organizations


Read more
Euphoric Thought Technologies
Bengaluru (Bangalore)
6 - 8 yrs
₹12L - ₹22L / yr
Java
Python
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Agile/Scrum
+4 more

Key Responsibilities:

  • Lead and mentor a team of Java and Python developers, providing technical guidance and fostering a culture of continuous learning and improvement.
  • Oversee the design, development, and implementation of high-performance, scalable, and secure software solutions for the financial services industry.
  • Collaborate with product managers and architects to translate business requirements into technical specifications and ensure alignment with overall product strategy.
  • Drive the adoption of best practices in software development, including code reviews, testing, and continuous integration/continuous deployment (CI/CD).
  • Manage project timelines and resources effectively, ensuring on-time and within-budget delivery of projects.
  • Identify and mitigate technical risks, proactively addressing potential issues and ensuring the stability and reliability of our platforms.
  • Stay abreast of emerging technologies and trends in Java, Python, and related fields, and evaluate their potential application to our products and services.
  • Contribute to the development of technical documentation and training materials.

Required Skillset:

  • Demonstrated expertise in Java and Python development, with a strong understanding of object-oriented principles, design patterns, and data structures.
  • Proven ability to lead and mentor a team of software engineers, fostering a collaborative and high-performing environment.
  • Experience in designing and developing scalable, high-performance, and secure software solutions.
  • Strong understanding of software development methodologies, including Agile and Waterfall.
  • Excellent communication, interpersonal, and problem-solving skills.
  • Ability to work effectively in a fast-paced, dynamic environment.
  • Bachelor's or Master's degree in Computer Science or a related field.
  • Experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
  • Experience with cloud platforms (e.g., AWS, Azure, GCP) is a plus.
Read more
Facile Technolab Pvt Ltd

Posted by Palak Dalal
B, True Value, Westgate, 1201, Sarkhej - Gandhinagar Hwy, near YMCA Club, Vejalpur, Ahmedabad, Gujarat 380015, Ahmedabad
4 - 6 yrs
₹4L - ₹12L / yr
Production
Infrastructure
Kubernetes
Google Cloud Platform (GCP)
SaaS
+2 more

Job Title:

DevOps / MLOps Engineer – GCP, Kubernetes, GPU Workloads, Python


Job Description

We are looking for an MLOps Engineer with 4–7 years of experience in cloud infrastructure, web application development, and ML pipeline deployment. The ideal candidate will manage GPU workloads on Kubernetes (GKE/EKS), optimize ML infrastructure, and handle scalable web applications in cloud environments.


Responsibilities:

  • Develop and maintain web apps using Python, TypeScript, and Node.js.
  • Manage DevOps pipelines and production infrastructure on GCP, including Terraform, Kubernetes, and cloud storage.
  • Deploy, scale, and monitor ML models (vLLM, Triton, TensorRT-LLM) on multi-GPU instances.
  • Optimize GPU resources, container GPU passthrough, autoscaling, and cost efficiency.
  • Write and run tests for application and infrastructure code.
  • Collaborate effectively using online communication tools (Slack, Teams, Zoom).
  • Utilize AI coding copilots and productivity tools to accelerate development.

Requirements:

  • 4–7 years of experience in cloud infrastructure and DevOps/MLOps roles.
  • Strong skills in Python, TypeScript, Node.js, and Kubernetes (GKE/EKS).
  • Experience with GPU workloads, Terraform, SaaS environments, and Linux administration.
  • Knowledge of ML model deployment, CUDA/NCCL debugging, and GPU optimization.
  • Conversational English (written, verbal, listening).


Required - CI/CD, Node.js, TypeScript, CUDA, NCCL, GPU optimization, ML model deployment, Infrastructure as Code, Cloud infrastructure, and AI coding copilots.


Job Types: Full-time, Permanent

Read more
Quanteon Solutions

Posted by DurgaPrasad Sannamuri
Hyderabad
6 - 10 yrs
₹10L - ₹30L / yr
React.js
NodeJS (Node.js)
React Native
Angular (2+)
SQL
+14 more

Key Requirements / Skills

  • 6+ years of overall experience in software development with strong expertise in building scalable web applications.
  • 2+ years of experience as a Technical Lead, managing development teams and driving project delivery.
  • Strong technical decision-making ability, including architecture design, technology selection, and implementation of best practices.
  • Front-end expertise: Strong experience in React, JavaScript, TypeScript, and building responsive and user-friendly UI/UX.
  • Back-end development: Hands-on experience with Node.js, RESTful APIs, API design, and server-side architecture.
  • AI/ML knowledge: Experience in implementing AI/ML models or integrating AI-based solutions to solve business problems.
  • Cloud & DevOps exposure: Experience with AWS/Azure, understanding of CI/CD pipelines, and cloud-based deployments.
  • Code quality & best practices: Experience in code reviews, Git version control, and ensuring maintainable and secure code.
  • Team leadership: Ability to mentor developers, guide technical discussions, and collaborate across teams.
  • Strong communication skills to effectively interact with technical and non-technical stakeholders.
  • Experience working in high-compliance environments such as healthcare systems is a plus.


Education Qualifications:

  • B.Tech/M.Tech in CSE/IT/AI/ML from a good university
Read more
Wissen Technology

Posted by Dhruti Parikh
Bengaluru (Bangalore), Mumbai, Pune
10 - 18 yrs
₹15L - ₹35L / yr
Google Cloud Platform (GCP)
IaC (Infrastructure as Code)
IAM
GKE

Job Summary: 

We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancer, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.

You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost. 

 

Key Responsibilities: 

1. Cloud Infrastructure Design & Management 

  • Architect, deploy, and maintain GCP cloud resources via Terraform or other automation. 
  • Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs. 
  • Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability. 
  • Optimize resource allocation, monitoring, and cost efficiency across GCP environments. 

2. Kubernetes & Container Orchestration 

  • Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE). 
  • Work with Helm charts for microservices deployments. 
  • Automate scaling, rolling updates, and zero-downtime deployments. 

 

3. Serverless & Compute Services 

  • Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads. 
  • Optimize containerized applications running on Cloud Run for cost efficiency and performance. 

 

4. CI/CD & DevOps Automation 

  • Design, implement, and manage CI/CD pipelines using Azure DevOps. 
  • Automate infrastructure deployment using Terraform, with Bash and PowerShell scripting
  • Integrate security and compliance checks into the DevOps workflow (DevSecOps). 

 

 

Required Skills & Qualifications: 

✔ Experience: 4+ years in Cloud Engineering, with a focus on GCP. 

✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions). 

✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm. 

✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation. 

✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources. 

✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation. 

✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards. 

Read more
Quantiphi

Posted by Nikita Sinha
Bengaluru (Bangalore), Mumbai, Trivandrum
4 - 7 yrs
Up to ₹30L / yr (varies)
Google Cloud Platform (GCP)
SQL
ETL
Datawarehousing
Data-flow analysis

We are looking for a skilled Data Engineer / Data Warehouse Engineer to design, develop, and maintain scalable data pipelines and enterprise data warehouse solutions. The role involves close collaboration with business stakeholders and BI teams to deliver high-quality data for analytics and reporting.


Key Responsibilities

  • Collaborate with business users and stakeholders to understand business processes and data requirements
  • Design and implement dimensional data models, including fact and dimension tables
  • Identify, design, and implement data transformation and cleansing logic
  • Build and maintain scalable, reliable, and high-performance ETL/ELT pipelines
  • Extract, transform, and load data from multiple source systems into the Enterprise Data Warehouse
  • Develop conceptual, logical, and physical data models, including metadata, data lineage, and technical definitions
  • Design, develop, and maintain ETL workflows and mappings using appropriate data load techniques
  • Provide high-level design, research, and effort estimates for data integration initiatives
  • Provide production support for ETL processes to ensure data availability and SLA adherence
  • Analyze and resolve data pipeline and performance issues
  • Partner with BI teams to design and develop reports and dashboards while ensuring data integrity and quality
  • Translate business requirements into well-defined technical data specifications
  • Work with data from ERP, CRM, HRIS, and other transactional systems for analytics and reporting
  • Define and document BI usage through use cases, prototypes, testing, and deployment
  • Support and enhance data governance and data quality processes
  • Identify trends, patterns, anomalies, and data quality issues, and recommend improvements
  • Train and support business users, IT analysts, and developers
  • Lead and collaborate with teams spread across multiple locations
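The dimensional modeling work above often involves Slowly Changing Dimension (SCD) Type 2 handling: expiring the current version of a dimension row and inserting a new one when a tracked attribute changes. A hedged, in-memory sketch of that logic (a warehouse would express this as a SQL MERGE; all names here are illustrative):

```python
def scd2_upsert(dimension, incoming, key, tracked, load_date):
    """SCD Type 2: expire the current row and append a new version
    when any tracked attribute changes; otherwise leave it alone.

    `dimension` is a list of dicts with 'current', 'valid_from',
    and 'valid_to' bookkeeping columns.
    """
    current = next((r for r in dimension
                    if r[key] == incoming[key] and r["current"]), None)
    if current and all(current[c] == incoming[c] for c in tracked):
        return dimension  # no change in tracked attributes
    if current:
        current["current"] = False
        current["valid_to"] = load_date
    row = dict(incoming, current=True, valid_from=load_date, valid_to=None)
    dimension.append(row)
    return dimension

dim = []
scd2_upsert(dim, {"cust_id": 1, "city": "Pune"}, "cust_id", ["city"], "2024-01-01")
scd2_upsert(dim, {"cust_id": 1, "city": "Mumbai"}, "cust_id", ["city"], "2024-02-01")
print(len(dim))  # → 2
```

The expired row keeps its full history (valid_from/valid_to), which is what lets fact tables join to the dimension version that was current at transaction time.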

Required Skills & Qualifications

  • Bachelor’s degree in Computer Science or a related field, or equivalent work experience
  • 3+ years of experience in Data Warehousing, Data Engineering, or Data Integration
  • Strong expertise in data warehousing concepts, tools, and best practices
  • Excellent SQL skills
  • Strong knowledge of relational databases such as SQL Server, PostgreSQL, and MySQL
  • Hands-on experience with Google Cloud Platform (GCP) services, including:
  1. BigQuery
  2. Cloud SQL
  3. Cloud Composer (Airflow)
  4. Dataflow
  5. Dataproc
  6. Cloud Functions
  7. Google Cloud Storage (GCS)
  • Experience with Informatica PowerExchange for Mainframe, Salesforce, and modern data sources
  • Strong experience integrating data using APIs, XML, JSON, and similar formats
  • In-depth understanding of OLAP, ETL frameworks, Data Warehousing, and Data Lakes
  • Solid understanding of SDLC, Agile, and Scrum methodologies
  • Strong problem-solving, multitasking, and organizational skills
  • Experience handling large-scale datasets and database design
  • Strong verbal and written communication skills
  • Experience leading teams across multiple locations

Good to Have

  • Experience with SSRS and SSIS
  • Exposure to AWS and/or Azure cloud platforms
  • Experience working with enterprise BI and analytics tools

Why Join Us

  • Opportunity to work on large-scale, enterprise data platforms
  • Exposure to modern cloud-native data engineering technologies
  • Collaborative environment with strong stakeholder interaction
  • Career growth and leadership opportunities
Read more
Peliqan

Posted by Bharath Kumar
Bengaluru (Bangalore)
3 - 5 yrs
₹10L - ₹20L / yr
Python
Kubernetes
Helm
Docker
Amazon Web Services (AWS)
+3 more

DevOps Engineer

Location: Bangalore office


About Peliqan

Peliqan is an all-in-one data platform combining ELT/ETL pipelines, a built-in data warehouse, SQL and low-code Python transformations, reverse ETL, and AI-powered data activation. We connect 250+ data sources and serve enterprise teams, consultants, and SaaS companies. SOC 2 Type II certified and GDPR compliant.


The Role

Own and evolve the infrastructure powering Peliqan's multi-tenant data platform. You'll manage Kubernetes clusters, cloud resources, CI/CD pipelines, and monitoring — keeping everything reliable, secure, and scalable. You'll be the go-to person for infrastructure support across the engineering team.

Responsibilities


  • Manage and optimise Kubernetes clusters running production workloads — data pipelines, APIs, and customer-facing services.
  • Maintain Docker-based local development environments for the engineering team.
  • Administer cloud infrastructure on AWS and Google Cloud (compute, storage, networking, managed databases).
  • Build and maintain CI/CD pipelines for automated testing, building, and deploying across staging and production.
  • Set up and manage monitoring, alerting, and logging for platform health and incident response.
  • Manage release processes — deployments, rollbacks, and release strategies.
  • Maintain infrastructure-as-code using Helm charts.
  • Support security hardening and compliance efforts (SOC 2, GDPR).



Requirements

  • 3+ years in a DevOps, SRE, or Infrastructure Engineering role.
  • Strong hands-on experience with Kubernetes and Helm charts.
  • Deep familiarity with Docker for containerisation and local dev workflows.
  • Production experience with AWS and/or Google Cloud.
  • Proficiency in Python and Bash scripting for automation and tooling.
  • Solid grasp of DevOps principles: infrastructure-as-code, GitOps, observability, continuous delivery.
  • Experience with CI/CD platforms (GitHub Actions, GitLab CI, or similar).



Nice to Have

  • Experience supporting multi-tenant SaaS platforms or data infrastructure at scale.
  • Knowledge of PostgreSQL, MySQL, or cloud-managed database administration.
  • Exposure to security compliance frameworks (SOC 2, ISO 27001, GDPR).


Read more
Bengaluru (Bangalore)
15 - 25 yrs
₹3L - ₹20L / yr
Channel Sales
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Windows Azure
SaaS
+1 more

Job Description - Manager Sales

Minimum 15 years of experience,

Should have experience in sales of the Cloud IT SaaS product portfolio that Savex deals with,

Team management experience, leading the cloud business including teams

Sales manager - Cloud Solutions

Reporting to Sr Management

Good personality

Distribution background

Keen on Channel partners

Good database of OEMs and channel partners.

Age group - 35 to 45yrs

Male Candidate

Good communication

B2B Channel Sales

Location - Bangalore


If interested reply with cv and below details


Total exp -

Current ctc - 

Exp ctc - 

Np -

Current location - 

Qualification - 

Total exp Channel Sales -

What are the Cloud IT products you have done sales for? 

What is the annual revenue generated through sales?

Read more
Mercor
Agency job
via Halogion by BharathKumar Sampath
Remote only
3 - 7 yrs
₹10L - ₹20L / yr
SOC
Splunk
CrowdStrike Falcon
Microsoft Defender
SentinelOne
+12 more

We are hiring a SOC Investigation Specialist on behalf of high-growth technology and enterprise partners building next-generation SOC automation and AI-driven investigation systems. This role is ideal for experienced SOC analysts who can apply real-world investigative judgment to review, validate, and construct high-quality security investigations across SIEM, endpoint, cloud, and identity environments.

Responsibilities

  • Review, monitor, and evaluate SOC alerts and investigation outputs based on predefined scenarios and criteria.
  • Distinguish true positives from false positives by validating investigative evidence and alert context.
  • Perform end-to-end security investigations when required, including log analysis, entity pivoting, timeline reconstruction, and evidence correlation.
  • Assess the correctness, completeness, and quality of SOC investigations produced by automated or human workflows.
  • Apply consistent investigative judgment while recognizing that multiple valid investigation paths may exist for the same alert.
  • Make clear binary determinations (e.g., ACCEPT / PASS) while also producing detailed ground-truth investigations when required.
  • Use Splunk extensively to pivot across logs, entities, and timelines, including reading and reasoning about SPL queries.
  • Maintain clear and accurate documentation of investigative steps, assumptions, evidence, and conclusions.
  • Collaborate with program leads and other expert annotators to uphold high-quality investigation and annotation standards.
  • Mentor or support other analysts where applicable, particularly in long-term or lead annotator roles.
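The entity pivoting and timeline reconstruction described above can be sketched outside any SIEM. A minimal, hedged Python illustration of grouping raw events by an entity and ordering them chronologically (field names are invented for the example, not taken from Splunk's schema):

```python
from collections import defaultdict

def build_timeline(events, entity_key):
    """Group log events by an entity (user, host, IP) and sort each
    group chronologically - the manual equivalent of pivoting in a SIEM.

    Events are dicts with an ISO-8601 '_time' plus arbitrary fields.
    """
    timeline = defaultdict(list)
    for ev in events:
        timeline[ev[entity_key]].append(ev)
    for entity in timeline:
        # Same-format ISO-8601 strings sort correctly as text.
        timeline[entity].sort(key=lambda e: e["_time"])
    return dict(timeline)

events = [
    {"_time": "2024-05-01T10:05:00Z", "user": "alice", "action": "mfa_prompt"},
    {"_time": "2024-05-01T10:01:00Z", "user": "alice", "action": "login_fail"},
    {"_time": "2024-05-01T10:02:00Z", "user": "bob", "action": "login_ok"},
]
tl = build_timeline(events, "user")
print([e["action"] for e in tl["alice"]])  # → ['login_fail', 'mfa_prompt']
```

An analyst would then read each per-entity timeline for suspicious sequences (e.g., repeated failures followed by a success) and pivot to a different entity key, such as source IP, to correlate evidence.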

Requirements

  • 3+ years of hands-on experience as a SOC analyst in a production SOC environment (Tier 2 or above strongly preferred).
  • Strong understanding of alert triage, incident investigation workflows, and evidence-based decision-making under time constraints.
  • Mandatory hands-on experience with Splunk, including:
  1. Conducting investigations using Splunk
  2. Reading, understanding, and reasoning about SPL queries
  3. Pivoting between logs, entities, and timelines
  • Proven ability to evaluate SOC investigations and determine whether conclusions are valid, incomplete, or incorrect.
  • Strong investigative judgment and comfort making decisive evaluations.
  • Fluent English (written and spoken) with strong documentation and communication skills.

Nice to Have

  • Experience with Endpoint Detection & Response (EDR) tools such as CrowdStrike Falcon, Microsoft Defender for Endpoint, or SentinelOne.
  • Experience analyzing cloud security logs and signals:
  1. AWS (CloudTrail, GuardDuty)
  2. Azure (Activity Log, Defender for Cloud)
  3. GCP (Cloud Audit Logs)
  • Familiarity with Identity & Access Management platforms such as Okta Identity Cloud or Microsoft Entra ID (Azure AD).
  • Experience with email security tools like Proofpoint or Mimecast.
  • SOC leadership or mentoring experience.
  • Basic scripting experience (Python or similar).
  • Security certifications (optional): GCIA, GCIH, GCED, Splunk certifications, Security+, CCNA, or cloud security certifications.
Read more
EaseMyTrip.com

Posted by Zainab Siddiqui
Noida
2 - 3 yrs
₹3L - ₹5L / yr
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Python
NodeJS (Node.js)
GitHub
+5 more

Key Responsibilities:

  • ☁️ Manage cloud infrastructure and automation on AWS, Google Cloud (GCP), and Azure.
  • 🖥️ Deploy and maintain Windows Server environments, including Internet Information Services (IIS).
  • 🐧 Administer Linux servers and ensure their security and performance.
  • 🚀 Deploy .NET applications (ASP.Net, MVC, Web API, WCF, etc.) using Jenkins CI/CD pipelines.
  • 🔗 Manage source code repositories using GitLab or GitHub.
  • 📊 Monitor and troubleshoot cloud and on-premises server performance and availability.
  • 🤝 Collaborate with development teams to support application deployments and maintenance.
  • 🔒 Implement security best practices across cloud and server environments.



Required Skills:

  • ☁️ Hands-on experience with AWS, Google Cloud (GCP), and Azure cloud services.
  • 🖥️ Strong understanding of Windows Server administration and IIS.
  • 🐧 Proficiency in Linux server management.
  • 🚀 Experience in deploying .NET applications and working with Jenkins for CI/CD automation.
  • 🔗 Knowledge of version control systems such as GitLab or GitHub.
  • 🛠️ Good troubleshooting skills and ability to resolve system issues efficiently.
  • 📝 Strong documentation and communication skills.



Preferred Skills:

  • 🖥️ Experience with scripting languages (PowerShell, Bash, or Python) for automation.
  • 📦 Knowledge of containerization technologies (Docker, Kubernetes) is a plus.
  • 🔒 Understanding of networking concepts, firewalls, and security best practices.


Read more
Cloud Consulting and Engineering Firm

Agency job
via Peak Hire Solutions by Dhara Thakkar
Remote only
5 - 12 yrs
₹0.1L - ₹0.1L / yr
Artificial Intelligence (AI)
Generative AI
Amazon Web Services (AWS)
Large Language Models (LLM)
Kubernetes
+10 more

Description

The company is a fast-growing firm founded by former Google Cloud leaders, architects, and engineers. We are seeking candidates with significant experience in Google Cloud to join our team. Our engagements aim to eliminate obstacles, reduce risk, and accelerate timelines for customers transitioning to Google and seeking assistance with data and application modernization. We embed within customer teams to provide strategic guidance, facilitate technology decisions, and execute projects in a collaborative, co-development style.


As a member of our Cloud Engineering team, you will be working with fast-paced innovative companies, leveraging Cloud as the key driver of their transformation. Our clients will look to you as their trusted advisor, someone they can rely on and who will be there to help them along their Google Cloud journey. You will be expected to work a large spectrum of technology and tools including public cloud platforms, AI and LLMs, Kubernetes, data processing systems, databases, and more.


What you will do...

  • Working with our clients to understand their requirements and technical challenges. Using this input you will develop a technical design for a solution and communicate the value of your solution to the client team.
  • You will work to develop delivery estimates and an estimated project plan.
  • You will act as the lead technical member of the implementation project team. You are responsible for making the key technical decisions and keeping delivery on track. You should be able to unblock the team when things are stuck.
  • Utilize a broad range of technologies such as Kubernetes, AI, and Large Language Models (LLMs), to develop scalable and efficient cloud applications.
  • Stay abreast of industry trends and new technologies to drive continuous improvement in cloud solutions and practices.
  • Work closely with cross-functional teams to deliver end-to-end cloud solutions, from conceptualization to deployment and maintenance.
  • Engage in problem-solving and troubleshooting to address complex technical challenges in a cloud environment.


What we need...

  • 5+ years of experience working in a Software Engineering capacity
  • Excellent knowledge and experience with Python, and preferably additional languages such as Go
  • Strong critical thinking skills, and a bias towards problem solving
  • Familiarity with implementing microservice architectures
  • Fundamental skills with Kubernetes. You should be familiar with packaging and deploying your applications to k8s
  • Experience building applications that work with data, databases, and other parts of the data ecosystem is preferred
  • Familiarity with Generative AI workflows, frameworks like Langchain, and experience with Streamlit are all highly desirable, but at a minimum you should have a willingness to learn
  • Experience deploying production workloads on the public cloud - either GCP or AWS
  • Experience using CI/CD tools such as GitHub Actions, GitLab, etc
  • Able to work with new tools and technologies where you may not have prior experience
  • Comfortable with being on video in meetings internally and with clients
  • Strong English communications skills



We are a fully remote company and offer competitive compensation and benefits.

Read more
Zoop.one

Posted by Malavika Kannoth
Pune
3 - 5 yrs
₹12L - ₹15L / yr
Go Programming (Golang)
Google Cloud Platform (GCP)
Apache Kafka
Docker

We are looking for a Golang Developer to help build and scale our platform. This role involves designing high-performance distributed systems, building robust APIs and contributing to systems that handle real-time workflows at scale. You will own problems end to end, working closely with product, engineering and platform teams to build solutions that are reliable and scalable.


Responsibilities

  • Develop new and enhance existing microservices, libraries, and features that form our B2B KYC platform.
  • Create and document APIs and queue contracts to be consumed by other services.
  • Work closely with the Product and Engineering Leads to implement features following best design principles and patterns.
  • Participate in all phases of the development cycle: plan, design, implement, review, test, deploy, document, and train.
  • Help junior developers with best practices like TDD and make sure their code meets the standards.
  • Educate them continuously to improve overall team performance and work quality.

Requirements

  • 3 to 6 years of development experience, preferably in Go (Golang), along with scripting skills.
  • Bachelors/Masters in Computer Science or equivalent experience.
  • Strong understanding of Computer Science fundamentals, software design principles, algorithms & design patterns.
  • Interest and ability to quickly learn and ramp up on new languages and technologies.
  • Ability to write understandable, reliable and testable code with minimum supervision.
  • Experience with distributed, highly available systems running at large scale.
  • Experience with distributed platforms that use Kafka, Elasticsearch, Cassandra, or similar systems.
  • Experience with cloud environments (e.g., Docker, AWS, GCP, Kubernetes).
  • Experience with asynchronous programming patterns (e.g., Go routines/channels, async programming).
  • Experience with CI/CD (Continuous Integration & Delivery) and Agile work environments.
  • Ability to troubleshoot and solve issues on distributed systems.
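
The asynchronous patterns listed above (e.g., Go routines and channels) boil down to a bounded producer/consumer handoff. The role itself uses Go; the sketch below shows the same pattern in Python's asyncio purely for illustration, with `asyncio.Queue` playing the role of a channel and the job values invented for the example:

```python
import asyncio

async def worker(queue: asyncio.Queue, results: list) -> None:
    # Consume jobs until a None sentinel arrives (channel-close analogue)
    while True:
        item = await queue.get()
        if item is None:
            queue.task_done()
            return
        results.append(item * 2)  # stand-in for real work
        queue.task_done()

async def main() -> list:
    queue = asyncio.Queue(maxsize=4)   # bounded queue = backpressure
    results: list = []
    workers = [asyncio.create_task(worker(queue, results)) for _ in range(3)]
    for job in range(10):              # producer side
        await queue.put(job)
    for _ in workers:                  # one sentinel per worker
        await queue.put(None)
    await queue.join()
    await asyncio.gather(*workers)
    return sorted(results)

print(asyncio.run(main()))  # → [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

The bounded `maxsize` is what gives backpressure: a fast producer blocks on `put` until a worker drains the queue, much like an unbuffered or small-capacity Go channel.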

Why join Zoop

  • Work on high-scale, real-time systems that power critical identity and onboarding workflows for fast-growing businesses
  • Take end-to-end ownership of meaningful engineering problems while building clean, scalable, production-grade systems
  • Grow in a strong engineering environment with exposure to distributed systems, cloud-native technologies, and modern architectures
Virtana


Krutika Devadiga
Posted by Krutika Devadiga
Pune
5 - 10 yrs
Best in industry
Go Programming (Golang)
Kubernetes
Docker
Java
Amazon Web Services (AWS)

Senior Software Engineer 

Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.  

We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms, and/or Performance Management Software and Solutions, with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp, and public cloud platforms like Google Cloud and AWS, to expand the depth and breadth of Virtana Products. 


Work Location: Pune/ Chennai


Job Type: Hybrid

 

Role Responsibilities: 

  • The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform 
  • Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform. 
  • Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.  
  • Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation 
  • Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution 
  • Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery 

 

Required Qualifications:    

  • Minimum of 7+ years of progressive experience with back-end development in a Client Server Application development environment focused on Systems Management, Systems Monitoring and Performance Management Software. 
  • Deep experience in public cloud environment using Kubernetes and other distributed managed services like Kafka etc (Google Cloud and/or AWS) 
  • Experience with CI/CD and cloud-based software development and delivery 
  • Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM. 
  • Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java. Deep experience with at least one of these languages is required. 
  • Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent 
  • Highly effective verbal and written communication skills and ability to lead and participate in multiple projects 
  • Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities 
  • Must be results-focused, team-oriented and with a strong work ethic 

 

Desired Qualifications: 

  • Prior experience with other virtualization platforms like OpenShift is a plus 
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus 
  • Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills 
  • Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus 

  

About Virtana:  Virtana delivers the industry’s only broadest and deepest Observability Platform that allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more. 

  

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade. 

  

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success. 

Mumbai
3 - 6 yrs
₹7L - ₹15L / yr
Docker
Kubernetes
DevOps
Google Cloud Platform (GCP)



Job Overview:


We are seeking an experienced DevOps Engineer to join our team. The successful candidate will be responsible for designing, implementing, and maintaining the infrastructure and software systems required to support our development and production environments. The ideal candidate should have a strong background in Linux, GitHub Actions/Jenkins, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.


Kindly apply at https://wohlig.keka.com/careers/jobdetails/54566


Responsibilities:


• Design, implement and maintain CI/CD pipelines using GitHub Actions/Jenkins, Kubernetes, Helm, and ArgoCD.

• Deploy and manage Kubernetes clusters using AWS.

• Configure and maintain Envoy Proxy and Cert-Manager to automate deployment and manage application environments.

• Monitor system performance using Datadog, ELK, and Cloudflare tools.

• Automate infrastructure management and maintenance tasks using Terraform, Ansible, or similar tools.

• Collaborate with development teams to design, implement and test infrastructure changes.

• Troubleshoot and resolve infrastructure issues as they arise.

• Participate in on-call rotation and provide support for production issues.


Qualifications:

• Bachelor's or Master's degree in Computer Science, Engineering or a related field.

• 3+ years of experience in DevOps engineering with a focus on Linux, GitHub Actions/CodeFresh, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.

• Strong understanding of Linux administration and shell scripting.

• Experience with automation tools such as Terraform, Ansible, or similar.

• Ability to write infrastructure as code using tools such as Terraform, Ansible, or similar.

• Experience with container orchestration platforms such as Kubernetes.

• Familiarity with container technologies such as Docker.

• Experience with cloud providers such as AWS.

• Experience with monitoring tools such as Datadog and ELK.



Skills:

• Strong analytical and problem-solving skills.

• Excellent communication and collaboration skills.

• Ability to work independently or in a team environment.

• Strong attention to detail.

• Ability to learn and apply new technologies quickly.

• Ability to work in a fast-paced and dynamic environment.

• Strong understanding of DevOps principles and methodologies.

AI Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
6 - 10 yrs
₹32L - ₹42L / yr
ETL
SQL
Google Cloud Platform (GCP)
Data engineering
ELT

Role & Responsibilities:

We are looking for a strong Data Engineer to join our growing team. The ideal candidate brings solid ETL fundamentals, hands-on pipeline experience, and cloud platform proficiency — with a preference for GCP / BigQuery expertise.


Responsibilities:

  • Design, build, and maintain scalable data pipelines and ETL/ELT workflows
  • Work with Dataform or DBT to implement transformation logic and data models
  • Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure
  • Support data migration initiatives and data mesh architecture patterns
  • Collaborate with analysts, scientists, and business stakeholders to deliver reliable data products
  • Apply data governance and quality best practices across the data lifecycle
  • Troubleshoot pipeline issues and drive proactive monitoring and resolution
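
The ETL/ELT workflows described above follow an extract → transform → load shape. A minimal sketch of that shape in Python, with a hypothetical orders feed and an in-memory SQLite table standing in for a warehouse sink like BigQuery (all data, field names, and helpers are invented for illustration):

```python
import sqlite3

def extract():
    # Hypothetical raw feed (e.g., an API export); values are made up
    return [
        {"order_id": 1, "amount": "120.50", "country": "in"},
        {"order_id": 2, "amount": "80.00", "country": "us"},
        {"order_id": 2, "amount": "80.00", "country": "us"},  # duplicate event
    ]

def transform(rows):
    # Deduplicate on the business key and normalise types/casing
    seen, out = set(), []
    for r in rows:
        if r["order_id"] in seen:
            continue
        seen.add(r["order_id"])
        out.append((r["order_id"], float(r["amount"]), r["country"].upper()))
    return out

def load(rows, conn):
    conn.execute(
        "CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount REAL, country TEXT)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")  # stand-in for the warehouse
load(transform(extract()), conn)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone())  # → (2, 200.5)
```

Tools like dbt or Dataform express the transform step as SQL models instead, but the dedupe/normalise/load responsibilities are the same.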


Ideal Candidate:

  • Strong Data Engineer Profile
  • Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.
  • Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.
  • Must have hands-on experience with any one of the following: Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.
  • Must have working experience on any cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, ADLS).
  • Must have strong SQL skills with experience in writing complex queries and optimizing performance.
  • Must have programming experience in Python and/or SQL for data processing.
  • Must have experience in building and maintaining scalable data pipelines and troubleshooting data issues.
  • Exposure to data migration projects and/or data mesh architecture concepts.
  • Experience with Spark / PySpark or large-scale data processing frameworks.
  • Experience working in product-based companies or data-driven environments.
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.


NOTE:

  • An interview drive is scheduled for 28th and 29th March 2026; shortlisted candidates will be expected to be available on these interview dates. Only immediate joiners will be considered.
Mango Sciences
Remote only
7 - 12 yrs
₹20L - ₹40L / yr
Python
SQL
ETL
Data pipeline
Data warehousing

The Mission: We are looking for a visionary Technical Leader to own our healthcare data ecosystem from the first byte to the final dashboard. You won't just be managing a platform; you’ll be the primary architect of a clinical data engine that powers life-changing analytics. If you are an expert in SQL and Python who thrives on solving the "puzzle" of healthcare interoperability (FHIR/HL7) while mentoring a high-performing team, this is your seat at the table.

What You’ll Own

  • Architectural Sovereignty: Define the end-to-end blueprint for our data warehouse (staging, marts, and semantic layers). You choose the frameworks, set the coding standards, and decide how we handle complex dimensional modeling and SCDs.
  • Engineering Excellence: Lead by example. You’ll write production-grade Python for ingestion frameworks and craft advanced, set-based SQL transformations that others use as gold-standard references.
  • The Interoperability Bridge: Turn the chaos of EHR exports, REST APIs, and claims data into clean, FHIR-aligned governed datasets. You ensure our data speaks the language of modern healthcare.
  • Technical Mentorship: Act as the "Engineer’s Engineer." You’ll run design reviews, champion CI/CD best practices, and build the runbooks that keep our small but mighty team efficient.
  • Security by Design: Direct the implementation of HIPAA-compliant data flows, ensuring encryption, auditability, and access controls are baked into the architecture, not bolted on.

The Stack You’ll Command

  • Languages: Expert-level SQL (CTE, Window Functions, Tuning) and Production Python.
  • Databases: Deep polyglot experience across MSSQL, PostgreSQL, Oracle, and NoSQL (MongoDB/Elasticsearch).
  • Orchestration: Advanced Apache Airflow (SLAs, retries, and complex DAGs).
  • Ecosystem: GitHub for CI/CD, Tableau/PowerBI for semantic layers, and Unix/Linux for shell scripting.

Who You Are

  • Experienced: You have 8–12+ years in data engineering, with a significant portion spent in a Lead or Architect capacity.
  • Healthcare-Fluent: You understand the stakes of PHI. You’ve worked with FHIR/HL7 and know how to map clinical resources to analytical models.
  • Performance-Obsessed: You don’t just make it work; you make it fast. You’re the person who uses EXPLAIN/ANALYZE to shave minutes off a query.
  • Culture-Builder: You believe in documentation, observability (lineage/freshness), and "leaving the campground cleaner than you found it."

Bonus Points for:

  • Privacy Pro: Experience with PII/PHI de-identification and privacy-by-design.
  • Cloud Native: Deep familiarity with Azure, AWS, or GCP security and data services.
  • Search Experts: Experience with near-real-time indexing via Elasticsearch.

To be considered for the next stage, please fill out the Google Form below with your updated resume.

 

Pre-screen Question: https://forms.gle/q3CzfdSiWoXTCEZJ7

 

Details: https://forms.gle/FGgkmQvLnS8tJqo5A

NonStop io Technologies Pvt Ltd
Kalyani Wadnere
Posted by Kalyani Wadnere
Pune
4 - 7 yrs
Best in industry
DevOps
Amazon Web Services (AWS)
Terraform
Windows Azure
Google Cloud Platform (GCP)

About NonStop io Technologies:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

We are looking for a skilled and proactive DevOps Engineer to join our growing engineering team. The ideal candidate will have hands-on experience in building, automating, and managing scalable infrastructure and CI/CD pipelines. You will work closely with development, QA, and product teams to ensure reliable deployments, performance, and system security.


Roles and Responsibilities:

● Design, implement, and manage CI/CD pipelines for multiple environments

● Automate infrastructure provisioning using Infrastructure as Code tools

● Manage and optimize cloud infrastructure on AWS, Azure, or GCP

● Monitor system performance, availability, and security

● Implement logging, monitoring, and alerting solutions

● Collaborate with development teams to streamline release processes

● Troubleshoot production issues and ensure high availability

● Implement containerization and orchestration solutions such as Docker and Kubernetes

● Enforce DevOps best practices across the engineering lifecycle

● Ensure security compliance and data protection standards are maintained


Requirements:

● 4 to 7 years of experience in DevOps or Site Reliability Engineering

● Strong experience with cloud platforms such as AWS, Azure, or GCP - Relevant Certifications will be a great advantage

● Hands-on experience with CI/CD tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps

● Experience working in microservices architecture

● Exposure to DevSecOps practices

● Experience in cost optimization and performance tuning in cloud environments

● Experience with Infrastructure as Code tools such as Terraform, CloudFormation, or ARM

● Strong knowledge of containerization using Docker

● Experience with Kubernetes in production environments

● Good understanding of Linux systems and shell scripting

● Experience with monitoring tools such as Prometheus, Grafana, ELK, or Datadog

● Strong troubleshooting and debugging skills

● Understanding of networking concepts and security best practices


Why Join Us?

● Opportunity to work on a cutting-edge healthcare product

● A collaborative and learning-driven environment

● Exposure to AI and software engineering innovations

● Excellent work ethic and culture


If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

Recruiting Bond


Pavan Kumar
Posted by Pavan Kumar
Bengaluru (Bangalore), Mumbai
10 - 16 yrs
₹75L - ₹130L / yr
Distributed Systems
Microservices
Enterprise architecture
System Design & Architecture
Event-Driven Architecture

🚨 We’re Building a “Top 1% Engineering Org”


We’re building a high-talent-density, AI-first R&D organization from scratch — inside a publicly listed company undergoing a full-scale transformation.

Think:

→ Rewriting legacy systems into AI-native architectures

→ Embedding LLMs + Agentic AI into core workflows

→ Reimagining platforms, infra, and data systems for the next decade

This is the kind of shift you’d expect from Google, Microsoft, or Meta —

Except you get to build it from day 0 → scale it globally.


About the Role / Team

We are building a next-generation AI-first R&D organization in Bengaluru, focused on solving complex problems across LLMs, Agentic AI systems, distributed computing, and enterprise-scale architectures.


This initiative is part of a publicly listed global company investing heavily in AI-driven transformation, re-architecting its platforms into intelligent, autonomous systems powered by large language models, workflows, and decision engines.


You will be working on:

  • Agentic AI systems & LLM-powered workflows
  • Distributed, scalable backend systems
  • Enterprise-grade AI platforms
  • Automation-first engineering environments


🚀 The Mandate

Own and evolve the technical backbone of an AI-first enterprise platform.


You will define architecture across LLM-powered systems, distributed services, and data platforms — and lead critical transformations from legacy → AI-native systems.


🧩 What You’ll Do

  • Architect large-scale distributed systems powering AI-driven workflows
  • Lead 0→1 and 1→N platform builds (LLM integrations, agentic systems, orchestration layers)
  • Redesign legacy systems into scalable, modular, AI-native architectures
  • Drive system design excellence across teams (APIs, infra, observability, reliability)
  • Make high-stakes decisions on trade-offs (latency, cost, scalability, model performance)
  • Mentor senior engineers and influence engineering culture/org standards
  • Partner with product, data, and leadership on long-term technical strategy


🧠 What We’re Looking For

  • Proven track record building high-scale backend or platform systems
  • Deep expertise in distributed systems, microservices, cloud (AWS/GCP/Azure)
  • Strong exposure to data systems, infrastructure, and real-time architectures
  • Experience or strong interest in LLMs, GenAI, or AI system design
  • Exceptional system design, abstraction, and problem-solving ability
  • High ownership mindset — you think in terms of systems, not tickets
  • Strong coding skills in Python / Java / Go / Node.js
  • Solid understanding of data structures, system design basics, and backend architecture
  • Experience building scalable APIs and services
  • Familiarity or curiosity around AI/LLMs, async systems, or event-driven design
  • Strong debugging, problem-solving, and ownership mindset
  • Ability to solve hard system problems (latency, scale, reliability)
  • Ability to drive cross-team technical decisions and standards
  • Mentorship and technical leadership: mentor senior engineers and influence org-wide architecture
  • Ability to design large-scale distributed systems and backend platforms
  • Expertise in system design, scalability, and performance optimization


Nice to Have

  • Experience integrating LLMs, vector databases, or AI pipelines
  • Contributions to architecture at scale
  • Experience with Agentic AI / LLM orchestration frameworks
  • Background in product engineering or platform companies
  • Exposure to global-scale systems (millions of users / high throughput)


🔥 What Sets You Apart

  • Built platforms used by millions of users / high-throughput systems
  • Experience with event-driven systems, stream processing, or infra platforms
  • Prior work on AI/ML platforms, model serving, or intelligent systems
SAAS Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 7 yrs
₹20L - ₹25L / yr
TypeScript
NodeJS (Node.js)
JavaScript
MongoDB
RESTful APIs

Job Details

Job Title: Full Stack Engineer

Industry: SAAS

Function – Information Technology

Experience Required: 5-7 years

Working Days: 6 days

Employment Type: Full Time

Job Location: Bangalore

CTC Range: Best in Industry

 

Preferred Skills: TypeScript, NodeJS, mongodb, RESTful APIs, React.js

 

Criteria

Candidate should have at least 4 years of professional experience as a Full Stack Engineer

Hands-on experience with both React.js and Node.js

Solid understanding of MongoDB

Should have experience in RESTful APIs

Should be from a startup or scale-up company

Should have good experience in Typescript

Strong understanding of asynchronous programming patterns

Preferred candidates from SAAS/Software/IT Services based startups or scaleup companies

 

Job Description

The Role:

We’re looking for a Full Stack Engineer to build, scale, and maintain high-performance web applications for the company’s technology platforms. This role involves working across the stack (frontend, backend, and infrastructure) using modern JavaScript-based technologies.

You’ll collaborate closely with product managers, designers, and cross-functional engineering teams to deliver scalable, secure, and user-centric solutions. This role is ideal for someone who enjoys end-to-end ownership, technical problem-solving, and working in a fast-paced startup environment.

 

What You’ll Own

1. Full Stack Development

● Design, develop, test, and deploy robust and scalable web applications.

● Build and maintain server-side logic and microservices using Node.js, Express.js, and TypeScript.

● Contribute to frontend feature development and integration.

● Participate in feature planning, estimation, and execution.

 

2. Backend & API Engineering

● Design and develop RESTful APIs and backend services.

● Implement asynchronous workflows and scalable microservice architectures.

● Ensure performance, reliability, and security of backend systems.

● Implement authentication, authorization, and data protection best practices.

 

3. Database Design & Optimization

● Design and manage MongoDB schemas using Mongoose.

● Optimize queries and database performance for scale.

● Ensure data integrity and efficient data access patterns.

 

4. Frontend Collaboration & Integration

● Collaborate with frontend developers to integrate React components and APIs seamlessly.

● Ensure responsive, high-performing application behavior.

 

5. System Design & Scalability

● Contribute to system architecture and technical design discussions.

● Design scalable, maintainable, and future-ready solutions.

● Optimize applications for speed and scalability.

 

6. Product & Cross-Functional Collaboration

● Work closely with product and design teams to deliver high-quality features in rapid iterations.

● Participate in the full development lifecycle—from concept to deployment and maintenance.

 

7. Code Quality & Best Practices

● Write clean, testable, and maintainable code.

● Follow Git-based version control and code review best practices.

● Contribute to improving engineering standards and workflows.

 

What We’re Looking For

Must-Haves

● 4+ years of professional experience as a Full Stack Engineer or similar role.

● Strong proficiency in JavaScript and TypeScript.

● Hands-on experience with Node.js and Express.js.

● Solid understanding of MongoDB and Mongoose.

● Experience building and consuming RESTful APIs and microservices.

● Strong understanding of asynchronous programming patterns.

● Good grasp of system design principles and application architecture.

● Experience with Git and version control best practices.

● Bachelor’s degree in Computer Science, Engineering, or a related field.

 

Good-to-Have / Preferred

● Frontend development experience with React.js.

● Exposure to Three.js or similar 3D/visualization libraries.

● Experience with cloud platforms (AWS, GCP, Azure – EC2, S3, Lambda).

● Knowledge of Docker and containerization workflows.

● Experience with testing frameworks (Jest, Mocha, etc.).

● Familiarity with CI/CD pipelines and automated deployments.

 

Tools You’ll Use

● Backend: Node.js, Express.js, TypeScript

● Frontend: React.js (preferred)

● Database: MongoDB, Mongoose

● Version Control: Git, GitHub / GitLab

● Cloud & DevOps: AWS / GCP / Azure, Docker

● Collaboration: Google Workspace, Notion, Slack

 

Key Metrics You’ll Own

● Code quality, performance, and scalability

● Timely delivery of features and releases

● System reliability and reduction in production issues

● Contribution to architectural improvements

 

Why company

● Work on impactful, product-driven tech platforms.

● High-ownership role with end-to-end engineering exposure.

● Opportunity to work with modern technologies and evolving architectures.

● Collaborative startup culture with strong learning and growth opportunities.

 

SAAS Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹25L / yr
Amazon Web Services (AWS)
NodeJS (Node.js)
RESTful APIs
NOSQL Databases
Systems design

Job Details

Job Title: Senior Backend Engineer

Industry: SAAS

Function – Information Technology

Experience Required: 5-8 years

Working Days: 6 days a week (5 days in office, Saturdays WFH)

Employment Type: Full Time

Job Location: Bangalore

CTC Range: Best in Industry

 

Preferred Skills: AWS, NodeJS, RESTful APIs, NoSQL

 

Criteria

· Minimum 5+ years in backend engineering with strong system design expertise

· Experience building scalable systems from scratch

· Expert-level proficiency in Node.js

· Deep understanding of distributed systems

· Strong NoSQL design skills

· Hands-on AWS cloud experience

· Proven leadership and mentoring capability

· Preferred candidates from SAAS/Software/IT Services based startups or scaleup companies

 

Job Description

The Role:

What You’ll Build:

1. System Architecture & Design

● Architect highly scalable backend systems from the ground up

● Define technology choices: frameworks, databases, queues, caching layers

● Evaluate microservices vs monoliths based on product stage

● Design REST, GraphQL, and real-time WebSocket APIs

● Build event-driven systems for asynchronous processing

● Architect multi-tenant systems with strict data isolation

● Maintain architectural documentation and technical specs

2. Core Backend Services

● Build high-performance APIs for 3D content, XR experiences, analytics, and user interactions

● Create 3D asset processing pipelines for uploads, conversions, and optimization

● Develop distributed job workers for CPU/GPU-intensive tasks

● Build authentication/authorization systems (RBAC)

● Implement billing, subscription, and usage metering

● Build secure webhook systems and third-party integration APIs

● Create real-time collaboration features via WebSockets/SSE

3. Data Architecture & Databases

● Design scalable schemas for 3D metadata, XR sessions, and analytics

● Model complex product catalogs with variants and hierarchies

● Implement Redis-based caching strategies

● Build search and indexing systems (Elasticsearch/Algolia)

● Architect ETL pipelines and data warehouses

● Implement sharding, partitioning, and replication strategies

● Design backup, restore, and disaster recovery workflows
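
The Redis-based caching strategies above usually follow the cache-aside pattern: check the cache, fall back to the source of truth on a miss, then populate the cache with a TTL. A minimal sketch, with a plain dict standing in for Redis and a hypothetical product lookup (all names are illustrative):

```python
import time

cache = {}          # stand-in for Redis: key -> (expiry, value)
TTL_SECONDS = 60.0

def fetch_product_from_db(product_id):
    # Hypothetical slow source-of-truth lookup
    return {"id": product_id, "name": f"product-{product_id}"}

def get_product(product_id):
    # Cache-aside: read-through on miss, then populate with a TTL
    entry = cache.get(product_id)
    if entry is not None and entry[0] > time.monotonic():
        return entry[1]
    value = fetch_product_from_db(product_id)
    cache[product_id] = (time.monotonic() + TTL_SECONDS, value)
    return value

print(get_product(7)["name"])  # → product-7 (subsequent calls hit the cache)
```

With real Redis the expiry would be delegated to `SETEX`-style TTLs, and invalidation on writes becomes the harder part of the design.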

4. Scalability & Performance

● Build systems designed for 10x–100x traffic growth

● Implement load balancing, autoscaling, and distributed processing

● Optimize API response times and database performance

● Implement global CDN delivery for heavy 3D assets

● Build rate limiting, throttling, and backpressure mechanisms

● Optimize storage and retrieval of large 3D files

● Profile and improve CPU, memory, and network performance
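
The rate limiting and throttling mentioned above is commonly implemented as a token bucket: tokens refill at a steady rate up to a burst capacity, and each request spends one. A minimal single-process sketch (a production version would live in Redis or at the gateway; parameters here are invented):

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: refills `rate` tokens/sec up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Lazily refill based on elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=2)   # 5 requests/sec, burst of 2
print([bucket.allow() for _ in range(4)])  # → [True, True, False, False] back-to-back
```

Returning `False` (an HTTP 429 upstream) instead of queueing is the backpressure half: callers are told to slow down rather than letting work pile up.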

5. Infrastructure & DevOps

● Architect AWS infrastructure (EC2, S3, Lambda, RDS, ElastiCache)

● Build CI/CD pipelines for automated deployments and rollbacks

● Use IaC tools (Terraform/CloudFormation) for infra provisioning

● Set up monitoring, logging, and alerting systems

● Use Docker + Kubernetes for container orchestration

● Implement security best practices for data, networks, and secrets

● Define disaster recovery and business continuity plans

6. Integration & APIs

● Build integrations with Shopify, WooCommerce, Magento

● Design webhook systems for real-time events

● Build SDKs, client libraries, and developer tools

● Integrate payment gateways (Stripe, Razorpay)

● Implement SSO and OAuth for enterprise customers

● Define API versioning and lifecycle/deprecation strategies

7. Data Processing & Analytics

● Build analytics pipelines for engagement, conversions, and XR performance

● Process high-volume event streams at scale

● Build data warehouses for BI and reporting

● Develop real-time dashboards and insights systems

● Implement analytics export pipelines and platform integrations

● Enable A/B testing and experimentation frameworks

● Build personalization and recommendation systems

 

Technical Stack:

1. Backend Languages & Frameworks 

●  Primary: Node.js (Express, NestJS), Python (FastAPI, Django)

●  Secondary: Go, Java/Kotlin (Spring)

●  APIs: REST, GraphQL, gRPC


2. Databases & Storage

● SQL: PostgreSQL, MySQL

● NoSQL: MongoDB, DynamoDB

● Caching: Redis, Memcached

● Search: Elasticsearch, Algolia

● Storage/CDN: AWS S3, CloudFront

● Queues: Kafka, RabbitMQ, AWS SQS

 

3. Cloud & Infrastructure: 

● Cloud: AWS (primary), GCP/Azure (nice to have)

● Compute: EC2, Lambda, ECS, EKS

● Infrastructure: Terraform, CloudFormation

● CI/CD: GitHub Actions, Jenkins, CircleCI

● Containers: Docker, Kubernetes

 

4. Monitoring & Operations 

● Monitoring: Datadog, New Relic, CloudWatch

● Logging: ELK Stack, CloudWatch Logs

● Error Tracking: Sentry, Rollbar

● APM tools

 

5. Security & Auth

● Auth: JWT, OAuth 2.0, SAML

● Secrets: AWS Secrets Manager, Vault

● Security: Encryption (at rest/in transit), TLS/SSL, IAM

 


What We’re Looking For:

1. Must-Haves

● 5+ years in backend engineering with strong system design expertise

● Experience building scalable systems from scratch

● Expert-level proficiency in at least one backend stack (Node, Python, Go, Java)

● Deep understanding of distributed systems and microservices

● Strong SQL/NoSQL design skills with performance optimization

● Hands-on AWS cloud experience

● Ability to write high-quality production code daily

● Experience building and scaling RESTful APIs

● Strong understanding of caching, sharding, horizontal scaling

● Solid security and best-practice implementation experience

● Proven leadership and mentoring capability
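The caching/sharding/horizontal-scaling bullet above often comes down to consistent hashing, which avoids the mass remapping that naive `hash(key) % N` causes when a node is added. A minimal sketch, assuming string keys (node names and the vnode count are illustrative):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Consistent hashing for sharding: adding or removing a node remaps
    only ~1/N of the keys instead of nearly all of them."""

    def __init__(self, nodes, vnodes: int = 100):
        self._ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)      # virtual nodes smooth the distribution
        )

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        # First ring position clockwise of the key's hash (wrapping around).
        idx = bisect.bisect(self._ring, (self._hash(key),)) % len(self._ring)
        return self._ring[idx][1]
```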


2. Highly Desirable

● Experience with large file processing (3D, video, images)

● Background in SaaS, multi-tenancy, or e-commerce

● Experience with real-time systems (WebSockets, streams)

● Knowledge of ML/AI infrastructure

● Experience with HA systems, DR planning

● Familiarity with GraphQL, gRPC, event-driven systems

● DevOps/infrastructure engineering background

● Experience with XR/AR/VR backend systems

● Open-source contributions or technical writing

● Prior senior technical leadership experience

 

Technical Challenges You’ll Solve:

● Designing large-scale 3D asset processing pipelines

● Serving XR content globally with ultra-low latency

● Scaling from thousands to millions of daily requests

● Efficiently handling CPU/GPU-heavy workloads

● Architecting multi-tenancy with complete data isolation

● Managing billions of analytics events at scale

● Building future-proof APIs with backward compatibility

 

Why company:

● Architectural Ownership: Build foundational systems from scratch

● Deep Technical Work: Solve distributed systems and scaling challenges

● Hands-On Impact: Design and code mission-critical infrastructure

● Diverse Problems: APIs, infra, data, ML, XR, asset processing

● Massive Scale Opportunity: Build systems for exponential growth

● Modern Stack and best practices

● Product Impact: Your architecture directly powers millions of users

● Leadership Opportunity: Shape engineering culture and direction

● Learning Environment: Stay at the forefront of backend engineering

● Backed by AWS, Microsoft, Google

 

Location & Work Culture:

● Location: Bengaluru

● Schedule: 6 days a week (5 days in office, Saturdays WFH)

● Culture: Builder mindset, strong ownership, technical excellence

● Team: Small, highly skilled backend and infra team

● Resources: AWS credits, latest tooling, learning budget

 

AI Recruiting Platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Remote only
1 - 15 yrs
₹70L - ₹99L / yr
MySQL
Python
Microservices
API
Java
+18 more

Description

Join company as a Backend Developer and become a pivotal force in building the robust, scalable services that power our innovative platforms. In this role, you will design, develop, and maintain server‑side applications, ensuring high performance and reliability for millions of users. You’ll collaborate closely with cross‑functional product, front‑end, and DevOps teams to translate business requirements into clean, efficient code, while participating in code reviews and architectural discussions. Our dynamic environment encourages continuous learning, offering opportunities to work with cutting‑edge technologies, cloud infrastructures, and modern development practices. As a key contributor, your work will directly impact product quality, user satisfaction, and the overall success of company's mission to streamline hiring solutions.


Requirements:

  • 1–15 years of professional experience in backend development, with a strong focus on building APIs and microservices.
  • Proficiency in server‑side languages such as Python, Java, Node.js, or Go, and solid understanding of object‑oriented and functional programming paradigms.
  • Extensive experience with relational (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Redis), including schema design and query optimization.
  • Familiarity with cloud platforms (AWS, GCP, Azure) and containerization technologies like Docker and Kubernetes.
  • Hands‑on experience with version control (Git), CI/CD pipelines, and automated testing frameworks.
  • Strong problem‑solving abilities, effective communication skills, and a collaborative mindset for working within multidisciplinary teams.


Roles and Responsibilities:

  • Design, develop, and maintain high‑throughput backend services and RESTful APIs that support core product features.
  • Implement data models and storage solutions, ensuring data integrity, security, and optimal performance.
  • Collaborate with front‑end engineers, product managers, and designers to define technical requirements and deliver end‑to‑end solutions.
  • Participate in code reviews, provide constructive feedback, and uphold coding standards and best practices.
  • Monitor, troubleshoot, and optimize production systems, implementing robust logging, alerting, and performance tuning.
  • Contribute to the continuous improvement of development workflows, including CI/CD automation, testing strategies, and deployment processes.
  • Stay current with emerging technologies and industry trends, proposing innovative approaches to enhance system architecture.
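The monitoring and logging responsibility above usually begins with structured logs. A minimal JSON formatter sketch for Python's standard `logging` module (field names are illustrative):

```python
import json
import logging
import time

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line, so aggregators (ELK, CloudWatch
    Logs) can index fields instead of parsing free-form text."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": round(time.time(), 3),
            "level": record.levelname,
            "logger": record.name,
            "msg": record.getMessage(),
        })
```

Attach it to a `StreamHandler` on the root logger and every log line becomes machine-parseable; a real setup would also carry request IDs for tracing.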


Budget:

  • Job Type: payroll
  • Experience Range: 1–15 years


TVARIT GmbH


Posted by Dr Soumya Sahadevan
Pune
7 - 15 yrs
₹20L - ₹30L / yr
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
PySpark
databricks
+2 more

About TVARIT

TVARIT GmbH specializes in developing and delivering cutting-edge artificial intelligence (AI) solutions for the metal industry, including steel, aluminum, copper, cast iron, and more. Our software products empower customers to make intelligent, data-driven decisions, driving advancements in Predictive Quality (PsQ), Predictive Maintenance (PdM), and Energy Consumption Reduction (PsE). With a strong portfolio of renowned reference customers, state-of-the-art technology, a talented research team from prestigious universities, and recognition through esteemed awards such as the EU Horizon 2020 AI Prize, TVARIT is recognized as one of the most innovative AI companies in Germany and Europe. We are seeking a self-motivated individual with a positive "can-do" attitude and excellent oral and written communication skills in English to join our team.


Job Description: We are looking for a Senior Data Engineer with strong expertise in Azure Databricks, PySpark, and distributed computing to develop and optimize scalable ETL pipelines for manufacturing analytics. The role involves working with high-frequency industrial data to enable real-time and batch data processing.


Key Responsibilities

· Build scalable real-time and batch processing workflows using Azure Databricks, PySpark, and Apache Spark.

· Perform data pre-processing, including cleaning, transformation, deduplication, normalization, encoding, and scaling to ensure high-quality input for downstream analytics.

· Design and maintain cloud-based data architectures, including data lakes, lakehouses, and warehouses, following Medallion Architecture.

· Deploy and optimize data solutions on Azure (preferred), AWS, or GCP with a focus on performance, security, and scalability.

· Develop and optimize ETL/ELT pipelines for structured and unstructured data from IoT, MES, SCADA, LIMS, and ERP systems.

· Automate data workflows using CI/CD and DevOps best practices, ensuring security and compliance with industry standards.

· Monitor, troubleshoot, and enhance data pipelines for high availability and reliability.

· Utilize Docker and Kubernetes for scalable data processing.

· Collaborate with the automation team, data scientists, and engineers to provide clean, structured data for AI/ML models.
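The pre-processing responsibility above (cleaning, deduplication, scaling) can be sketched in plain Python before being ported to PySpark at scale; the record fields here are hypothetical:

```python
def preprocess(readings):
    """Clean sensor readings: drop missing values, deduplicate by
    (sensor, timestamp), then min-max scale values to [0, 1]."""
    seen, clean = set(), []
    for r in readings:
        if r.get("value") is None:          # cleaning: drop missing measurements
            continue
        key = (r["sensor"], r["ts"])        # deduplication key
        if key in seen:
            continue
        seen.add(key)
        clean.append(dict(r))
    values = [r["value"] for r in clean]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0                 # guard against a constant series
    for r in clean:
        r["value"] = (r["value"] - lo) / span   # min-max normalisation
    return clean
```

In PySpark the same steps map to `dropna`, `dropDuplicates`, and a `MinMaxScaler` stage over the high-frequency stream.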


Desired Skills and Qualifications

· Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.

· 7+ years of experience in core data engineering, with a strong focus on cloud platforms such as Azure (preferred), AWS, or GCP.

· Proficiency in PySpark, Azure Databricks, Python, Apache Spark, etc.

· 2 years of team leadership experience.

· Expertise in relational databases (e.g., SQL Server, PostgreSQL), time-series databases (e.g., InfluxDB), and NoSQL databases (e.g., MongoDB, Cassandra).

· Experience in containerization (Docker, Kubernetes).

· Strong analytical and problem-solving skills with attention to detail.

· Good to have MLOps, DevOps including model lifecycle management

· Excellent communication and collaboration skills, with a proven ability to work effectively as a team player.

· Comfortable working in a dynamic, fast-paced startup environment, adapting quickly to changing priorities and responsibilities.
