50+ AWS (Amazon Web Services) Jobs in Mumbai
Lead Data Engineer
What are we looking for
real solver?
Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.
Your Responsibilities
What you will wake up to solve.
- Lead Technical Design & Data Architecture: Architect and lead the end-to-end development of scalable, cloud-native data platforms. You’ll guide the squad on critical architectural decisions—choosing between Batch vs. Streaming or ETL vs. ELT—while remaining 100% hands-on, contributing high-quality, production-grade code.
- Build High-Velocity Data Pipelines: Drive the implementation of robust data transports and ingestion frameworks using Python, SQL, and Spark. You will build integration layers that connect heterogeneous sources (SaaS, RDBMS, NoSQL) into unified, high-availability environments like BigQuery, Snowflake, or Redshift (see the pipeline sketch after this list).
- Mentor & Elevate the Squad: Foster a culture of technical excellence by mentoring and inspiring a team of data analysts and engineers. Lead deep-dive code reviews, promote best-practice data modeling (Star/Snowflake schema), and ensure the squad adopts modern engineering standards like CI/CD for data.
- Drive AI-Ready Data Strategy: Be the expert in designing data foundations optimized for AI and Machine Learning. You will champion the use of GCP (Dataflow, Pub/Sub, BigQuery) and AWS (Lambda, Glue, EMR) to create "clean room" environments that fuel advanced analytics and generative AI models.
- Partner with Clients as a Technical DRI: Act as the Directly Responsible Individual for client success. Translate ambiguous business questions into elegant data services, manage project deliverables using Agile methodologies, and ensure that the data provided is accurate, consistent, and mission-critical.
- Troubleshoot & Optimize for Scale: Own the reliability of the reporting layer. You will proactively monitor pipelines, troubleshoot complex transformation bottlenecks, and propose ways to improve platform performance and cost-efficiency.
- Innovate and Build Reusable IP: Spearhead the creation of reusable data frameworks, custom operators, and transformation libraries that accelerate future projects and establish Searce’s unique technical advantage in the market.
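For illustration only, here is a minimal sketch of the kind of batch pipeline described above (see the pipeline bullet): a PySpark job that ingests raw CSV files, applies basic quality rules, and publishes a partitioned, analytics-ready layer. The bucket paths, columns, and rules are hypothetical, not Searce's actual code.

```python
# Minimal batch ETL sketch: ingest raw CSV orders, clean them, and write
# an analytics-ready, partitioned Parquet layer. Paths/columns are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_batch_etl").getOrCreate()

raw = (
    spark.read.option("header", True)
    .option("inferSchema", True)
    .csv("gs://example-bucket/raw/orders/")  # hypothetical source path
)

cleaned = (
    raw.dropDuplicates(["order_id"])                    # basic dedupe
    .filter(F.col("amount") > 0)                        # data-quality rule
    .withColumn("order_date", F.to_date("created_at"))  # derive partition key
)

(
    cleaned.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("gs://example-bucket/curated/orders/")     # hypothetical sink
)
```

The same shape carries over to warehouse-centric stacks: land raw data, enforce quality rules, then expose the curated layer to BigQuery, Snowflake, or Redshift.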
Welcome to Searce
The AI-Native tech consultancy that's rewriting the rules.
Searce is an AI-native, engineering-led, modern tech consultancy that empowers clients to futurify their business by delivering intelligent, impactful, real business outcomes. Searce solvers co-innovate with clients as their trusted transformational partners ensuring sustained competitive advantage. Searce clients realize smarter, faster, better business outcomes delivered by AI-native Searce solver squads.
Functional Skills
the solver personas.
- The Data Architect: This persona deconstructs ambiguous business goals into scalable, elegant data blueprints. They don't just move data; they design the foundation—from schema design to partitioning strategies—that allows data scientists and analysts to thrive, foreseeing technical bottlenecks and making pragmatic trade-offs.
- The Player-Coach: As a hands-on leader, this persona leads from the front by writing exemplary, production-grade SQL and Python while simultaneously mentoring and elevating the skills of the squad. Their success is measured by the team's ability to deliver high-quality, maintainable code and their growth as engineers.
- The Pragmatic Innovator: This individual balances a passion for modern data tech (like Generative AI and Real-time Streaming) with a sharp focus on business outcomes. They champion new tools where they add real value but are disciplined enough to choose stable, cost-effective solutions to meet deadlines and deliver robust products.
- The Client-Facing Technologist: This persona acts as the crucial technical bridge between the data squad and the client. They build trust by listening actively, explaining complex data concepts (like data latency or idempotency) in simple terms, and demonstrating how engineering decisions align with the client’s strategic goals.
- The Quality Craftsman: This individual possesses an unwavering commitment to data integrity and treats data engineering as a craft. They are the guardian of the reporting layer, advocating for robust testing, data validation frameworks, and clean, modular code to ensure the long-term reliability of the data platform.
Experience & Relevance
- Engineering Depth: 7-10 years of professional experience in end-to-end data product development. You have a portfolio that proves your ability to build complex, high-velocity pipelines for both Batch and Streaming workloads.
- Cloud-Native Fluency: Deep, hands-on experience designing and deploying scalable data solutions on at least one major cloud platform (AWS, GCP, or Azure). You are comfortable navigating the nuances of EMR, BigQuery, or Synapse at scale.
- AI-Native Workflow: You don’t just build for AI; you build with AI. You must be proficient in using AI coding assistants (e.g., GitHub Copilot) to accelerate your delivery and have a track record of building the data foundations required for Generative AI.
- Architectural Portfolio: Evidence of leading 2-3 large-scale transformations—including platform migrations, data lakehouse builds, or real-time analytics architectures.
- Client-Facing Acumen: You have direct experience in a consultative, client-facing role. You can confidently translate a CEO’s business vision into a Lead Engineer’s technical specification without losing anything in translation.
Join the ‘real solvers’
ready to futurify?
If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.
Database Administrator
Role Overview:
As a Database Administrator, you will be responsible for the full lifecycle management of our MySQL or PostgreSQL database systems. This includes installation, configuration, performance tuning, security implementation, backup and recovery, and proactive monitoring. You will ensure the reliability, availability, and security of our database infrastructure, supporting our internal operations and client projects.
Job Responsibilities
- Installation and configuration of database software across diverse operating systems.
- Designing efficient physical database models derived from logical designs and application specifications, along with configuring database servers according to best practices and workload requirements.
- Establishing and implementing robust backup and recovery strategies tailored to data volatility and application availability needs.
- Implementing comprehensive security measures at the OS, database, and network levels to ensure authorized data access and maintain a rigorous security infrastructure with auditing capabilities for compliance.
- Fine-tuning hardware/VM resources for optimal database performance.
- Proactive monitoring of the database environment, including performance optimization through adjustments to data structures, SQL, application logic, or the DBMS subsystem.
- Configuration and implementation of database replication technologies (e.g., Master-Slave, Master-Master, Log Shipping, Mirroring, Always On).
- Automation of routine DBA tasks utilizing scripting languages such as Shell, PowerShell, Python, or Go (see the backup automation sketch after this list).
- Proficiency in writing general SQL queries.
- Setting up comprehensive monitoring solutions for databases (OS and database levels) using custom scripts or third-party monitoring tools.
- Basic Cloud platform knowledge (AWS/GCP/Azure)
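To make the scripting and backup items above concrete, here is a minimal, hypothetical Python sketch (see the automation bullet): a nightly PostgreSQL logical backup via pg_dump with retention-based pruning. Host, user, paths, and retention are illustrative assumptions; a real setup would add alerting and off-host copies.

```python
#!/usr/bin/env python3
# Nightly logical backup sketch for PostgreSQL using pg_dump (custom format),
# with simple retention-based pruning. All paths and hosts are illustrative.
import subprocess
import time
from datetime import datetime
from pathlib import Path

BACKUP_DIR = Path("/var/backups/postgres")   # hypothetical location
RETENTION_DAYS = 7

def take_backup(db: str) -> Path:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    target = BACKUP_DIR / f"{db}_{datetime.now():%Y%m%d_%H%M%S}.dump"
    subprocess.run(
        ["pg_dump", "-h", "db1.internal", "-U", "backup_user",
         "-Fc", "-f", str(target), db],
        check=True,   # raise if pg_dump fails, so cron/alerting notices
    )
    return target

def prune_old_backups() -> None:
    cutoff = time.time() - RETENTION_DAYS * 86400
    for dump in BACKUP_DIR.glob("*.dump"):
        if dump.stat().st_mtime < cutoff:
            dump.unlink()

if __name__ == "__main__":
    take_backup("appdb")
    prune_old_backups()
```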
Qualification
Experienced DBA (4-8 years) with deep expertise in MySQL or PostgreSQL architecture, configuration, and management.
Proficient in SQL, backup/recovery, security implementation, performance tuning, and replication for both systems.
Skilled in scripting (e.g., Shell, Python) and Linux, possessing strong problem-solving, communication, and teamwork abilities with a proactive approach.
A relevant Bachelor's degree in Computer Science, Information Technology, or a related field.
We identify better ways of doing things.
Solver? Absolutely. But not the usual kind. We are searching for the architects of the audacious & the pioneers of the possible. If you are the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you are speaking our language.
➔ Improver. Solver. Futurist.
➔ Great sense of humor.
➔ ‘Possible. It is.’ Mindset.
➔ Compassionate collaborator. Bold experimenter. Tireless iterator.
➔ Natural creativity that doesn’t just challenge the norm, but solves to design what’s better.
➔ Thinks in systems. Solves at scale.
This Isn’t for Everyone. But if you’re the kind who questions why things are done a certain way—and then identifies 3 better ways to do it—we’d love to chat with you.
Director - Data Engineering
What are we looking for
real solver?
Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.
Your Responsibilities
what you will wake up to solve.
1. Delivery & Tactical Rigor
- Methodology Implementation: Implement and manage a unified, 'DataOps-First' methodology for data engineering delivery (ETL/ELT pipelines, Data Modeling, MLOps, Data Governance) within assigned business units. This ensures predictable outcomes and trusted data integrity by reducing architecture variability at the project level.
- Operational Stewardship: Drive initiatives to optimize team utilization and enhance operational efficiency within the practice. You manage the commercial success of your squads, ensuring data delivery models (from migration to modern data stack implementation) are executed profitably, scalably, and cost-effectively.
- Technical Escalation: Serve as the primary escalation point for delivery issues, personally leading the resolution of complex data integration bottlenecks and pipeline failures to protect client timelines and data reliability standards.
- Quality Oversight: Execute and monitor technical data quality standards, ensuring engineering teams adhere to strict policies regarding data lineage, automated quality checks (observability), security/privacy compliance (GDPR/CCPA/PII), and active catalog management.
2. Strategic Growth & Practice Scaling
- Talent & Scaling Execution: Execute the strategy for data engineering talent acquisition and development within your business units. Implement objective metrics to assess and grow the 'Data-Native' DNA of your teams, ensuring squads are consistently equipped to handle petabyte-scale environments and high-impact delivery.
- Offerings Alignment: Drive the adoption of standardized regional offerings (e.g., Modern Data Platform, Data Mesh, Lakehouse Implementation). Ensure your teams leverage the profitable frameworks defined by the practice to accelerate time-to-insight and eliminate architectural fragmentation in client environments.
- Innovation & IP Development: Lead the practical integration of Vector Databases and LLM-ready architectures into project delivery. Champion the hands-on development of IP and reusable accelerators (e.g., automated ingestion engines) that improve delivery speed and enhance data availability across your portfolio.
3. Leadership & Unit Management
- Unit Leadership: Directly lead, mentor, and manage the Engineering Managers and Lead Architects within your business unit. Hold your teams accountable for project-level operational consistency, technical talent development, and strict adherence to the practice's data governance standards.
- Stakeholder Communication: Clearly articulate the business unit’s operational performance, technical quality metrics, and delivery progress to the C-suite Stakeholders and regional client leadership, bridging the gap between technical execution and business value.
- Ecosystem Alignment: Maintain strong technical relationships with key partner contacts (Snowflake, Databricks, AWS/GCP). Align team delivery capabilities with current product roadmaps and ensure squad-level participation in training, certifications, and partner-led enablement opportunities.
Welcome to Searce
The ‘process-first’, AI-native modern tech consultancy that's rewriting the rules.
We don’t do traditional.
As an engineering-led consultancy, we are dedicated to relentlessly improving real business outcomes. Our solvers co-innovate with clients to futurify operations and make processes smarter, faster & better.
Functional Skills
1. Delivery Management & Operational Excellence
- Methodology Execution: Expert capability in implementing and enforcing a unified delivery methodology (DataOps, Agile, Mesh Principles) within specific business units. Proven track record of auditing squad-level adherence to ensure consistency across the project lifecycle.
- Operational Performance: High proficiency in managing day-to-day operational metrics, including squad utilization, resource forecasting, and productivity tracking. Skilled at optimizing team performance to meet profitability and efficiency targets.
- SOW & Risk Mitigation: Proven experience in operationalizing Statement of Work (SOW) requirements and identifying technical delivery risks early. Expert at mitigating scope creep and data-specific bottlenecks (e.g., latency, ingestion gaps) before they impact client outcomes.
- Technical Escalation Leadership: Demonstrated ability to lead "war room" efforts to resolve complex pipeline failures or data integrity issues. Skilled at providing clear, rapid remediation plans and communicating technical status directly to regional stakeholders.
2. Architectural Implementation & Technical Oversight
- Modern Stack Proficiency: Deep, hands-on expertise in implementing Cloud-Native architectures (Lakehouse, Data Mesh, MPP) on Snowflake, Databricks, or hyperscalers. Ability to conduct deep-dive architectural reviews and course-correct design decisions at the squad level to ensure scalability.
- Operationalizing Governance: Proven experience in embedding data quality and observability (completeness, freshness, accuracy) directly into the CI/CD pipeline. Responsible for technical enforcement of regulatory compliance (GDPR/PII) and maintaining the integrity of data catalogs across active projects.
- Applied Domain Expertise: Practical experience leading the delivery of high-growth solutions, specifically Generative AI infrastructure (RAG, Vector DBs), Real-Time Streaming, and large-scale platform migrations with a focus on zero-downtime execution.
- DataOps & Engineering Standards: Expert-level mastery of DataOps, including the setup and management of orchestration frameworks (Airflow, Dagster) and Infrastructure as Code (IaC). You ensure that automation is a baseline requirement, not an afterthought, for all delivery teams.
3. Unit Management & Commercial Execution
- Unit & Team Management: Proven success in leading and mentoring Engineering Managers and Lead Architects. Responsible for the operational metrics, technical output, and career development of the business unit's talent pool.
- Offerings Implementation & Scoping: Expertise in translating service offerings (e.g., Data Maturity Assessments, Lakehouse Builds) into accurate project scopes, technical estimates, and resource plans to ensure delivery is both profitable and competitive.
- Talent Growth & Mentorship: Functional ability to implement growth frameworks for data engineering roles. Focus on hands-on coaching and scaling high-performance technical talent to meet the demands of complex, petabyte-scale environments.
- Partner Enablement: Functional competence in managing regional technical relationships with major partners (Snowflake, Databricks, GCP/AWS). Drives squad-level certifications, joint technical enablement, and alignment with partner product roadmaps.
Tech Superpowers
- Modern Data Architect – Reimagines business with the Modern Data Stack (MDS) to deliver data mesh implementations, insights, & real value to clients.
- End-to-End Ecosystem Thinker – Builds modular, reusable data products across ingestion, transformation (ETL/ELT), governance, and consumption layers.
- Distributed Compute Savant – Crafts resilient, high-throughput architectures that survive petabyte-scale volume and data skew without breaking the bank.
- Governance & Integrity Guardian – Embeds data quality, complete lineage, and privacy-by-design (GDPR/PII) into every table, view, and pipeline.
- AI-Ready Orchestrator – Engineers pipelines that bridge structured data with Unstructured/Vector stores, powering RAG models and Generative AI workflows.
- Product-Minded Strategist – Balances architectural purity with time-to-insight; treats every dataset as a measurable "Data Product" with clear ROI.
- Pragmatic Stack Curator – Chooses the simplest tools that compound reliability; fluent in SQL, Python, Spark, dbt, and Cloud Warehouses.
- Builder @ Heart – Writes, reviews, and optimizes queries daily; proves architectures with cost-performance benchmarks, not slideware. A business-first, data-second, outcome-focused technology leader.
Experience & Relevance
- Executive Experience: 10+ years of progressive experience in data engineering and analytics, with at least 3 years in a Senior Manager or Director-level role managing multiple technical teams and owning significant operational and efficiency metrics for a large data service line.
- Delivery Standardization: Demonstrated success in defining and implementing globally consistent, repeatable delivery methodologies (DataOps/Agile Data Warehousing) across diverse teams.
- Architectural Depth: Must retain deep, current expertise in Modern Data Stack architectures (Lakehouse, MPP, Mesh) and maintain the ability to personally validate high-level architectural and data pipeline design decisions.
- Operational Leadership: Proven expertise in managing and scaling large professional services organizations, with demonstrated ability to optimize utilization, resource allocation, and operational expense.
- Domain Expertise: Strong background in Enterprise Data Platforms, Applied AI/ML, Generative AI integration, or large-scale Cloud Data Migration.
- Communication: Exceptional executive-level presentation and negotiation skills, particularly in communicating complex operational, data quality, and governance metrics to C-level stakeholders.
Join the ‘real solvers’
ready to futurify?
If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.
Solutions Architect - Data Engineering
Modern tech solutions advisory & 'futurify' consulting as a Searce lead fds (‘forward deployed solver’) architecting scalable data platforms and robust data engineering solutions that power intelligent insights and fuel AI innovation.
If you’re a tech-savvy, consultative seller with the brain of a strategist, the heart of a builder, and the charisma of a storyteller — we’ve got a seat for you at the front of the table.
You're not a sales lead. You're the transformation driver.
What are we looking for
real solver?
Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.
- Improver. Solver. Futurist.
- Great sense of humor.
- ‘Possible. It is.’ Mindset.
- Compassionate collaborator. Bold experimenter. Tireless iterator.
- Natural creativity that doesn’t just challenge the norm, but solves to design what’s better.
- Thinks in systems. Solves at scale.
This Isn’t for Everyone. But if you’re the kind who questions why things are done a certain way— and then identifies 3 better ways to do it — we’d love to chat with you.
Your Responsibilities
what you will wake up to solve.
You are not just a Solutions Architect; you are a futurifier of our data universe and the primary enabler of our AI ambitions. With a deep-seated passion for data engineering, you will architect and build the foundational data infrastructure that powers the customer's entire data intelligence ecosystem.
As the Directly Responsible Individual (DRI) for our enterprise-grade data platforms, you own the outcome, end-to-end. You are the definitive solver for our customer's most complex data challenges, leveraging a powerful tech stack that includes Snowflake, Databricks, and core GCP & AWS services (BigQuery, Spanner, Airflow, Kafka). This is a hands-on-keys role where you won't just design solutions—you'll build them, break them, and perfect them. A minimal streaming sketch follows the responsibilities below.
- Solution Design & Pre-sales Excellence: Collaborate with cross-functional teams, including sales, engineering, and operations, to ensure successful project delivery.
- Design Core Data Engineering: Master data modeling, architect high-performance data ingestion pipelines, and ensure data quality and governance throughout the data lifecycle.
- Enable Cloud & AI: Design and implement solutions utilizing core GCP data services, building foundational data platforms that efficiently support advanced analytics and AI/ML initiatives.
- Optimize Performance & Cost: Continuously optimize data architectures and implementations for performance, efficiency, and cost-effectiveness within the cloud environment.
- Bridge Business & Tech: Translate complex business requirements into clear technical designs, providing technical leadership and guidance to data engineering teams.
- Stay Ahead of the Curve: Continuously research and evaluate new data technologies, architectural patterns, and industry trends to keep our data platforms at the cutting edge.
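As a hedged illustration of the streaming portion of this stack (noted in the role summary above), here is a minimal Python consumer built on the kafka-python client that applies a simple quality gate before loading events. The topic, brokers, required fields, and loader stub are hypothetical.

```python
# Minimal streaming-ingestion sketch with kafka-python: consume JSON events,
# apply a basic quality gate, and hand valid records to a loader stub.
# Topic name, brokers, and required fields are illustrative assumptions.
import json
from kafka import KafkaConsumer

REQUIRED_FIELDS = {"event_id", "user_id", "ts"}

consumer = KafkaConsumer(
    "orders.events",                              # hypothetical topic
    bootstrap_servers=["broker1:9092"],           # hypothetical brokers
    group_id="dwh-loader",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

def load_to_warehouse(event: dict) -> None:
    # Stub: in practice this would batch rows into BigQuery/Snowflake.
    print("loading", event["event_id"])

for message in consumer:
    event = message.value
    if REQUIRED_FIELDS.issubset(event):           # simple quality gate
        load_to_warehouse(event)
    # Invalid events would normally go to a dead-letter topic instead.
```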
Functional Skills:
- Enterprise Data Architecture Design: Expert ability to design holistic, scalable, and resilient data architectures for complex enterprise environments.
- Cloud Data Platform Strategy: Proven capability to strategize, design, and implement cloud-native data platforms.
- Pre-Sales & Technical Storyteller: Crafts compelling, client-ready proposals, architectural decks, and technical demonstrations. Doesn't just present; shapes the strategic technical narrative behind every proposed solution.
- Advanced Data Modelling: Mastery in designing various data models for analytical, operational, and transactional use cases.
- Data Ingestion & Pipeline Orchestration: Strong expertise in designing and optimizing robust data ingestion and transformation pipelines.
- Stakeholder Communication: Exceptional skills in articulating complex technical concepts and architectural decisions to both technical and non-technical stakeholders.
- Performance & Cost Optimization: Adept at optimizing data solutions for performance, efficiency, and cost within a cloud environment.
Tech Superpowers:
- Cloud Data Mastery: You're a wizard at leveraging public cloud data services, with deep expertise in GCP (BigQuery, Spanner, etc.) and expert proficiency in modern data warehouse solutions like Snowflake.
- Data Engineering Core: Highly skilled in designing, implementing, and managing data workflows using tools like Apache Airflow and Apache Kafka. You're also an authority on advanced data modeling and ETL/ELT patterns.
- AI/ML Data Foundation: You instinctively design data pipelines and structures that efficiently feed and empower Machine Learning and Artificial Intelligence applications.
- Programming for Data: You have a strong command over key programming languages (Python, SQL) for scripting, automation, and building data processing applications.
Experience & Relevance:
- Architectural Leadership (8+ Years): You bring 8+ years of experience specifically in a Solutions Architect role, focused on data engineering and platform building.
- Cloud Data Expertise: You have a proven track record of designing and implementing production-grade data solutions leveraging major public cloud platforms, with significant experience in Google Cloud Platform (GCP).
- Data Warehousing & Data Platform: Demonstrated hands-on experience in the end-to-end design, implementation, and optimization of modern data warehouses and comprehensive data platforms.
- Databricks & BigQuery Mastery: You possess significant practical experience with Databricks as a core data platform and GCP BigQuery for analytical workloads.
- Data Ingestion & Orchestration: Proven experience designing and implementing complex data ingestion pipelines and workflow orchestration using tools like Airflow and real-time streaming technologies like Kafka.
- AI/ML Data Enablement: Experience in building data foundations specifically geared towards supporting Machine Learning and Artificial Intelligence initiatives.
Join the ‘real solvers’
ready to futurify?
If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’.
Don’t Just Send a Resume. Send a Statement.
So, if you are passionate about tech, the future, and what you read above (we really are!), apply here to experience the ‘Art of the possible’.
Role & Responsibilities:
We are looking for a strong Data Engineer to join our growing team. The ideal candidate brings solid ETL fundamentals, hands-on pipeline experience, and cloud platform proficiency — with a preference for GCP / BigQuery expertise.
Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL/ELT workflows (see the orchestration sketch after this list)
- Work with Dataform or dbt to implement transformation logic and data models
- Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure
- Support data migration initiatives and data mesh architecture patterns
- Collaborate with analysts, scientists, and business stakeholders to deliver reliable data products
- Apply data governance and quality best practices across the data lifecycle
- Troubleshoot pipeline issues and drive proactive monitoring and resolution
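To ground the orchestration responsibility above (referenced in the first bullet), here is a minimal, hypothetical Airflow 2.x DAG in Python chaining an extract task into a transform task. Task bodies, schedule, and names are placeholders, not this team's actual pipeline; in a dbt or Dataform setup the transform step would typically trigger those tools instead.

```python
# Minimal Airflow 2.x DAG sketch: daily extract -> transform chain.
# Task bodies are stubs; names and schedule are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Stub: pull the day's records from a source API or bucket.
    print("extracting for", context["ds"])

def transform(**context):
    # Stub: apply modelling/transformation logic (or shell out to dbt).
    print("transforming for", context["ds"])

with DAG(
    dag_id="daily_orders_pipeline",        # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```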
Ideal Candidate:
- Strong Data Engineer Profile
- Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.
- Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.
- Must have hands-on experience with any one of the following: Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.
- Must have working experience on any cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, ADLS).
- Must have strong SQL skills with experience in writing complex queries and optimizing performance.
- Must have programming experience in Python and/or SQL for data processing.
- Must have experience in building and maintaining scalable data pipelines and troubleshooting data issues.
- Exposure to data migration projects and/or data mesh architecture concepts.
- Experience with Spark / PySpark or large-scale data processing frameworks.
- Experience working in product-based companies or data-driven environments.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
NOTE:
- An interview drive is scheduled for 28th and 29th March 2026; shortlisted candidates are expected to be available on these interview dates. Only immediate joiners are considered.
🚨 We’re Building a “Top 1% Engineering Org”
We’re building a high-talent-density, AI-first R&D organization from scratch — inside a publicly listed company undergoing a full-scale transformation.
Think:
→ Rewriting legacy systems into AI-native architectures
→ Embedding LLMs + Agentic AI into core workflows
→ Reimagining platforms, infra, and data systems for the next decade
This is the kind of shift you’d expect from Google, Microsoft, or Meta —
Except you get to build it from day 0 → scale it globally.
About the Role / Team
We are building a next-generation AI-first R&D organization in Bengaluru, focused on solving complex problems across LLMs, Agentic AI systems, distributed computing, and enterprise-scale architectures.
This initiative is part of a publicly listed global company investing heavily in AI-driven transformation, re-architecting its platforms into intelligent, autonomous systems powered by large language models, workflows, and decision engines.
You will be working on:
- Agentic AI systems & LLM-powered workflows
- Distributed, scalable backend systems
- Enterprise-grade AI platforms
- Automation-first engineering environments
🚀 The Mandate
Own and evolve the technical backbone of an AI-first enterprise platform.
You will define architecture across LLM-powered systems, distributed services, and data platforms — and lead critical transformations from legacy → AI-native systems.
🧩 What You’ll Do
- Architect large-scale distributed systems powering AI-driven workflows
- Lead 0→1 and 1→N platform builds (LLM integrations, agentic systems, orchestration layers)
- Redesign legacy systems into scalable, modular, AI-native architectures
- Drive system design excellence across teams (APIs, infra, observability, reliability)
- Make high-stakes decisions on trade-offs (latency, cost, scalability, model performance)
- Mentor senior engineers and influence engineering culture/org standards
- Partner with product, data, and leadership on long-term technical strategy
🧠 What We’re Looking For
- Proven track record building high-scale backend or platform systems
- Deep expertise in distributed systems, microservices, cloud (AWS/GCP/Azure)
- Strong exposure to data systems, data infrastructure, and real-time architectures
- Experience or strong interest in LLMs, GenAI, or AI system design
- Exceptional system design, abstraction, and problem-solving ability
- High ownership mindset — you think in terms of systems, not tickets
- Strong coding skills in Python / Java / Go / Node.js
- Solid understanding of data structures, system design basics, and backend architecture
- Experience building scalable APIs and services
- Familiarity or curiosity around AI/LLMs, async systems, or event-driven design
- Strong debugging skills: able to solve hard system problems (latency, scale, reliability)
- Ability to drive cross-team technical decisions and standards, mentor senior engineers, and influence org-wide architecture
Nice to Have
- Experience integrating LLMs, vector databases, or AI pipelines
- Contributions to architecture at scale
- Experience with Agentic AI / LLM orchestration frameworks
- Background in product engineering or platform companies
- Exposure to global-scale systems (millions of users / high throughput)
🔥 What Sets You Apart
- Built platforms used by millions of users / high-throughput systems
- Experience with event-driven systems, stream processing, or infra platforms
- Prior work on AI/ML platforms, model serving, or intelligent systems
Job Title : Senior DevOps Engineer (Only Mumbai Candidates)
Experience : 5+ Years
Location : Mumbai (On-site)
Notice Period : Immediate to 15 Days
Interview Process : 1 Internal Round + 1 Client Round
Mandatory Skills :
Multi-Cloud (AWS/GCP/Azure – any two), Kubernetes, Terraform, Helm (writing Helm Charts), CI/CD (GitLab CI/Jenkins/GitHub Actions), GitOps (ArgoCD/FluxCD), Multi-tenant deployments, Stateful microservices on Kubernetes, Enterprise Linux.
Role Overview :
We are looking for a Senior DevOps Engineer to design, build, and manage scalable cloud infrastructure and DevOps pipelines for product-based platforms.
The ideal candidate should have strong experience with Kubernetes, Terraform, Helm Charts, CI/CD, and GitOps practices.
Key Responsibilities :
- Design and manage scalable cloud infrastructure across AWS/GCP/Azure.
- Deploy and manage microservices on Kubernetes clusters.
- Build and maintain Infrastructure as Code using Terraform and Helm.
- Implement CI/CD pipelines using GitLab CI, Jenkins, or GitHub Actions.
- Implement GitOps workflows using ArgoCD or FluxCD.
- Ensure secure, scalable, and reliable DevOps architecture.
- Implement monitoring and logging using Prometheus, Grafana, or ELK.
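As a small illustration of the monitoring responsibility above (the final bullet), here is a Python sketch that exposes custom metrics with prometheus_client for Prometheus to scrape and Grafana to chart. Metric names, the simulated workload, and the port are arbitrary choices, not a prescribed setup.

```python
# Sketch: expose app metrics for Prometheus scraping using prometheus_client.
# Metric names and the port are illustrative choices.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency")

def handle_request() -> None:
    with LATENCY.time():                      # records duration into the histogram
        time.sleep(random.uniform(0.01, 0.1))  # simulated work
    REQUESTS.inc()

if __name__ == "__main__":
    start_http_server(8000)                   # metrics served at :8000/metrics
    while True:
        handle_request()
```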
Good to Have :
- Packer, OpenShift/Rancher/K3s, On-prem deployments, PaaS experience, scripting (Bash/Python), Terraform modules.
Key Responsibilities
DevOps Strategy & Leadership
- Define and execute the end-to-end DevOps strategy for high-frequency trading and fintech platforms.
- Lead, mentor, and scale a high-performing DevOps team focused on automation, reliability, and performance.
- Partner closely with engineering and product leaders to ensure infrastructure strategy supports business and technical goals.
CI/CD & Infrastructure Automation
- Architect, implement, and optimize enterprise-grade CI/CD pipelines for ultra-low-latency trading systems.
- Drive Infrastructure as Code (IaC) adoption using Terraform, Helm, Kubernetes, and advanced automation toolsets.
- Establish robust release management, deployment workflows, and versioning best practices for mission‑critical environments.
Cloud & On‑Prem Infrastructure Management
- Design and manage hybrid infrastructures across AWS, GCP, and on-premises data centers, ensuring high availability and fault tolerance.
- Implement sophisticated networking strategies for low-latency workloads including routing optimization and performance tuning.
- Lead multi‑cloud scalability, cost optimization, and environment standardization initiatives.
Performance Monitoring & Optimization
- Oversee large-scale monitoring systems using Prometheus, Grafana, ELK, and related observability tools.
- Implement predictive alerting, automated remediation, and system‑wide health checks for zero‑downtime operations.
- Conduct root-cause analyses and performance tuning for systems processing millions of transactions per second.
Security & Compliance
- Champion DevSecOps practices and embed security across the entire development and deployment lifecycle.
- Ensure adherence to financial regulatory standards (SEBI and global frameworks) with strong audit and compliance mechanisms.
- Lead security automation efforts, vulnerability management, and advanced IAM policy implementation.
Required Skills & Qualifications
- 10+ years of DevOps experience, with 5+ years in a leadership capacity.
- Deep hands-on expertise in CI/CD tools such as Jenkins, GitLab CI/CD, and ArgoCD.
- Strong command of AWS, GCP, and hybrid cloud infrastructures.
- Expert-level knowledge of Kubernetes, Docker, and large-scale container orchestration.
- Advanced proficiency in Terraform, Helm, and overall IaC workflows.
- Strong Linux administration, networking fundamentals (TCP/IP, DNS, Firewalls), and system internals.
- Experience with monitoring and observability platforms (Prometheus, Grafana, ELK).
- Excellent scripting skills in Python, Bash, or Go for automation and tooling.
- Deep understanding of security principles, encryption, IAM, and compliance frameworks.
Good to Have
- Experience with ultra-low-latency or high-frequency trading systems.
- Knowledge of FIX protocol, FPGA acceleration, or network‑level optimizations.
- Familiarity with Redis, Nginx, or other high‑throughput systems.
- Exposure to microsecond-level performance tuning or network acceleration technologies.
Why Join Us?
- Be part of a team that consistently raises the bar and delivers exceptional engineering outcomes.
- A culture where innovation, ownership, and bold thinking are valued.
- Exceptional growth opportunities—ideal for someone who thrives in fast-paced, high-impact environments.
- Build systems that influence markets and redefine the fintech landscape.
This isn’t just a role—it’s a challenge, a platform, and a proving ground.
Ready to step up? Apply now.
Job Overview
We are seeking an experienced Senior Solution Architect to join our dynamic DevOps organization. The ideal candidate will have a strong background in cloud technologies, with expertise in migration projects across platforms such as GCP, AWS, and Azure. The candidate should possess a deep understanding of DevOps principles, Kubernetes orchestration, data migration & management, and automation tools like CI/CD pipelines and Terraform. The individual should be highly skilled in designing scalable application architectures capable of handling substantial workloads while ensuring the highest standards of quality.
Key Responsibilities
- Lead and drive cloud migration projects from on-premises data centers or other cloud platforms to GCP, AWS, or Azure.
- Design and implement migration strategies that ensure minimal downtime and maximum efficiency.
- Demonstrate proficiency in GCP, AWS, and Azure, with the ability to choose and optimize solutions based on specific business requirements.
- Provide guidance on selecting the appropriate cloud services for various workloads.
- Design, implement, and optimize CI/CD pipelines to streamline software delivery.
- Utilize Terraform for infrastructure as code (IaC) to automate deployment processes.
- Collaborate with development and operations teams to enhance the overall DevOps culture.
- Possess in-depth knowledge and practical experience with Kubernetes orchestration for containerized applications.
- Architect and optimize Kubernetes clusters for high availability and scalability.
- Engage in research and development activities to stay abreast of industry trends and emerging technologies.
- Evaluate and introduce new tools and methodologies to enhance the efficiency and effectiveness of cloud solutions.
- Architect solutions that can handle large-scale workloads and provide guidance on scaling strategies.
- Ensure high-performance levels and reliability in production environments.
- Design scalable and high-performance database architectures tailored to meet business needs.
- Execute database migrations with a keen focus on data consistency, integrity, and performance.
- Develop and implement database pipelines to automate processes such as data migrations, schema changes, and backups.
- Optimize database workflows to enhance efficiency and reliability.
- Work closely with clients to assess and enhance the quality of existing architectures.
- Implement best practices to ensure robust, secure, and well-architected solutions.
- Drive migration projects, collaborating with cross-functional teams to ensure successful execution.
- Provide technical leadership and mentorship to junior team members.
Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or related field.
- Relevant industry experience in a Solution Architect role.
- Proven experience in leading cloud migration projects across GCP, AWS, and Azure.
- Expertise in DevOps practices, CI/CD pipelines, and infrastructure automation.
- In-depth knowledge of Kubernetes and container orchestration.
- Strong background in scaling architectures to handle significant workloads.
- Sound knowledge of database migrations.
- Excellent communication skills and the ability to articulate complex technical concepts to both technical and non-technical stakeholders.
Location: Mumbai, Maharashtra, India
Sector: Technology, Information & Media
Company Size: 500 - 1,000 Employees
Employment: Full-Time, Permanent
Experience: 10 - 14 Years (Engineering Leadership)
Level: Engineering Manager / Group EM
ABOUT THIS MANDATE :
Recruiting Bond has been exclusively retained by one of India's most prominent and well-established digital platform organisations operating at the intersection of Technology, Information, and Media to identify and place an exceptional Engineering Manager who can lead engineering teams through an enterprise-wide AI adoption and digital transformation agenda.
This is a high-impact, hands-on leadership role at the nexus of people, product, and technology. The organisation is executing one of the most ambitious AI transformation programmes in its sector and this Engineering Manager will be a core driver of that change. You will lead multiple squads, own engineering delivery end-to-end, embed AI tooling and practices into the team's DNA, and shape the engineering culture of tomorrow.
We are seeking leaders who code when it matters, who build systems and teams with equal conviction, and who view AI not as a trend but as a fundamental shift in how great software is built.
THE OPPORTUNITY AT A GLANCE :
AI-First Engineering Culture :
- Own AI adoption across your squads - from LLM tooling integration to automation-first delivery workflows. Make AI a default, not an afterthought.
Hands-On Engineering Leadership :
- Stay close to the code. Lead architecture reviews, unblock engineers, and set the technical bar - not just the management agenda.
People & Org Builder :
- Grow engineers into leaders. Build squads of 6-15 across functions. Drive hiring, career frameworks, and a culture of psychological safety.
KEY RESPONSIBILITIES :
1. Hands-On Technical Engagement :
- Remain deeply embedded in the technical work : participate in design reviews, architecture decisions, and critical code reviews
- Set and uphold the engineering quality bar : performance benchmarks, security standards, test coverage, and release quality
- Provide technical direction on backend platform strategy, API design, service decomposition, and data architecture
- Identify and resolve systemic technical debt and architectural risks across team-owned services
- Unblock engineers by diving into complex problems : debugging, pair programming, and system analysis when it matters
- Own key technical decisions in collaboration with Tech Leads and Principal Engineers; balance pragmatism with long-term sustainability
2. AI Adoption, Integration & Transformation (2026 Mandate) :
- Define and execute the team's AI adoption roadmap - from developer tooling to product-facing AI features
- Champion the integration of GenAI tools (GitHub Copilot, Cursor, Claude, ChatGPT) across the full engineering workflow : coding, testing, documentation, incident response
- Embed LLM-powered capabilities into the product : recommendation engines, intelligent search, conversational interfaces, content generation, and predictive systems
- Lead evaluation and adoption of AI-assisted SDLC practices : automated code review, AI-generated test suites, intelligent observability, and anomaly detection
- Partner with Data Science and ML Platform teams to productionise ML models with robust MLOps pipelines
- Build team literacy in prompt engineering, RAG (Retrieval-Augmented Generation), and AI agent frameworks (see the retrieval sketch after this list)
- Create an experimentation culture : run structured AI pilots, measure productivity impact, and scale what works
- Stay ahead of the AI tooling landscape and advise senior leadership on strategic AI investments and engineering implications
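To ground the RAG literacy item above (see the pointer in that bullet), here is a toy retrieval step in Python: a cosine-similarity lookup over a tiny in-memory corpus, with the embedding function left as a hypothetical stub. A production system would use a real embedding model and a vector database.

```python
# Toy RAG retrieval step: rank documents by cosine similarity to a query.
# `embed` is a hypothetical stub; production code would call a real
# embedding model and store vectors in a vector DB, not a Python list.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stub embedding: hash characters into a fixed-size vector.
    vec = np.zeros(64)
    for i, ch in enumerate(text.encode("utf-8")):
        vec[(i + ch) % 64] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

CORPUS = [
    "Refund policy: refunds are processed within 5 business days.",
    "Shipping: standard delivery takes 3-7 days.",
    "Support hours: 9am-6pm IST, Monday to Friday.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(CORPUS, key=lambda d: float(np.dot(embed(d), q)), reverse=True)
    return ranked[:k]  # top-k chunks to prepend to the LLM prompt

print(retrieve("how long do refunds take?"))
```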
3. People Leadership & Team Development :
- Lead, manage, and grow squads of 6 - 15 engineers across seniority levels (L2 through L6 / Junior through Staff)
- Conduct structured 1:1s, career growth conversations, and development planning with every direct report
- Design and execute personalised AI upskilling programmes; ensure every engineer develops practical AI fluency by end of 2026
- Build and maintain a high-performance team culture : clarity of ownership, accountability, fast feedback loops, and psychological safety
- Drive performance management fairly and rigorously : recognise top performers, manage underperformance constructively
- Lead technical hiring end-to-end : define job requirements, conduct bar-raising interviews, and make data-driven hire decisions
- Contribute to engineering career frameworks and level definitions in partnership with the VP / Director of Engineering
4. Engineering Delivery & Execution Excellence :
- Own end-to-end delivery for multiple product squads, from planning and scoping through production release and post-launch stability
- Implement and refine agile delivery frameworks (Scrum, Kanban, Shape Up) calibrated to squad needs and product cadence
- Drive predictable delivery : maintain healthy sprint velocity, manage WIP limits, and ensure dependency resolution across teams.
- Establish and own engineering KPIs : DORA metrics (deployment frequency, lead time, MTTR, change failure rate), uptime SLOs, and velocity trends (see the metrics sketch after this list)
- Lead incident management : build blameless post-mortem culture, own RCA processes, and drive systemic reliability improvements
- Balance technical debt repayment with feature velocity; negotiate prioritisation transparently with Product leadership
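For concreteness on the KPI bullet above (see the pointer there), here is a small Python sketch computing two DORA metrics, deployment frequency and mean lead time, from a hypothetical list of deployment records; the record format and window are illustrative assumptions.

```python
# Sketch: compute deployment frequency and mean lead time (commit -> deploy)
# from deployment records. The record format is a hypothetical example.
from datetime import datetime, timedelta

deploys = [  # illustrative data: (commit time, deploy time)
    (datetime(2026, 1, 5, 10), datetime(2026, 1, 5, 16)),
    (datetime(2026, 1, 7, 9),  datetime(2026, 1, 8, 11)),
    (datetime(2026, 1, 12, 14), datetime(2026, 1, 12, 18)),
]

window_days = 14
freq_per_week = len(deploys) / (window_days / 7)

lead_times = [deploy - commit for commit, deploy in deploys]
mean_lead = sum(lead_times, timedelta()) / len(lead_times)

print(f"deployment frequency: {freq_per_week:.1f}/week")
print(f"mean lead time: {mean_lead}")
```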
5. Strategic Leadership & Cross-Functional Influence :
- Serve as the primary engineering partner for Product, Design, Data, and Business stakeholders; translate ambiguity into executable engineering plans
- Participate in quarterly roadmap planning, capacity forecasting, and OKR definition for engineering teams
- Represent engineering in leadership forums; articulate technical constraints, risks, and opportunities in business terms
- Contribute to org-wide engineering strategy : platform investments, build-vs-buy decisions, and shared infrastructure priorities
- Build relationships across geographies (Mumbai HQ + distributed teams) to maintain alignment and delivery cohesion
- Act as a culture carrier and ambassador for engineering excellence, innovation, and responsible AI use
AI TRANSFORMATION LEADERSHIP 2026 EXPECTATIONS :
In 2026, Engineering Managers at this organisation are expected to be active architects of AI transformation, not passive observers. The following outlines the specific AI leadership expectations for this role :
AI Developer Productivity
- Drive measurable uplift in developer velocity through AI tooling adoption. Target : 30%+ reduction in code review cycle time and 40%+ increase in test coverage automation by Q3 2026.
LLM & GenAI Product Features
- Own delivery of GenAI-powered product capabilities : intelligent content, semantic search, personalisation, and conversational UX in production, at scale.
AI-Augmented Observability
- Implement AI-driven monitoring and anomaly detection pipelines. Reduce MTTR by leveraging predictive alerting, intelligent runbooks, and auto-remediation scripts.
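One way to picture the predictive-alerting idea in this bullet, as an illustrative sketch rather than the organisation's actual tooling: a rolling z-score detector in Python that flags metric points far outside the recent mean. Window size and threshold are arbitrary assumptions.

```python
# Rolling z-score anomaly sketch: flag points > 3 standard deviations from
# the trailing-window mean. Window size and threshold are arbitrary choices.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(series, window=30, threshold=3.0):
    recent = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(series):
        if len(recent) >= 2:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                alerts.append((i, value))  # candidate page / auto-remediation
        recent.append(value)
    return alerts

# Steady latency with one spike; the spike should be flagged.
latencies = [100 + (i % 5) for i in range(60)] + [400] + [100] * 10
print(detect_anomalies(latencies))
```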
Team AI Fluency :
- Build mandatory AI literacy across all engineering levels.
- Every engineer understands prompt engineering basics, AI ethics guardrails, and responsible AI deployment practices.
Responsible AI Governance :
- Partner with Security, Legal, and Data Privacy to ensure all AI deployments meet compliance standards, bias mitigation requirements, and explainability benchmarks.
TECHNOLOGY STACK & DOMAIN FAMILIARITY REQUIRED :
- Languages: Java / Go / Python / Node.js / PHP / Rust (must be hands-on in at least 2)
- Cloud: AWS / GCP / Azure (multi-cloud exposure strongly preferred)
- AI & GenAI: OpenAI / Anthropic / Gemini APIs / LangChain / LlamaIndex / RAG / Vector DBs / GitHub Copilot / Cursor / Hugging Face
- Containers: Docker / Kubernetes / Helm / Service Mesh (Istio / Linkerd)
- Databases: PostgreSQL / MongoDB / Redis / Cassandra / Elasticsearch / Pinecone (Vector DB)
- Messaging: Apache Kafka / RabbitMQ / AWS SQS/SNS / Google Pub/Sub
- MLOps & DataOps: MLflow / Kubeflow / SageMaker / Vertex AI / Airflow / dbt
- Observability: Datadog / Prometheus / Grafana / OpenTelemetry / Jaeger / ELK Stack
- CI/CD & IaC: GitHub Actions / ArgoCD / Jenkins / Terraform / Ansible / Backstage (IDP)
QUALIFICATIONS & CANDIDATE PROFILE :
Education :
- B.E. / B.Tech or M.E. / M.Tech from a Tier-I or Tier-II Institution - CS, IS, ECE, AI/ML streams strongly preferred
- Demonstrated engineering depth and leadership impact may complement institution pedigree
Experience :
- 10 to 14 years of progressive engineering experience, with at least 3 years in a formal Engineering Manager or equivalent people-leadership role
- Proven track record of managing and scaling engineering teams (6-15+ engineers) in a fast-growing SaaS or digital product environment
- Hands-on backend engineering background; must be able to read, write, and critique production code
- Direct experience driving AI/ML feature delivery or AI tooling adoption within engineering organisations
- Exposure across start-up, mid-size, and large-scale product organisations preferred; adaptability is a core requirement
- Strong CS fundamentals: distributed systems, algorithms, system design, and software architecture
- Demonstrated career stability: a minimum of 2 years of average tenure per organisation.
The Ideal Engineering Manager in 2026 :
- Leads with context, not control; empowers engineers while maintaining accountability and quality
- Is fluent in both people language and technical language; switches registers naturally with engineers and executives alike
- Sees AI as a force multiplier for the team, not a threat; actively experiments with and advocates for AI tooling
- Measures success by team outcomes, not personal output; takes pride in what the team ships, not what they build alone
- Creates feedback loops obsessively: between product and engineering, between seniors and juniors, between metrics and decisions
- Has strong opinions, loosely held; brings conviction to discussions but updates on evidence
- Invests in engineering excellence as seriously as delivery velocity; knows that quality and speed are not opposites
WHY THIS ROLE STANDS APART :
AI Transformation at Scale :
- Lead one of the most significant AI adoption programmes in India's digital media sector.
- Your decisions will shape how hundreds of engineers work in 2026 and beyond.
Hands-On & Strategic Balance :
- A rare EM role that actively encourages technical depth.
- Stay close to the code while owning the people agenda - the best of both worlds.
Established Platform, Real Scale :
- 500-1,000 engineers, proven product-market fit, and the org maturity to execute.
- This is not a greenfield startup gamble; it is a serious company with serious ambition.
Clear Leadership Growth Path :
- A visible, direct path toward Director / VP of Engineering.
- Senior leadership is invested in growing its next generation of technology executives.
JOB DESCRIPTION:
Location: Pune, Mumbai
Mode of Work : 3 days from Office
Key Skills: DSA (collections, hash maps, trees, linked lists, arrays, etc.), core OOP concepts (multithreading, multiprocessing, polymorphism, inheritance, etc.), annotations in Spring and Spring Boot, key Java 8 features, database optimization, microservices, and REST APIs
- Design, develop, and maintain low-latency, high-performance enterprise applications using Core Java (Java 5.0 and above).
- Implement and integrate APIs using Spring Framework and Apache CXF.
- Build microservices-based architecture for scalable and distributed systems.
- Collaborate with cross-functional teams for high/low-level design, development, and deployment of software solutions.
- Optimize performance through efficient multithreading, memory management, and algorithm design.
- Ensure best coding practices, conduct code reviews, and perform unit/integration testing.
- Work with RDBMS (preferably Sybase) for backend data integration.
- Analyze complex business problems and deliver innovative technology solutions in the financial/trading domain.
- Work in Unix/Linux environments for deployment and troubleshooting.
We are looking for an experienced AI Technical Architect who can design and lead end-to-end AI/ML solutions, define scalable architecture, and guide development teams in building intelligent applications aligned with business goals.
Key Responsibilities:
- Design AI/ML architecture and technical solutions.
- Lead AI strategy, model deployment, and integration.
- Build scalable AI pipelines and cloud-based solutions.
- Work closely with data scientists, developers, and stakeholders.
- Ensure best practices in MLOps, automation, and performance optimization.
- Evaluate new AI technologies and frameworks.
JOB DETAILS:
* Job Title: Head of Engineering/Senior Product Manager
* Industry: Digital transformation excellence provider
* Salary: Best in Industry
* Experience: 12-20 years
* Location: Mumbai
Job Description
Role Overview
The VP / Head of Technology will lead the company’s technology function across engineering, product development, cloud infrastructure, security, and AI-led initiatives. This role focuses on delivering scalable, high-quality technology solutions across the company’s core verticals including eCommerce, Procurement & e-Sourcing, ERP integrations, Sustainability/ESG, and Business Services.
This leader will drive execution, ensure technical excellence, modernize platforms, and collaborate closely with business and delivery teams.
Roles and Responsibilities:
Technology Execution & Architecture Leadership
· Own and execute the technology roadmap aligned with business goals.
· Build and maintain scalable architecture supporting multiple verticals.
· Enforce engineering best practices, code quality, performance, and security.
· Lead platform modernization including microservices, cloud-native architecture, API-first systems, and integration frameworks.
Product & Engineering Delivery
· Manage multi-product engineering teams across eCommerce platforms, procurement systems, ERP integrations, analytics, and ESG solutions.
· Own the full SDLC — requirements, design, development, testing, deployment, support.
· Implement Agile, DevOps, CI/CD for faster releases and improved reliability.
· Oversee product/platform interoperability across all company systems.
Vertical-Specific Technology Leadership
Procurement Tech:
· Lead architecture and enhancements of procurement and indirect spend platforms.
· Ensure interoperability with SAP Ariba, Coupa, Oracle, MS Dynamics, etc.
eCommerce:
· Drive development of scalable B2B/B2C commerce platforms, headless commerce, marketplace integrations, and personalization capabilities.
Sustainability/ESG:
· Support development of GHG tracking, reporting systems, and sustainability analytics platforms.
Business Services:
· Enhance operational platforms with automation, workflow management, dashboards, and AI-driven efficiency tools.
Data, Cloud, Security & Infrastructure
· Own cloud infrastructure strategy (Azure/AWS/GCP).
· Ensure adherence to compliance standards (SOC2, ISO 27001, GDPR).
· Lead cybersecurity policies, monitoring, threat detection, and recovery planning.
· Drive observability, cost optimization, and system scalability.
AI, Automation & Innovation
· Integrate AI/ML, analytics, and automation into product platforms and service delivery.
· Build frameworks for workflow automation, supplier analytics, personalization, and operational efficiency.
· Lead R&D for emerging tech aligned to business needs.
Leadership & Team Management
· Lead and mentor engineering managers, architects, developers, QA, and DevOps.
· Drive a culture of ownership, innovation, continuous learning, and performance accountability.
· Build capability development frameworks and internal talent pipelines.
Stakeholder Collaboration
· Partner with Sales, Delivery, Product, and Business Teams to align technology outcomes with customer needs.
· Ensure transparent reporting on project status, risks, and technology KPIs.
· Manage vendor relationships, technology partnerships, and external consultants.
Education, Training, Skills, and Experience Requirements:
Experience & Background
· 16+ years in technology execution roles, including 5–7 years in senior leadership.
· Strong background in multi-product engineering for B2B platforms or enterprise systems.
· Proven delivery experience across: eCommerce, ERP integrations, procurement platforms, ESG solutions, and automation.
Technical Skills
· Expertise in cloud platforms (Azure/AWS/GCP), microservices architecture, API frameworks.
· Strong grasp of procurement tech, ERP integrations, eCommerce platforms, and enterprise-scale systems.
· Hands-on exposure to AI/ML, automation tools, data engineering, and analytics stacks.
· Strong understanding of security, compliance, scalability, performance engineering.
Leadership Competencies
· Execution-focused technology leadership.
· Strong communication and stakeholder management skills.
· Ability to lead distributed teams, manage complexity, and drive measurable outcomes.
· Innovation mindset with practical implementation capability.
Education
· Bachelor’s or Master’s in Computer Science/Engineering or equivalent.
· Additional leadership education (MBA or similar) is a plus, not mandatory.
Travel Requirements
· Occasional travel for client meetings, technology reviews, or global delivery coordination.
Must-Haves
· 10+ years of technology experience, with at least 6 years leading large (50-100+) multi-product engineering teams.
· Must have worked on B2B platforms; experience in Procurement Tech or Supply Chain is required.
· Min. 10+ years of expertise in Cloud-Native Architecture: expert-level design in Azure, AWS, or GCP using microservices, Kubernetes (K8s), and Docker.
· Min. 8+ years of expertise in Modern Engineering Practices: advanced DevOps, CI/CD pipelines, and automated testing frameworks (Selenium, Cypress, etc.).
· Hands-on leadership experience in Security & Compliance.
· Min. 3+ years of expertise in AI & Data Engineering: practical implementation of LLMs, predictive analytics, or AI-driven automation.
· Strong technology execution leadership, with ownership of end-to-end technology roadmaps aligned to business outcomes.
· Min. 6+ years of expertise in B2B eCommerce: architecture of headless commerce, marketplace integrations, and complex B2B catalog management.
· Strong product management exposure
· Proven experience in leading end-to-end team operations
· Relevant experience in product-driven organizations or platforms
· Strong Subject Matter Expertise (SME)
Education: Master's degree.
**************
Joining time / Notice Period: Immediate to 45 days.
Location: Andheri
5 days working (3 days from office, 2 days from home)
Job Title : Senior Backend Developer (Node.js + AWS + MongoDB)
Experience : 4+ Years
Location : Andheri, Mumbai (Work From Office)
About the Role :
We are looking for a highly skilled Senior Backend Developer with strong expertise in Node.js (NestJS), AWS, and MongoDB to join our growing engineering team.
This role requires someone who takes ownership, is proactive, and enjoys building scalable, high-performance backend systems in a fast-paced environment.
Key Responsibilities :
- Architect, design, and develop scalable backend services using Node.js (NestJS).
- Design and manage cloud infrastructure on AWS Services (EC2, ECS, RDS, Lambda, etc.).
- Develop and maintain high-performance database solutions using MongoDB.
- Work with Kafka, Docker, and serverless frameworks (SST) for efficient deployments.
- Optimize system performance, scalability, and reliability across services.
- Ensure application security, best practices, and compliance standards.
- Collaborate with cross-functional teams to deliver robust product features.
- Take end-to-end ownership of features from design to deployment.
Technical Requirements :
- 4+ years of backend development experience.
- 3+ years of hands-on experience with Node.js.
- 2+ years of hands-on experience with AWS.
- Strong experience with NestJS framework.
- Solid experience with MongoDB and database design.
- Experience with Kafka, Docker, and serverless architecture.
- Understanding of system design, scalability, and performance optimization.
Good to Have (Bonus Skills) :
- Experience with Python or other backend languages.
- Exposure to Agentic AI use cases or implementations.
- Strong understanding of security best practices.
What We’re Looking For :
- Curious mindset and eagerness to learn new technologies.
- Proactive problem solver with strong ownership attitude.
- Strong team player with effective communication skills.
- Positive, energetic, and passionate about building great systems.
JOB DETAILS:
- Job Title: Lead II – Software Engineering – React Native (Mobile App Architecture, Performance Optimization & Scalability)
- Industry: Global digital transformation solutions provider
- Experience: 7-9 years
- Working Days: 5 days/week
- Job Location: Mumbai
- CTC Range: Best in Industry
Job Description
Job Title
Lead React Native Developer (6–8 Years Experience)
Position Overview
We are looking for a Lead React Native Developer to provide technical leadership for our mobile applications. This role involves owning architectural decisions, setting development standards, mentoring teams, and driving scalable, high-performance mobile solutions aligned with business goals.
Must-Have Skills
- 6–8 years of experience in mobile application development
- Extensive hands-on experience leading React Native projects
- Expert-level understanding of React Native architecture and internals
- Strong knowledge of mobile app architecture patterns
- Proven experience with performance optimization and scalability
- Experience in technical leadership, team management, and mentorship
- Strong problem-solving and analytical skills
- Excellent communication and collaboration abilities
- Proficiency in modern React Native development practices
- Experience with Expo toolkit and libraries
- Strong understanding of custom hooks development
- Focus on writing clean, maintainable, and scalable code
- Understanding of mobile app lifecycle
- Knowledge of cross-platform design consistency
Good-to-Have Skills
- Experience with microservices architecture
- Knowledge of cloud platforms such as AWS, Firebase, etc.
- Understanding of DevOps practices and CI/CD pipelines
- Experience with A/B testing and feature flag implementation
- Familiarity with machine learning integration in mobile applications
- Exposure to innovation-driven technical decision-making
Skills: React Native, mobile app development, DevOps, machine learning
******
Notice period - 0 to 15 days only (Need Feb Joiners)
Location: Navi Mumbai, Belapur
Job Title : Node.js Developer / Backend Developer
Experience : 4+ Years
Job Location : Mumbai – Andheri
Work Mode : Work From Office (5 Days a Week)
Job Type : Full-time Opportunity
Role Overview :
We are seeking an experienced Node.js / Backend Developer to design, develop, and maintain scalable backend systems.
The ideal candidate will have strong hands-on experience with Node.js, Nest.js, relational and NoSQL databases, and AWS cloud services.
You will work closely with frontend developers, DevOps, and product teams to deliver secure, high-performance, and reliable backend solutions.
Mandatory Skills : Node.js, Nest.js, MongoDB, PostgreSQL, AWS, REST API development, strong backend fundamentals.
Key Responsibilities :
• Design, develop, and maintain scalable backend applications using Node.js & Nest.js
• Build and manage RESTful APIs and backend services
• Work with MongoDB and PostgreSQL for efficient data storage and retrieval
• Develop cloud-ready applications and deploy them on AWS
• Ensure application performance, security, and scalability
• Write clean, well-documented, and maintainable code
• Participate in code reviews and follow best engineering practices
• Troubleshoot, debug, and optimize existing applications
• Collaborate with cross-functional teams for end-to-end delivery
Required Skills & Qualifications :
• 4+ years of experience in Backend / Node.js development
• Strong hands-on experience with Node.js and Nest.js
• Experience working with MongoDB and PostgreSQL
• Good understanding of AWS services (EC2, S3, RDS, etc.)
• Experience building RESTful APIs
• Understanding of backend architecture, design patterns, and best practices
• Strong problem-solving and debugging skills
• Familiarity with version control systems (Git)
Good-to-Have Skills :
• Experience with microservices architecture
• Knowledge of Docker and CI/CD pipelines
• Exposure to message queues or event-driven systems
• Basic understanding of frontend-backend integration
Description :
About the Role :
We're seeking a dynamic and technically strong Engineering Manager to lead, grow, and inspire our high-performing engineering team. In this role, you'll drive technical strategy, deliver scalable systems, and ensure SolarSquare's platforms continue to delight users at scale. You'll combine hands-on technical expertise with a passion for mentoring engineers, shaping culture, and collaborating across functions to bring bold ideas to life in a fast-paced startup environment.
Responsibilities :
- Lead and manage a team of full stack developers (SDE1 to SDE3), fostering a culture of ownership, technical excellence, and continuous learning.
- Drive the technical vision and architectural roadmap for the MERN stack platform, ensuring scalability, security, and high performance.
- Collaborate closely with product, design, and business teams to align engineering priorities with business goals and deliver impactful products.
- Ensure engineering best practices across code reviews, testing strategies, and deployment pipelines (CI/CD).
- Implement robust observability and monitoring systems to proactively identify and resolve issues in production environments.
- Optimize system performance and cost-efficiency in cloud infrastructure (AWS, Azure, GCP).
- Manage technical debt effectively, balancing long-term engineering health with short-term product needs.
- Recruit, onboard, and develop top engineering talent, creating growth paths for team members.
- Drive delivery excellence by setting clear goals, metrics, and expectations, and ensuring timely execution of projects.
- Advocate for secure coding practices and compliance with data protection standards (e.g., OWASP, GDPR).
Requirements :
- 8 to 12 years of experience in full stack development, with at least 2+ years in a technical leadership or people management role.
- Proven expertise in the MERN stack (MongoDB, Express.js, React.js, Node.js) and strong understanding of distributed systems and microservices.
- Hands-on experience designing and scaling high-traffic web applications.
- Deep knowledge of cloud platforms (AWS, Azure, GCP), containerization (Docker), and orchestration tools (Kubernetes).
- Strong understanding of observability practices and tools (Prometheus, Grafana, ELK, Datadog) for maintaining production-grade systems.
- Track record of building and leading high-performing engineering teams in agile environments.
- Excellent communication and stakeholder management skills, with the ability to align technical efforts with business objectives.
- Experience with cost optimization, security best practices, and performance tuning in cloud-native environments.
Bonus : Prior experience in established product companies, or experience scaling teams in an early-stage startup and designing systems from scratch.
Work Arrangement :
- Flexible work setup, including hybrid options. Monday to Friday.
JOB DESCRIPTION:
Location: Pune, Mumbai
Mode of Work : 3 days from Office
DSA (Collections, HashMaps, Trees, LinkedLists, Arrays, etc.), core OOPs concepts (multithreading, multiprocessing, polymorphism, inheritance, etc.), annotations in Spring and Spring Boot, Java 8 key features, database optimization, microservices, and REST APIs
- Design, develop, and maintain low-latency, high-performance enterprise applications using Core Java (Java 5.0 and above).
- Implement and integrate APIs using Spring Framework and Apache CXF.
- Build microservices-based architecture for scalable and distributed systems.
- Collaborate with cross-functional teams for high/low-level design, development, and deployment of software solutions.
- Optimize performance through efficient multithreading, memory management, and algorithm design.
- Ensure best coding practices, conduct code reviews, and perform unit/integration testing.
- Work with RDBMS (preferably Sybase) for backend data integration.
- Analyze complex business problems and deliver innovative technology solutions in the financial/trading domain.
- Work in Unix/Linux environments for deployment and troubleshooting.
Review Criteria
- Strong Data Scientist / Machine Learning / AI Engineer profile
- 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
- Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
- Hands-on experience in a minimum of 2+ use cases from among recommendation systems, image data, fraud/risk detection, price modelling, and propensity models
- Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
- Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
- Preferred (Company) – Must be from product companies
Job Specific Criteria
- CV Attachment is mandatory
- What's your current company?
- Which use cases do you have hands-on experience with?
- Are you ok for Mumbai location (if candidate is from outside Mumbai)?
- Reason for change (if candidate has been in current company for less than 1 year)?
- Reason for hike (if greater than 25%)?
Role & Responsibilities
- Partner with Product to spot high-leverage ML opportunities tied to business metrics.
- Wrangle large structured and unstructured datasets; build reliable features and data contracts.
- Build and ship models to:
- Enhance customer experiences and personalization
- Boost revenue via pricing/discount optimization
- Power user-to-user discovery and ranking (matchmaking at scale)
- Detect and block fraud/risk in real time
- Score conversion/churn/acceptance propensity for targeted actions (a minimal model sketch follows this list)
- Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
- Design and run A/B tests with guardrails.
- Build monitoring for model/data drift and business KPIs
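To make the expectations above concrete, here is a minimal sketch of a propensity/fraud-style classifier, assuming scikit-learn, numeric features, and a hypothetical events.csv extract with a "converted" label column; PR-AUC is used because these problems are typically class-imbalanced. This is an illustration under those assumptions, not a prescribed implementation.

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import average_precision_score  # PR-AUC suits imbalanced labels

df = pd.read_csv("events.csv")  # hypothetical training extract with numeric features
X, y = df.drop(columns=["converted"]), df["converted"]  # "converted" is an assumed label column
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = GradientBoostingClassifier().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]
print("PR-AUC:", average_precision_score(y_test, scores))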
Ideal Candidate
- 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
- Proven, hands-on success in at least two (preferably 3–4) of the following:
- Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
- Fraud/risk detection (severe class imbalance, PR-AUC)
- Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
- Propensity models (payment/churn)
- Programming: strong Python and SQL; solid git, Docker, CI/CD.
- Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
- ML breadth: recommender systems, NLP or user profiling, anomaly detection.
- Communication: clear storytelling with data; can align stakeholders and drive decisions.
Roles & Responsibilities
- Data Engineering Excellence: Design and implement data pipelines using formats like JSON, Parquet, CSV, and ORC, utilizing batch and streaming ingestion.
- Cloud Data Migration Leadership: Lead cloud migration projects, developing scalable Spark pipelines.
- Medallion Architecture: Implement Bronze, Silver, and Gold tables for scalable data systems (see the pipeline sketch after this list).
- Spark Code Optimization: Optimize Spark code to ensure efficient cloud migration.
- Data Modeling: Develop and maintain data models with strong governance practices.
- Data Cataloging & Quality: Implement cataloging strategies with Unity Catalog to maintain high-quality data.
- Delta Live Table Leadership: Lead the design and implementation of Delta Live Tables (DLT) pipelines for secure, tamper-resistant data management.
- Customer Collaboration: Collaborate with clients to optimize cloud migrations and ensure best practices in design and governance.
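As one illustration of the Medallion flow referenced above, here is a minimal Bronze-to-Silver promotion sketch, assuming a Spark runtime with Delta Lake available (e.g., Databricks); the S3 paths, table, and column names are hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

bronze = spark.read.format("delta").load("s3://lake/bronze/orders")  # raw ingested layer
silver = (
    bronze.dropDuplicates(["order_id"])               # basic cleansing and conformance
          .filter(F.col("order_ts").isNotNull())
          .withColumn("order_date", F.to_date("order_ts"))
)
silver.write.format("delta").mode("overwrite").save("s3://lake/silver/orders")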
Educational Qualifications
- Experience: Minimum 5 years of hands-on experience in data engineering, with a proven track record in complex pipeline development and cloud-based data migration projects.
- Education: Bachelor’s or higher degree in Computer Science, Data Engineering, or a related field.
- Skills
- Must-have: Proficiency in Spark, SQL, Python, and other relevant data processing technologies. Strong knowledge of Databricks and its components, including Delta Live Table (DLT) pipeline implementations. Expertise in on-premises to cloud Spark code optimization and Medallion Architecture.
Good to Have
- Familiarity with AWS services (experience with additional cloud platforms like GCP or Azure is a plus).
Soft Skills
- Excellent communication and collaboration skills, with the ability to work effectively with clients and internal teams.
- Certifications
- AWS/GCP/Azure Data Engineer Certification.
Specific Knowledge/Skills
- 4-6 years of experience
- Proficiency in Python programming.
- Basic knowledge of front-end development.
- Basic knowledge of Data manipulation and analysis libraries
- Code versioning and collaboration (Git)
- Knowledge of libraries for extracting data from websites (see the scraping sketch after this list)
- Knowledge of SQL and NoSQL databases
- Familiarity with RESTful APIs
- Familiarity with Cloud (Azure /AWS) technologies
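For the web-extraction item above, a minimal sketch using requests and BeautifulSoup; example.com stands in for a real target and the h2 selector is purely illustrative.

import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com", timeout=10)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")
headings = [h.get_text(strip=True) for h in soup.select("h2")]  # collect all h2 headings
print(headings)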
Review Criteria
- Strong Dremio / Lakehouse Data Architect profile
- 5+ years of experience in Data Architecture / Data Engineering, with minimum 3+ years hands-on in Dremio
- Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
- Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts
- Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
- Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
- Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
- Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
- Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies
Preferred
- Preferred (Nice-to-have) – Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments
Job Specific Criteria
- CV Attachment is mandatory
- How many years of experience you have with Dremio?
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 Days WFO?
- Virtual Interview requires video to be on, are you okay with it?
Role & Responsibilities
You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
Ideal Candidate
- Bachelor’s or master’s in computer science, Information Systems, or related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning (see the Parquet sketch after this list).
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
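To illustrate the object-storage/Parquet bullet above, a minimal PyArrow sketch that writes a Hive-style partitioned dataset, the layout that lakehouse engines such as Dremio prune at query time; a local path stands in for S3/ADLS/GCS, and the columns are hypothetical.

import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "region": ["emea", "emea", "apac"],
    "amount": [120.0, 75.5, 210.0],
})
# Hive-style partitioning by region; query engines can skip irrelevant partitions
pq.write_to_dataset(table, root_path="warehouse/sales", partition_cols=["region"])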
Job Summary:
Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.
Key Responsibilities:
- Design, develop, and deploy backend services and APIs using Python.
- Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.); a minimal Lambda sketch follows this list.
- Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
- Implement containerized environments using Docker and manage orchestration via Kubernetes.
- Write automation and scripting solutions in Bash/Shell to streamline operations.
- Work with relational databases like MySQL and SQL, including query optimization.
- Collaborate directly with clients to understand requirements and provide technical solutions.
- Ensure system reliability, performance, and scalability across environments.
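As a minimal illustration of the AWS work above, a sketch of a Python Lambda handler reacting to S3 uploads; it assumes JSON objects are uploaded and that the function is wired to S3 event notifications.

import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Iterate over the S3 records delivered to the function by event notifications
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        payload = json.loads(body)  # assumes JSON objects are uploaded
        print(f"processed {key}: {len(payload)} top-level fields")
    return {"statusCode": 200}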
Required Skills:
- 3.5+ years of hands-on experience in Python development.
- Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
- Good understanding of Terraform or other Infrastructure as Code tools.
- Proficient with Docker and container orchestration using Kubernetes.
- Experience with CI/CD tools like Jenkins or GitHub Actions.
- Strong command of SQL/MySQL and scripting with Bash/Shell.
- Experience working with external clients or in client-facing roles.
Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
- Familiarity with Agile/Scrum methodologies.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
Type: Client-Facing Technical Architecture, Infrastructure Solutioning & Domain Consulting (India + International Markets)
Role Overview
Tradelab is seeking a senior Solution Architect who can interact with both Indian and international clients (Dubai, Singapore, London, US), helping them understand our trading systems, OMS/RMS/CMS stack, HFT platforms, feed systems, and Matching Engine. The architect will design scalable, secure, and ultra-low-latency deployments tailored to global forex markets, brokers, prop firms, liquidity providers, and market makers.
Key Responsibilities
1. Client Engagement (India + International Markets)
- Engage with brokers, prop trading firms, liquidity providers, and financial institutions across India, Dubai, Singapore, and global hubs.
- Explain Tradelab’s capabilities, architecture, and deployment options.
- Understand region-specific latency expectations, connectivity options, and regulatory constraints.
2. Requirement Gathering & Solutioning
- Capture client needs, throughput, order concurrency, tick volumes, and market data handling.
- Assess infra readiness (cloud/on-prem/colo).
- Propose architecture aligned with forex markets.
3. Global Architecture & Deployment Design
- Design multi-region infrastructure using AWS/Azure/GCP.
- Architect low-latency routing between India–Singapore–Dubai.
- Support deployments in DCs like Equinix SG1/DX1.
4. Networking & Security Architecture
- Architect multicast/unicast feeds, VPNs, IPSec tunnels, BGP routes.
- Implement network hardening, segmentation, WAF/firewall rules.
5. DevOps, Cloud Engineering & Scalability
- Build CI/CD pipelines, Kubernetes autoscaling, cost-optimized AWS multi-region deployments.
- Design global failover models.
6. BFSI & Trading Domain Expertise
- Indian broking, international forex, LP aggregation, HFT.
- OMS/RMS, risk engines, LP connectivity, and matching engines.
7. Latency, Performance & Capacity Planning
- Benchmark and optimize cross-region latency.
- Tune performance for high tick volumes and volatility bursts.
8. Documentation & Consulting
- Prepare HLDs, LLDs, SOWs, cost sheets, and deployment playbooks.
Required Skills
- AWS: EC2, VPC, EKS, NLB, MSK/Kafka, IAM, Global Accelerator.
- DevOps: Kubernetes, Docker, Helm, Terraform.
- Networking: IPSec, GRE, VPN, BGP, multicast (PIM/IGMP).
- Message buses: Kafka, RabbitMQ, Redis Streams.
Domain Skills
- Deep Broking Domain Understanding.
- Indian broking + global forex/CFD.
- FIX protocol, LP integration, market data feeds.
- Regulations: SEBI, DFSA, MAS, ESMA.
Soft Skills
- Excellent communication and client-facing ability.
- Strong presales and solutioning mindset.
Preferred Qualifications
- B.Tech/BE/M.Tech in CS or equivalent.
- AWS Architect Professional, CCNP, CKA.
Why Join Us?
- Experience in colocation/global trading infra.
- Work with a team that expects and delivers excellence.
- A culture where risk-taking is rewarded, and complacency is not.
- Limitless opportunities for growth—if you can handle the pace.
- A place where learning is currency, and outperformance is the only metric that matters.
- The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.
This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.
Job Title : DevOps Engineer – Fintech (Product-Based)
Experience : 5+ Years
Location : Mumbai
Job Type : Full-Time | Product Company
Role Summary :
We are hiring a DevOps Engineer with strong product-based experience to manage infrastructure for a Fintech platform built on stateful microservices.
The role involves working across hybrid cloud + on-prem, with deep expertise in Kubernetes, Helm, GitOps, IaC, and Cloud Networking.
Mandatory Skills :
Product-based experience, deep Kubernetes (managed & self-managed), custom Helm Chart development, ArgoCD/FluxCD (GitOps), strong AWS/Azure cloud networking & security, IaC module development (Terraform/Pulumi/CloudFormation), experience with stateful microservices (DBs/queues/caches), multi-tenant deployments, HA/load balancing/SSL/TLS/cert management.
Key Responsibilities :
- Deploy and manage stateful microservices in production.
- Handle both managed & self-managed Kubernetes clusters.
- Develop and maintain custom Helm Charts.
- Implement GitOps pipelines using ArgoCD/FluxCD.
- Architect and operate secure infra on AWS/Azure (VPC, IAM, networking).
- Build reusable IaC modules using Terraform/CloudFormation/Pulumi.
- Design multi-tenant cluster deployments.
- Manage HA, load balancers, certificates, DNS, and networking.
Mandatory Skills :
- Product-based company experience.
- Strong Kubernetes (EKS/AKS/GKE + self-managed).
- Custom Helm Chart development.
- GitOps tools : ArgoCD/FluxCD.
- AWS/Azure cloud networking & security.
- IaC module development (Terraform/Pulumi/CloudFormation).
- Experience with stateful components (DBs, queues, caches).
- Understanding of multi-tenant deployments, HA, SSL/TLS, ingress, LB.
Backend Engineer (MongoDB / API Integrations / AWS / Vectorization)
Position Summary
We are hiring a Backend Engineer with expertise in MongoDB, data vectorization, and advanced AI/LLM integrations. The ideal candidate will have hands-on experience developing backend systems that power intelligent data-driven applications, including robust API integrations with major social media platforms (Meta, Instagram, Facebook, with expansion to TikTok, Snapchat, etc.). In addition, this role requires deep AWS experience (Lambda, S3, EventBridge) to manage serverless workflows, automate cron jobs, and execute both scheduled and manual data pulls. You will collaborate closely with frontend developers and AI engineers to deliver scalable, resilient APIs that power our platform.
Key Responsibilities
- Design, implement, and maintain backend services with MongoDB and scalable data models.
- Build pipelines to vectorize data for retrieval-augmented generation (RAG) and other AI-driven features (see the sketch after this list).
- Develop robust API integrations with major social platforms (Meta, Instagram Graph API, Facebook API; expand to TikTok, Snapchat, etc.).
- Implement and maintain AWS Lambda serverless functions for scalable backend processes.
- Use AWS EventBridge to schedule cron jobs and manage event-driven workflows.
- Leverage AWS S3 for structured and unstructured data storage, retrieval, and processing.
- Build workflows for manual and automated data pulls from external APIs.
- Optimize backend systems for performance, scalability, and reliability at high data volumes.
- Collaborate with frontend engineers to ensure smooth integration into Next.js applications.
- Ensure security, compliance, and best practices in API authentication (OAuth, tokens, etc.).
- Contribute to architecture planning, documentation, and system design reviews.
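A minimal sketch of the vectorization flow above, assuming the openai v1 Python client (OPENAI_API_KEY set) and a MongoDB Atlas collection with a vector index named vector_index; the URI, database, collection, and field names are placeholders.

from openai import OpenAI
from pymongo import MongoClient

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
docs = MongoClient("mongodb+srv://<cluster-uri>")["app"]["documents"]  # placeholder URI/names

def embed(text: str) -> list[float]:
    resp = openai_client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

docs.insert_one({"text": "Refunds are issued within 30 days.",
                 "embedding": embed("Refunds are issued within 30 days.")})

hits = docs.aggregate([{"$vectorSearch": {  # must be the first pipeline stage on Atlas
    "index": "vector_index", "path": "embedding",
    "queryVector": embed("how long do refunds take?"),
    "numCandidates": 100, "limit": 3,
}}])
print([h["text"] for h in hits])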
Required Skills/Qualifications
- Strong expertise with MongoDB (including Atlas) and schema design.
- Experience with data vectorization and embeddings (OpenAI, Pinecone, MongoDB Atlas Vector Search, etc.).
- Proven track record of social media API integrations (Meta, Instagram, Facebook; additional platforms a plus).
- Proficiency in Node.js, Python, or other backend languages for API development.
- Deep understanding of AWS services:
- Lambda for serverless functions.
- S3 for structured/unstructured data storage.
- EventBridge for cron jobs, scheduled tasks, and event-driven workflows (see the scheduling sketch after this list).
- Strong understanding of REST and GraphQL API design.
- Experience with data optimization, caching, and large-scale API performance.
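For the EventBridge item above, a minimal boto3 sketch that schedules a Lambda hourly; the rule name and function ARN are placeholders, and the Lambda would additionally need a resource-based permission allowing EventBridge to invoke it.

import boto3

events = boto3.client("events")
rule = events.put_rule(
    Name="hourly-social-pull",
    ScheduleExpression="rate(1 hour)",  # cron(...) expressions are also accepted
)
events.put_targets(
    Rule="hourly-social-pull",
    Targets=[{"Id": "pull-fn", "Arn": "arn:aws:lambda:region:acct:function:pull"}],  # placeholder ARN
)
print("rule:", rule["RuleArn"])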
Preferred Skills/Experience
- Experience with real-time data pipelines (Kafka, Kinesis, or similar).
- Familiarity with CI/CD pipelines and automated deployments on AWS.
- Knowledge of serverless architecture best practices.
- Background in SaaS platform development or data analytics systems.
Key Responsibilities:
- Design, develop, and maintain microservices using Java (Spring Boot).
- Implement RESTful APIs and ensure integration with front-end and third-party services.
- Work with AWS services such as EC2, ECS, Lambda, S3, RDS, API Gateway, CloudWatch, etc.
- Utilize CI/CD pipelines (Jenkins / GitHub Actions / AWS CodePipeline) for deployment automation.
- Implement security, scalability, and high availability best practices in microservice architecture.
- Collaborate with DevOps, QA, and Product teams to deliver robust solutions.
- Monitor and troubleshoot production issues using AWS CloudWatch / ELK / Prometheus & Grafana.
- Participate in code reviews, design discussions, and agile ceremonies.
Required Skills & Qualifications:
- Strong proficiency in Java 8+, Spring Boot, Spring Cloud, and RESTful APIs.
- Solid understanding of microservices architecture and service discovery / communication patterns (Eureka, Feign, Ribbon, etc.).
- Hands-on experience with AWS Cloud (ECS, Lambda, API Gateway, RDS, DynamoDB, S3, CloudFormation).
- Experience with Docker and Kubernetes for containerization and orchestration.
- Proficient in RDBMS / NoSQL databases (MySQL, PostgreSQL, MongoDB).
- Familiar with CI/CD pipelines, Git, and DevOps best practices.
- Knowledge of message brokers (Kafka, RabbitMQ, SQS) is a plus.
- Strong debugging, problem-solving, and analytical skills.
Job Description
Position - Full stack Developer
Location - Mumbai
Experience - 2-5 Years
Who are we
Based out of IIT Bombay, HaystackAnalytics is a HealthTech company creating clinical genomics products, which enable diagnostic labs and hospitals to offer accurate and personalized diagnostics. Supported by India's most respected science agencies (DST, BIRAC, DBT), we created and launched a portfolio of products to offer genomics in infectious diseases. Our genomics based diagnostic solution for Tuberculosis was recognized as one of top innovations supported by BIRAC in the past 10 years, and was launched by the Prime Minister of India in the BIRAC Showcase event in Delhi, 2022.
Objectives of this Role:
- Work across the full stack, building highly scalable distributed solutions that enable positive user experiences and measurable business growth
- Ideate and develop new product features in collaboration with domain experts in healthcare and genomics
- Develop state of the art enterprise standard front-end and backend services
- Develop cloud platform services based on container orchestration platform
- Continuously embrace automation for repetitive tasks
- Ensure application performance, uptime, and scale, maintaining high standards of code quality by using clean coding principles and solid design patterns
- Build robust tech modules that are Unit Testable, Automating recurring tasks and processes
- Engage effectively with team members and collaborate to upskill and unblock each other
Frontend Skills
- HTML 5
- CSS framework (LESS / SASS / Tailwind)
- ES6 / TypeScript
- Desktop app (Electron / Tauri)
- Component library (Bootstrap, Material UI, Lit)
- Responsive web layout (Flex layout, Grid layout)
- Package manager (yarn / npm / turbo)
- Build tools (Vite / Webpack / Parcel)
- Frameworks (React with Redux or MobX / Next.js)
- Design patterns
- Testing (Jest / Mocha / Jasmine / Cypress)
- Functional programming concepts
- Scripting (PowerShell, bash, Python)
Backend Skills
- Node.js (Express / NestJS)
- Python / Rust
- REST API
- SOLID Design Principles
- Database (PostgreSQL / MySQL / Redis / Cassandra / MongoDB)
- Caching (Redis)
- Container technology (Docker / Kubernetes)
- Cloud (Azure, AWS, OpenShift, Google Cloud)
- Version control (Git)
- GitOps
- Automation (Terraform, Ansible)
Cloud Skills
- Object storage
- VPC concepts
- Containerize Deployment
- Serverless architecture
Other Skills
- Innovation and thought leadership
- UI - UX design skills
- Interest in learning new tools, languages, workflows, and philosophies to grow
- Communication
To know more about us- https://haystackanalytics.in/
MUST-HAVES:
- LLM Integration & Prompt Engineering
- Context & Knowledge Base Design
- Experience running LLM evals
NOTICE PERIOD: Immediate – 30 Days
SKILLS: LLM, AI, PROMPT ENGINEERING
NICE TO HAVES:
Data Literacy & Modelling Awareness; Familiarity with Databricks, AWS, and ChatGPT Environments
ROLE PROFICIENCY:
Role Scope / Deliverables:
- Serve as the link between business intelligence, data engineering, and AI application teams, ensuring the Large Language Model (LLM) interacts effectively with the modeled dataset.
- Define and curate the context and knowledge base that enables GPT to provide accurate, relevant, and compliant business insights.
- Collaborate with Data Analysts and System SMEs to identify, structure, and tag data elements that feed the LLM environment.
- Design, test, and refine prompt strategies and context frameworks that align GPT outputs with business objectives.
- Conduct evaluation and performance testing (evals) to validate LLM responses for accuracy, completeness, and relevance.
- Partner with IT and governance stakeholders to ensure secure, ethical, and controlled AI behavior within enterprise boundaries.
KEY DELIVERABLES:
- LLM Interaction Design Framework: Documentation of how GPT connects to the modeled dataset, including context injection, prompt templates, and retrieval logic.
- Knowledge Base Configuration: Curated and structured domain knowledge to enable precise and useful GPT responses (e.g., commercial definitions, data context, business rules).
- Evaluation Scripts & Test Results: Defined eval sets, scoring criteria, and output analysis to measure GPT accuracy and quality over time.
- Prompt Library & Usage Guidelines: Standardized prompts and design patterns to ensure consistent business interactions and outcomes.
- AI Performance Dashboard / Reporting: Visualizations or reports summarizing GPT response quality, usage trends, and continuous improvement metrics.
- Governance & Compliance Documentation: Inputs to data security, bias prevention, and responsible AI practices in collaboration with IT and compliance teams.
KEY SKILLS:
Technical & Analytical Skills:
- LLM Integration & Prompt Engineering – Understanding of how GPT models interact with structured and unstructured data to generate business-relevant insights.
- Context & Knowledge Base Design – Skilled in curating, structuring, and managing contextual data to optimize GPT accuracy and reliability.
- Evaluation & Testing Methods – Experience running LLM evals, defining scoring criteria, and assessing model quality across use cases (a minimal eval-harness sketch follows this section).
- Data Literacy & Modeling Awareness – Familiar with relational and analytical data models to ensure alignment between data structures and AI responses.
- Familiarity with Databricks, AWS, and ChatGPT Environments – Capable of working in cloud-based analytics and AI environments for development, testing, and deployment.
- Scripting & Query Skills (e.g., SQL, Python) – Ability to extract, transform, and validate data for model training and evaluation workflows.
Business & Collaboration Skills:
- Cross-Functional Collaboration – Works effectively with business, data, and IT teams to align GPT capabilities with business objectives.
- Analytical Thinking & Problem Solving – Evaluates LLM outputs critically, identifies improvement opportunities, and translates findings into actionable refinements.
- Commercial Context Awareness – Understands how sales and marketing intelligence data should be represented and leveraged by GPT.
- Governance & Responsible AI Mindset – Applies enterprise AI standards for data security, privacy, and ethical use.
- Communication & Documentation – Clearly articulates AI logic, context structures, and testing results for both technical and non-technical audiences.
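A minimal eval-harness sketch for the testing items above; call_llm is a hypothetical wrapper around the deployed GPT endpoint, and the two-case eval set with keyword checks is purely illustrative of defining scoring criteria.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to the deployed GPT endpoint")  # hypothetical wrapper

EVAL_SET = [  # illustrative only: each case pairs a prompt with a keyword the answer must contain
    {"prompt": "Which region had the highest Q3 revenue?", "must_contain": "EMEA"},
    {"prompt": "Define 'active account' per our business rules.", "must_contain": "90 days"},
]

def run_evals() -> float:
    passed = sum(case["must_contain"].lower() in call_llm(case["prompt"]).lower()
                 for case in EVAL_SET)
    return passed / len(EVAL_SET)  # simple accuracy; track over time per the dashboard above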
Company Name – Wissen Technology
Location : Pune / Bangalore / Mumbai (Based on candidate preference)
Work mode: Hybrid
Experience: 5+ years
Job Description
Wissen Technology is seeking an experienced C# .NET Developer to build and maintain applications related to streaming market data. This role involves developing message-based C#/.NET applications to process, normalize, and summarize large volumes of market data efficiently. The candidate should have a strong foundation in Microsoft .NET technologies and experience working with message-driven, event-based architecture. Knowledge of capital markets and equity market data is highly desirable.
Responsibilities
- Design, develop, and maintain message-based C#/.NET applications for processing real-time and batch market data feeds.
- Build robust routines to download and process data from AWS S3 buckets on a frequent schedule.
- Implement daily data summarization and data normalization routines.
- Collaborate with business analysts, data providers, and other developers to deliver high-quality, scalable market data solutions.
- Troubleshoot and optimize market data pipelines to ensure low latency and high reliability.
- Contribute to documentation, code reviews, and team knowledge sharing.
Required Skills and Experience
- 5+ years of professional experience programming in C# and Microsoft .NET framework.
- Strong understanding of message-based and real-time programming architectures.
- Experience working with AWS services, specifically S3, for data retrieval and processing.
- Experience with SQL and Microsoft SQL Server.
- Familiarity with Equity market data, FX, Futures & Options, and capital markets concepts.
- Excellent interpersonal and communication skills.
- Highly motivated, curious, and analytical mindset with the ability to work well both independently and in a team environment.
Education
- Bachelor’s degree in Computer Science, Engineering, or a related technical field.
We’re seeking a Back-End Developer with hands-on experience in Node.js or any other modern backend framework. You’ll be responsible for building robust, scalable APIs and server-side logic powering Zilo’s high-velocity quick commerce platform.
Key Responsibilities
Develop, maintain, and optimize back-end applications and APIs.
Work on microservices, API integrations, and data modeling.
Ensure system scalability, reliability, and performance under high traffic.
Collaborate with front-end and mobile developers for seamless API integration.
Implement best practices in code quality, security, and database design.
Troubleshoot production issues and support deployment cycles.
Technical Skills Required
Strong proficiency in Node.js, Express.js, or similar backend frameworks.
Experience in MongoDB, MySQL, or PostgreSQL.
Understanding of RESTful APIs, authentication (JWT/OAuth), and middleware.
Familiarity with AWS, Docker, or other deployment environments.
Hands-on experience with Git and CI/CD pipelines.
Knowledge of other programming languages like Python, Java, or Go is an added advantage.

One of the reputed clients in India.
Our client is looking to hire a Databricks Admin immediately.
This is PAN-India bulk hiring.
Minimum of 6-8+ years with Databricks, PySpark/Python, and AWS.
Must have AWS.
Notice period of 15-30 days is preferred.
Share profiles at hr at etpspl dot com.
Please refer/share our email with friends/colleagues who are looking for a job.
Job Title: Mid-Level .NET Developer (Agile/SCRUM)
Location: Mohali, PTP, or anywhere else
Night Shift from 6:30 pm to 3:30 am IST
Experience:
5 Years
Job Summary:
We are seeking a proactive and detail-oriented Mid-Level .NET Developer to join our dynamic team. The ideal candidate will be responsible for designing, developing, and maintaining high-quality applications using Microsoft technologies with a strong emphasis on .NET Core, C#, Web API, and modern front-end frameworks. You will collaborate with cross-functional teams in an Agile/SCRUM environment and participate in the full software development lifecycle—from requirements gathering to deployment—while ensuring adherence to best coding and delivery practices.
Key Responsibilities:
- Design, develop, and maintain applications using C#, .NET, .NET Core, MVC, and databases such as SQL Server, PostgreSQL, and MongoDB.
- Create responsive and interactive user interfaces using JavaScript, TypeScript, Angular, HTML, and CSS.
- Develop and integrate RESTful APIs for multi-tier, distributed systems.
- Participate actively in Agile/SCRUM ceremonies, including sprint planning, daily stand-ups, and retrospectives.
- Write clean, efficient, and maintainable code following industry best practices.
- Conduct code reviews to ensure high-quality and consistent deliverables.
- Assist in configuring and maintaining CI/CD pipelines (Jenkins or similar tools).
- Troubleshoot, debug, and resolve application issues effectively.
- Collaborate with QA and product teams to validate requirements and ensure smooth delivery.
- Support release planning and deployment activities.
Required Skills & Qualifications:
- 4–6 years of professional experience in .NET development.
- Strong proficiency in C#, .NET Core, MVC, and relational databases such as SQL Server.
- Working knowledge of NoSQL databases like MongoDB.
- Solid understanding of JavaScript/TypeScript and the Angular framework.
- Experience in developing and integrating RESTful APIs.
- Familiarity with Agile/SCRUM methodologies.
- Basic knowledge of CI/CD pipelines and Git version control.
- Hands-on experience with AWS cloud services.
- Strong analytical, problem-solving, and debugging skills.
- Excellent communication and collaboration skills.
Preferred / Nice-to-Have Skills:
- Advanced experience with AWS services.
- Knowledge of Kubernetes or other container orchestration platforms.
- Familiarity with IIS web server configuration and management.
- Experience in the healthcare domain.
- Exposure to AI-assisted code development tools (e.g., GitHub Copilot, ChatGPT).
- Experience with application security and code quality tools such as Snyk or SonarQube.
- Strong understanding of SOLID principles and clean architecture patterns.
Technical Proficiencies:
- ASP.NET Core, ASP.NET MVC
- C#, Entity Framework, Razor Pages
- SQL Server, MongoDB
- REST API, jQuery, AJAX
- HTML, CSS, JavaScript, TypeScript, Angular
- Azure Services, Azure Functions, AWS
- Visual Studio
- CI/CD, Git
We are seeking a mid-to-senior level Full-Stack Developer with a foundational understanding of software development, cloud services, and database management. In this role, you will contribute to both the front-end and back-end of our application, focusing on creating a seamless user experience, supported by robust and scalable cloud infrastructure.
Key Responsibilities
● Develop and maintain user-facing features using React.js and TypeScript.
● Write clean, efficient, and well-documented JavaScript/TypeScript code.
● Assist in managing and provisioning cloud infrastructure on AWS using Infrastructure as Code (IaC) principles.
● Contribute to the design, implementation, and maintenance of our databases.
● Collaborate with senior developers and product managers to deliver high-quality software.
● Troubleshoot and debug issues across the full stack.
● Participate in code reviews to maintain code quality and share knowledge.
Qualifications
● Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.
● 5+ years of professional experience in web development.
● Proficiency in JavaScript and/or TypeScript.
● Proficiency in Golang and Python.
● Hands-on experience with the React.js library for building user interfaces.
● Familiarity with Infrastructure as Code (IaC) tools and concepts (e.g., AWS CDK, Terraform, or CloudFormation); a small CDK sketch follows this list.
● Basic understanding of AWS and its core services (e.g., S3, EC2, Lambda, DynamoDB).
● Experience with database management, including relational (e.g., PostgreSQL) or NoSQL (e.g., DynamoDB, MongoDB) databases.
● Strong problem-solving skills and a willingness to learn.
● Familiarity with modern front-end build pipelines and tools like Vite and Tailwind CSS.
● Knowledge of CI/CD pipelines and automated testing.
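A small IaC sketch for the bullet above, assuming AWS CDK v2 for Python (aws-cdk-lib installed); the stack and bucket names are illustrative, not part of any existing codebase.

import aws_cdk as cdk
from aws_cdk import aws_s3 as s3

class AppStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(self, "AssetsBucket", versioned=True)  # one illustrative resource

app = cdk.App()
AppStack(app, "AppStack")
app.synth()  # emits the CloudFormation template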
Position Overview
We're seeking a skilled Full Stack Developer to build and maintain scalable web applications using modern technologies. You'll work across the entire development stack, from database design to user interface implementation.
Key Responsibilities
- Develop and maintain full-stack web applications using Node.js and TypeScript
- Design and implement RESTful APIs and microservices
- Build responsive, user-friendly front-end interfaces
- Design and optimize SQL databases and write efficient queries
- Collaborate with cross-functional teams on feature development
- Participate in code reviews and maintain high code quality standards
- Debug and troubleshoot application issues across the stack
Required Skills
- Backend: 3+ years experience with Node.js and TypeScript
- Database: Proficient in SQL (PostgreSQL, MySQL, or similar)
- Frontend: Experience with modern JavaScript frameworks (React, Vue, or Angular)
- Version Control: Git and collaborative development workflows
- API Development: RESTful services and API design principles
Preferred Qualifications
- Experience with cloud platforms (AWS, Azure, or GCP)
- Knowledge of containerization (Docker)
- Familiarity with testing frameworks (Jest, Mocha, or similar)
- Understanding of CI/CD pipelines
What We Offer
- Competitive salary and benefits
- Flexible work arrangements
- Professional development opportunities
- Collaborative team environment
Job Title : Senior QA Automation Architect (Cloud & Kubernetes)
Experience : 6+ Years
Location : India (Multiple Offices)
Shift Timings : 12 PM to 9 PM (Noon Shift)
Working Days : 5 Days WFO (NO Hybrid)
About the Role :
We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.
You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.
Key Responsibilities :
- Architect and maintain test automation frameworks for microservices.
- Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
- Ensure reliability, scalability, and observability of test systems.
- Work closely with DevOps and Cloud teams to streamline automation infrastructure.
Mandatory Skills :
- Kubernetes, Helm, Docker, Linux
- Cloud Platforms : AWS / Azure / GCP
- CI/CD Tools : Jenkins, GitHub Actions
- Scripting : Python, Pytest, Bash (see the smoke-test sketch after this list)
- Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
- IaC Practices : Terraform / Ansible
Good to Have :
- Experience with Service Mesh (Istio/Linkerd).
- Container Security or DevSecOps exposure.
SENIOR DATA ENGINEER:
ROLE SUMMARY:
Own the design and delivery of petabyte-scale data platforms and pipelines across AWS and modern Lakehouse stacks. You’ll architect, code, test, optimize, and operate ingestion, transformation, storage, and serving layers. This role requires autonomy, strong engineering judgment, and partnership with project managers, infrastructure teams, testers, and customer architects to land secure, cost-efficient, and high-performing solutions.
RESPONSIBILITIES:
- Architecture and design: Create HLD/LLD/SAD, source–target mappings, data contracts, and optimal designs aligned to requirements.
- Pipeline development: Build and test robust ETL/ELT for batch, micro-batch, and streaming across RDBMS, flat files, APIs, and event sources.
- Performance and cost tuning: Profile and optimize jobs, right-size infrastructure, and model license/compute/storage costs.
- Data modeling and storage: Design schemas and SCD strategies; manage relational, NoSQL, data lakes, Delta Lakes, and Lakehouse tables.
- DevOps and release: Establish coding standards, templates, CI/CD, configuration management, and monitored release processes.
- Quality and reliability: Define DQ rules and lineage; implement SLA tracking, failure detection, RCA, and proactive defect mitigation.
- Security and governance: Enforce IAM best practices, retention, audit/compliance; implement PII detection and masking.
- Orchestration: Schedule and govern pipelines with Airflow and serverless event-driven patterns (a minimal DAG sketch follows this list).
- Stakeholder collaboration: Clarify requirements, present design options, conduct demos, and finalize architectures with customer teams.
- Leadership: Mentor engineers, set FAST goals, drive upskilling and certifications, and support module delivery and sprint planning.
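For the orchestration responsibility above, a minimal Airflow DAG sketch (assuming Airflow 2.4+). The DAG id, schedule, and task callables are hypothetical placeholders, not the actual pipelines.

```python
# Minimal Airflow 2.4+ sketch: a daily extract -> load pipeline.
# dag_id and the callables are hypothetical placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source RDBMS / API")

def load():
    print("write to the Lakehouse table")

with DAG(
    dag_id="example_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract before load
```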
REQUIRED QUALIFICATIONS:
- Experience: 15+ years designing distributed systems at petabyte scale; 10+ years building data lakes and multi-source ingestion.
- Cloud (AWS): IAM, VPC, EC2, EKS/ECS, S3, RDS, DMS, Lambda, CloudWatch, CloudFormation, CloudTrail.
- Programming: Python (preferred), PySpark, SQL for analytics, window functions, and performance tuning (a window-function sketch follows this list).
- ETL tools: AWS Glue, Informatica, Databricks, GCP DataProc; orchestration with Airflow.
- Lakehouse/warehousing: Snowflake, BigQuery, Delta Lake/Lakehouse; schema design, partitioning, clustering, performance optimization.
- DevOps/IaC: Extensive hands-on Terraform practice; deep CI/CD experience (GitHub Actions, Jenkins); config governance and release management.
- Serverless and events: Design event-driven distributed systems on AWS.
- NoSQL: 2–3 years with DocumentDB including data modeling and performance considerations.
- AI services: AWS Entity Resolution, Amazon Comprehend; run custom LLMs on Amazon SageMaker; use LLMs for PII classification.
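To make the window-function requirement concrete, here is a minimal PySpark sketch that keeps only the latest record per key, a common SCD-style deduplication. The data and column names are hypothetical.

```python
# Minimal PySpark sketch: deduplicate to the newest record per customer
# using a window function. Data and column names are hypothetical.
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("window-example").getOrCreate()

df = spark.createDataFrame(
    [("c1", "2024-01-01", 100), ("c1", "2024-02-01", 120), ("c2", "2024-01-15", 80)],
    ["customer_id", "updated_at", "balance"],
)

# Rank rows per customer by recency, then keep only the newest row.
w = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())
latest = df.withColumn("rn", F.row_number().over(w)).filter("rn = 1").drop("rn")
latest.show()
```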
NICE-TO-HAVE QUALIFICATIONS:
- Data governance automation: 10+ years defining audit, compliance, retention standards and automating governance workflows.
- Table and file formats: Apache Parquet; Apache Iceberg as analytical table format.
- Advanced LLM workflows: RAG and agentic patterns over proprietary data; re-ranking with index/vector store results.
- Multi-cloud exposure: Azure ADF/ADLS, GCP Dataflow/DataProc; FinOps practices for cross-cloud cost control.
OUTCOMES AND MEASURES:
- Engineering excellence: Adherence to processes, standards, and SLAs; reduced defects and non-compliance; fewer recurring issues.
- Efficiency: Faster run times and lower resource consumption with documented cost models and performance baselines.
- Operational reliability: Faster detection, response, and resolution of failures; quick turnaround on production bugs; strong release success.
- Data quality and security: High DQ pass rates, robust lineage, minimal security incidents, and audit readiness.
- Team and customer impact: On-time milestones, clear communication, effective demos, improved satisfaction, and completed certifications/training.
LOCATION AND SCHEDULE:
● Location: Outside US (OUS).
● Schedule: Minimum 6 hours of overlap with US time zones.
Job Description
The ideal candidate will possess expertise in Core Java (at least Java 8), Spring framework, JDBC, threading, database management, and cloud platforms such as Azure and GCP. The candidate should also have strong debugging skills, the ability to understand multi-service flow, experience with large data processing, and excellent problem-solving abilities.
JD:
- Develop and maintain Java applications using Core Java, Spring framework, JDBC, and threading concepts.
- Strong understanding of the Spring framework and its various modules.
- Experience with JDBC for database connectivity and manipulation.
- Utilize database management systems to store and retrieve data efficiently.
- Proficiency in Core Java 8 and a thorough understanding of threading concepts and concurrent programming.
- Experience working with relational and NoSQL databases.
- Basic understanding of cloud platforms such as Azure and GCP; experience with DevOps practices is an added advantage.
- Knowledge of containerization technologies (e.g., Docker, Kubernetes)
- Perform debugging and troubleshooting of applications using log analysis techniques.
- Understand multi-service flow and integration between components.
- Handle large-scale data processing tasks efficiently and effectively.
- Hands-on experience using Spark is an added advantage.
- Good problem-solving and analytical abilities.
- Collaborate with cross-functional teams to identify and solve complex technical problems.
- Knowledge of Agile methodologies such as Scrum or Kanban
- Stay updated with the latest technologies and industry trends to continuously improve development processes and methodologies.
Job Title : React + Node.js Developer (Full Stack)
Experience : 5+ Years
Location : Mumbai or Pune (Final location to be decided post-interview)
Notice Period : Immediate to 15 Days
Interview Rounds : 1 Internal Round + 1 Client Round
Job Summary :
We are looking for a highly skilled Full Stack Developer (React + Node.js) with strong expertise in both frontend and backend development.
The ideal candidate should demonstrate hands-on experience with databases, excellent project understanding, and the ability to deliver scalable, high-performance applications in production environments.
Mandatory Skills :
React.js, Node.js, PostgreSQL/MySQL, JavaScript (ES6+), Docker, AWS/GCP, full-stack development, production system experience, and strong project understanding with hands-on database expertise.
Key Responsibilities :
- Design, develop, and deploy robust full-stack applications using React (frontend) and Node.js (backend).
- Exhibit a deep understanding of database design, optimization, and integration using PostgreSQL or MySQL.
- Translate project requirements into efficient, maintainable, and scalable technical solutions.
- Build clean, modular, and reusable components following SOLID principles and industry best practices.
- Manage backend services, APIs, and data-driven functionalities for large-scale applications.
- Work closely with product and engineering teams to ensure smooth end-to-end project delivery.
- Use Docker and cloud platforms (AWS/GCP) for containerization, deployment, and scaling of services.
- Participate in design discussions, code reviews, and troubleshooting production issues.
Required Skills :
- 5+ Years of hands-on experience in full-stack development using React and Node.js.
- Strong understanding and hands-on expertise with relational databases (PostgreSQL/MySQL).
- Solid grasp of JavaScript (ES6+), and proficiency in Object-Oriented Programming (OOP) or Functional Programming (FP).
- Proven experience working with production-grade systems and scalable architectures.
- Proficiency with Docker, API development, and cloud services (preferably AWS or GCP).
- Excellent project understanding, problem-solving ability, and strong communication skills (verbal and written).
Good to Have :
- Experience in Golang or Elixir for backend development.
- Knowledge of Kubernetes, Redis, RabbitMQ, or similar distributed tools.
- Exposure to AI APIs and tools.
- Contributions to open-source projects.
Job Title: Data Engineering Support Engineer / Manager
Experience range: 8+ Years
Location: Mumbai
Knowledge, Skills and Abilities
- Python, SQL
- Familiarity with data engineering
- Experience with AWS data and analytics services or similar cloud vendor services
- Strong problem solving and communication skills
- Ability to organise and prioritise work effectively
Key Responsibilities
- Incident and user management for data and analytics platform
- Development and maintenance of a Data Quality framework, including anomaly detection (a representative check is sketched after this list)
- Implementation of Python & SQL hotfixes and working with data engineers on more complex issues
- Diagnostic tools implementation and automation of operational processes
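As a hedged illustration of the anomaly-detection work mentioned above (not the team's actual framework), here is a minimal z-score check on daily row counts. The thresholds and metrics source are hypothetical.

```python
# Minimal sketch: flag a table's daily row count as anomalous when it
# deviates more than z_threshold standard deviations from recent history.
import statistics

def is_anomalous(todays_count: int, history: list[int], z_threshold: float = 3.0) -> bool:
    """Return True if today's count is a statistical outlier vs. history."""
    if len(history) < 2:
        return False  # not enough history to judge
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return todays_count != mean
    return abs(todays_count - mean) / stdev > z_threshold

# Example: 950 rows today against a stable ~1000-row history -> not anomalous.
print(is_anomalous(950, [1000, 1010, 990, 1005, 995]))
```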
Key Relationships
- Work closely with data scientists, data engineers, and platform engineers in a highly commercial environment
- Support research analysts and traders with issue resolution
Job Overview:
We are looking for a skilled Senior Backend Engineer to join our team. The ideal candidate will have a strong foundation in Java and Spring, with proven experience in building scalable microservices and backend systems. This role also requires familiarity with automation tools, Python development, and working knowledge of AI technologies.
Responsibilities:
- Design, develop, and maintain backend services and microservices.
- Build and integrate RESTful APIs across distributed systems.
- Ensure performance, scalability, and reliability of backend systems.
- Collaborate with cross-functional teams and participate in agile development.
- Deploy and maintain applications on AWS cloud infrastructure.
- Contribute to automation initiatives and AI/ML feature integration.
- Write clean, testable, and maintainable code following best practices.
- Participate in code reviews and technical discussions.
Required Skills:
- 4+ years of backend development experience.
- Strong proficiency in Java and Spring/Spring Boot frameworks.
- Solid understanding of microservices architecture.
- Experience with REST APIs, CI/CD, and debugging complex systems.
- Proficient in AWS services such as EC2, Lambda, S3.
- Strong analytical and problem-solving skills.
- Excellent communication in English (written and verbal).
Good to Have:
- Experience with automation tools like Workato or similar.
- Hands-on experience with Python development.
- Familiarity with AI/ML features or API integrations.
- Comfortable working with US-based teams (flexible hours).
About the Role
We’re looking for a passionate Fullstack Product Engineer with a strong JavaScript foundation to work on a high-impact, scalable product. You’ll collaborate closely with product and engineering teams to build intuitive UIs and performant backends using modern technologies.
Responsibilities
- Build and maintain scalable features across the frontend and backend.
- Work with tech stacks like Node.js, React.js, Vue.js, and others.
- Contribute to system design, architecture, and code quality enforcement.
- Follow modern engineering practices including TDD, CI/CD, and live coding evaluations.
- Collaborate in code reviews, performance optimizations, and product iterations.
Required Skills
- 4–6 years of hands-on fullstack development experience.
- Strong command over JavaScript, Node.js, and React.js.
- Solid understanding of REST APIs and/or GraphQL.
- Good grasp of OOP principles, TDD, and writing clean, maintainable code.
- Experience with CI/CD tools like GitHub Actions, GitLab CI, Jenkins, etc.
- Familiarity with HTML, CSS, and frontend performance optimization.
Good to Have
- Exposure to Docker, AWS, Kubernetes, or Terraform.
- Experience in other backend languages or frameworks.
- Experience with microservices and scalable system architectures.
We’re hiring a Full Stack Developer (5+ years, Pune location) to join our growing team!
You’ll be working with React.js, Node.js, JavaScript, APIs, and cloud deployments to build scalable and high-performing web applications.
Responsibilities include developing responsive apps, building RESTful APIs, working with SQL/NoSQL databases, and deploying apps on AWS/Docker.
Experience with CI/CD, Git, secure coding practices (OAuth/JWT), and Agile collaboration is a must.
If you’re passionate about full stack development and want to work on impactful projects, we’d love to connect!
About Us:
PluginLive is an all-in-one tech platform that bridges the gap between all its stakeholders - Corporates, Institutes, Students, and Assessment & Training Partners. This ecosystem helps Corporates build and position their brand with colleges and the student community to scale their human capital, while increasing student placements for Institutes and giving Students a real-time perspective of the corporate world that helps them upskill into more desirable candidates.
Role Overview:
Entry-level Data Engineer position focused on building and maintaining data pipelines while developing visualization skills. You'll work alongside senior engineers to support our data infrastructure and create meaningful insights through data visualization.
Responsibilities:
- Assist in building and maintaining ETL/ELT pipelines for data processing
- Write SQL queries to extract and analyze data from various sources
- Support data quality checks and basic data validation processes
- Create simple dashboards and reports using visualization tools
- Learn and work with Oracle Cloud services under guidance
- Use Python for basic data manipulation and cleaning tasks (see the sketch after this list)
- Document data processes and maintain data dictionaries
- Collaborate with team members to understand data requirements
- Participate in troubleshooting data issues with senior support
- Contribute to data migration tasks as needed
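The Python bullet above is the kind of task where Pandas does the heavy lifting; here is a minimal sketch with hypothetical file and column names.

```python
# Minimal Pandas sketch: normalize columns, drop duplicates, coerce a
# numeric column, and flag invalid rows. File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("orders.csv")  # hypothetical input file

# Normalize column names, drop exact duplicates, and fill missing amounts.
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
df = df.drop_duplicates()
df["amount"] = pd.to_numeric(df["amount"], errors="coerce").fillna(0)

# Simple validation: flag rows with missing customer IDs for review.
invalid = df[df["customer_id"].isna()]
print(f"{len(invalid)} rows missing customer_id")
```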
Qualifications:
Required:
- Bachelor's degree in Computer Science, Information Systems, or related field
- Around 2 years of experience in data engineering or a related field
- Strong SQL knowledge and database concepts
- Comfortable with Python programming
- Understanding of data structures and ETL concepts
- Problem-solving mindset and attention to detail
- Good communication skills
- Willingness to learn cloud technologies
Preferred:
- Exposure to Oracle Cloud or any cloud platform (AWS/GCP)
- Basic knowledge of data visualization tools (Tableau, Power BI, or Python libraries like Matplotlib)
- Experience with Pandas for data manipulation
- Understanding of data warehousing concepts
- Familiarity with version control (Git)
- Academic projects or internships involving data processing
Nice-to-Have:
- Knowledge of dbt, BigQuery, or Snowflake
- Exposure to big data concepts
- Experience with Jupyter notebooks
- Comfort with AI-assisted coding tools (Copilot, GPTs)
- Personal projects showcasing data work
What We Offer:
- Mentorship from senior data engineers
- Hands-on learning with modern data stack
- Access to paid AI tools and learning resources
- Clear growth path to mid-level engineer
- Direct impact on product and data strategy
- No unnecessary meetings — focused execution
- Strong engineering culture with continuous learning opportunities
Senior Cloud & ML Infrastructure Engineer
Location: Bangalore / Bengaluru, Hyderabad, Pune, Mumbai, Mohali, Panchkula, Delhi
Experience: 6–10+ Years
Night Shift - 9 pm to 6 am
About the Role:
We’re looking for a Senior Cloud & ML Infrastructure Engineer to lead the design, scaling, and optimization of cloud-native machine learning infrastructure. This role is ideal for someone passionate about solving complex platform engineering challenges across AWS, with a focus on model orchestration, deployment automation, and production-grade reliability. You’ll architect ML systems at scale, provide guidance on infrastructure best practices, and work cross-functionally to bridge DevOps, ML, and backend teams.
Key Responsibilities:
● Architect and manage end-to-end ML infrastructure using SageMaker, AWS Step Functions, Lambda, and ECR
● Design and implement multi-region, highly available AWS solutions for real-time inference and batch processing
● Create and manage IaC blueprints for reproducible infrastructure using AWS CDK (see the sketch after this list)
● Establish CI/CD practices for ML model packaging, validation, and drift monitoring
● Oversee infrastructure security, including IAM policies, encryption at rest/in transit, and compliance standards
● Monitor and optimize compute/storage cost, ensuring efficient resource usage at scale
● Collaborate on data lake and analytics integration
● Serve as a technical mentor and guide AWS adoption patterns across engineering teams
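For the IaC bullet above, a minimal sketch assuming aws-cdk-lib v2 for Python: a stack with an ECR repository for model images and a small Lambda trigger. All construct and stack names are hypothetical placeholders.

```python
# Minimal AWS CDK v2 (Python) sketch: one stack holding an ECR repository
# for model-serving images and a lightweight inference-trigger Lambda.
# Construct names are hypothetical placeholders.
from aws_cdk import App, Stack, aws_ecr as ecr, aws_lambda as lambda_
from constructs import Construct

class MlInfraStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Container registry for model-serving images.
        ecr.Repository(self, "ModelImages")
        # Lightweight function that could kick off inference jobs.
        lambda_.Function(
            self,
            "InferenceTrigger",
            runtime=lambda_.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=lambda_.Code.from_inline(
                "def handler(event, ctx):\n    return {'ok': True}"
            ),
        )

app = App()
MlInfraStack(app, "MlInfraStack")
app.synth()  # emits the CloudFormation template locally
```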
Required Skills:
● 6+ years designing and deploying cloud infrastructure on AWS at scale
● Proven experience building and maintaining ML pipelines with services like SageMaker,ECS/EKS, or custom Docker pipelines
● Strong knowledge of networking, IAM, VPCs, and security best practices in AWS
● Deep experience with automation frameworks, IaC tools, and CI/CD strategies
● Advanced scripting proficiency in Python, Go, or Bash
● Familiarity with observability stacks (CloudWatch, Prometheus, Grafana)
Nice to Have:
● Background in robotics infrastructure, including AWS IoT Core, Greengrass, or OTA deployments
● Experience designing systems for physical robot fleet telemetry, diagnostics, and control
● Familiarity with multi-stage production environments and robotic software rollout processes
● Competence in frontend hosting for dashboard or API visualization
● Involvement with real-time streaming, MQTT, or edge inference workflows
● Hands-on experience with ROS 2 (Robot Operating System) or similar robotics frameworks, including launch file management, sensor data pipelines, and deployment to embedded Linux devices
🚀 We’re Hiring: Senior Cloud & ML Infrastructure Engineer 🚀
We’re looking for an experienced engineer to lead the design, scaling, and optimization of cloud-native ML infrastructure on AWS.
If you’re passionate about platform engineering, automation, and running ML systems at scale, this role is for you.
What you’ll do:
🔹 Architect and manage ML infrastructure with AWS (SageMaker, Step Functions, Lambda, ECR)
🔹 Build highly available, multi-region solutions for real-time & batch inference
🔹 Automate with IaC (AWS CDK, Terraform) and CI/CD pipelines
🔹 Ensure security, compliance, and cost efficiency
🔹 Collaborate across DevOps, ML, and backend teams
What we’re looking for:
✔️ 6+ years AWS cloud infrastructure experience
✔️ Strong ML pipeline experience (SageMaker, ECS/EKS, Docker)
✔️ Proficiency in Python/Go/Bash scripting
✔️ Knowledge of networking, IAM, and security best practices
✔️ Experience with observability tools (CloudWatch, Prometheus, Grafana)
✨ Nice to have: Robotics/IoT background (ROS2, Greengrass, Edge Inference)
📍 Location: Bengaluru, Hyderabad, Mumbai, Pune, Mohali, Delhi
5 days working, Work from Office
Night shifts: 9pm to 6am IST
👉 If this sounds like you (or someone you know), let’s connect!
Apply here: