50+ SQL Jobs in Bangalore (Bengaluru) | SQL Job openings in Bangalore (Bengaluru)
Apply to 50+ SQL Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest SQL Job opportunities across top companies like Google, Amazon & Adobe.
We are looking for a strong Mobile Engineer with backend exposure who can own end-to-end feature development. This is a mobile-heavy fullstack role where you will primarily build scalable mobile applications while contributing to backend services and APIs.
Key Responsibilities
- Design and develop high-quality mobile applications (primary focus)
- Build and integrate RESTful APIs and backend services
- Collaborate with product and design teams to ship features end-to-end
- Ensure performance, scalability, and reliability of mobile apps
- Write clean, maintainable, and testable code
- Participate in architecture discussions and technical decision-making
Must Have Skills
- Strong experience in mobile development (Flutter / React Native / iOS / Android)
- Solid understanding of backend development (Node.js / Java / Python / Go)
- Experience with API design, microservices, and databases
- Good understanding of system design and app performance optimization
- Familiarity with cloud platforms (AWS/GCP)
Good to Have
- Experience working in startup environments
- Exposure to CI/CD pipelines and DevOps practices
- Understanding of real-time systems or scalable architectures
1️⃣ Generative AI System Design
- Architect and implement end-to-end LLM-powered applications
- Build scalable RAG pipelines (chunking, embeddings, hybrid search, reranking)
- Design and implement agent-based workflows (tool calling, multi-step reasoning, orchestration)
- Integrate LLM APIs such as OpenAI and Anthropic, along with open-source models
- Implement structured output validation, grounding strategies, and hallucination mitigation
- Optimize inference cost, latency, and token efficiency
- Design evaluation pipelines for performance, accuracy, and safety
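The retrieval loop behind the RAG pipelines listed above can be sketched in a few lines. This is a toy stand-in: `embed` is a bag-of-words counter rather than a real embedding model, and the rerank step is collapsed into a single similarity sort.

```python
import math
from collections import Counter

def chunk(text, size=40, overlap=10):
    # Overlapping word-window chunking (real pipelines often chunk by tokens).
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def embed(text):
    # Bag-of-words stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Rank all chunks against the query; a real reranker would re-score the top hits.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

In production the same shape holds, but chunking is token-aware, embeddings come from a model API, and hybrid search combines this dense score with keyword (BM25) scores before reranking.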
2️⃣ Backend & Microservices Engineering
- Design scalable backend systems using Python
- Build REST and async APIs using FastAPI / Django
- Architect and implement microservices with clear service boundaries
- Implement service-to-service communication (REST, gRPC, event-driven messaging)
- Work with message brokers (Kafka / RabbitMQ)
- Optimize database performance (PostgreSQL, MongoDB)
- Implement caching strategies (Redis)
- Build observability: logging, monitoring, distributed tracing
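The caching bullet above usually means the cache-aside pattern. A minimal sketch, with a TTL'd dict standing in for Redis and the `loader` callback playing the role of the database query:

```python
import time

class CacheAside:
    """Cache-aside pattern sketch; a dict with TTLs stands in for Redis."""
    def __init__(self, ttl=60):
        self.ttl = ttl
        self.store = {}   # key -> (value, expires_at)
        self.misses = 0

    def get(self, key, loader):
        entry = self.store.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]                       # cache hit
        self.misses += 1
        value = loader(key)                       # fall through to the database
        self.store[key] = (value, time.monotonic() + self.ttl)
        return value
```

Real deployments add the hard parts this sketch skips: explicit invalidation on writes, and stampede protection when a hot key expires.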
3️⃣ Cloud-Native Architecture & DevOps
- Design and deploy containerized services using Docker
- Orchestrate services using Kubernetes
- Implement CI/CD pipelines
- Ensure system scalability, resilience, and fault tolerance
- Apply distributed systems principles:
- Circuit breakers
- API gateway patterns
- Load balancing
- Horizontal scaling
- Saga patterns
- Zero-downtime deployments
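Of the patterns listed above, the circuit breaker is the easiest to show in miniature. A minimal sketch with illustrative thresholds; production libraries (e.g. resilience4j, pybreaker) add half-open probing, metrics, and per-endpoint state:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open after N consecutive failures,
    allow a trial call again after a cooldown (illustrative thresholds)."""
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open")   # fail fast, spare the downstream
            self.opened_at = None                    # half-open: allow one trial call
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result

# Demo: two consecutive failures trip the breaker.
breaker = CircuitBreaker(max_failures=2, reset_after=60)

def flaky():
    raise ValueError("downstream error")

for _ in range(2):
    try:
        breaker.call(flaky)
    except ValueError:
        pass

try:
    breaker.call(lambda: "ok")      # circuit is open: fails fast
    fast_failed = False
except RuntimeError:
    fast_failed = True
```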
Role: Data Analyst
Experience: 2 to 5 years
Location: Bangalore
Job Role:
● Experience: Minimum 2 years of professional experience in a data-heavy environment (e-commerce or fintech experience is a plus).
● SQL Mastery: Exceptional ability to write complex joins, window functions, analytical functions, and CTEs. Experience with high-scale databases (e.g., BigQuery, Hive, or Postgres).
● Scripting: Functional knowledge of Python for data manipulation (Pandas, NumPy) and basic automation scripts.
● Systems Thinking: Ability to understand upstream data flows and how they impact downstream reporting.
● Problem-Solving: A "detective" mindset: you enjoy digging into a Rs 600Cr discrepancy until you find the root cause.
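For candidates brushing up, the window-function and CTE skills called out above look like this in practice. A self-contained sketch using Python's bundled sqlite3 (window functions need SQLite 3.25+); the table and values are made up:

```python
import sqlite3

# In-memory table of orders; the query uses a CTE plus a window function
# to pick each customer's largest order.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount INTEGER);
    INSERT INTO orders VALUES
        ('a', 100), ('a', 300), ('b', 200), ('b', 50);
""")
rows = conn.execute("""
    WITH ranked AS (
        SELECT customer, amount,
               ROW_NUMBER() OVER (PARTITION BY customer
                                  ORDER BY amount DESC) AS rn
        FROM orders
    )
    SELECT customer, amount FROM ranked WHERE rn = 1 ORDER BY customer
""").fetchall()
# rows -> [('a', 300), ('b', 200)]
```

The same `ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...)` shape carries over directly to BigQuery, Hive, and Postgres.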
Role & Responsibilities:
We are looking for a strong Data Engineer to join our growing team. The ideal candidate brings solid ETL fundamentals, hands-on pipeline experience, and cloud platform proficiency — with a preference for GCP / BigQuery expertise.
Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL/ELT workflows
- Work with Dataform or DBT to implement transformation logic and data models
- Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure
- Support data migration initiatives and data mesh architecture patterns
- Collaborate with analysts, scientists, and business stakeholders to deliver reliable data products
- Apply data governance and quality best practices across the data lifecycle
- Troubleshoot pipeline issues and drive proactive monitoring and resolution
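A minimal picture of the extract/transform/load flow described in the responsibilities above, using only the standard library; the CSV feed, cleansing rule, and table name are illustrative, and real pipelines would run inside an orchestrator (dbt/Dataform model, Airflow task, etc.):

```python
import csv
import io
import sqlite3

RAW_CSV = "id,amount\n1,100\n2,\n3,250\n"   # extract: raw feed with one bad row

def transform(rows):
    """Drop rows with missing amounts and cast types (toy data-quality rule)."""
    for row in rows:
        if row["amount"]:
            yield int(row["id"]), int(row["amount"])

def load(conn, records):
    conn.execute("CREATE TABLE IF NOT EXISTS fact_sales (id INTEGER, amount INTEGER)")
    conn.executemany("INSERT INTO fact_sales VALUES (?, ?)", records)

conn = sqlite3.connect(":memory:")   # stand-in for BigQuery/warehouse target
load(conn, transform(csv.DictReader(io.StringIO(RAW_CSV))))
total = conn.execute("SELECT SUM(amount) FROM fact_sales").fetchone()[0]
```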
Ideal Candidate:
- Strong Data Engineer Profile
- Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.
- Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.
- Must have hands-on experience with any one of the following: Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.
- Must have working experience on any cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, ADLS).
- Must have strong SQL skills with experience in writing complex queries and optimizing performance.
- Must have programming experience in Python and/or SQL for data processing.
- Must have experience in building and maintaining scalable data pipelines and troubleshooting data issues.
- Exposure to data migration projects and/or data mesh architecture concepts.
- Experience with Spark / PySpark or large-scale data processing frameworks.
- Experience working in product-based companies or data-driven environments.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
NOTE:
- An interview drive is scheduled for 28th and 29th March 2026; shortlisted candidates will be expected to be available on these interview dates. Only immediate joiners will be considered.
Senior Data Engineer (Azure Databricks)
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Azure Databricks and PySpark
- Work extensively with PySpark notebooks within Databricks for data processing and transformation
- Build and optimize batch data processing workflows
- Develop and manage data integrations using Azure Functions and Logic Apps
- Write efficient and optimized SQL queries for data extraction and transformation
Required Skills:
- Strong hands-on experience with Azure Databricks, PySpark, and SQL
- Experience working with batch processing frameworks
- Proficiency in building and managing data pipelines in Azure ecosystem
Good to Have:
- Experience with Python
Mandatory Requirement:
- Candidate must have hands-on experience working with PySpark notebooks in Databricks
ROLE SUMMARY
The Senior Python Developer designs, builds, and improves Python and Django applications. The role includes developing end‑to‑end integrations using REST and SOAP services and delivering reliable, scalable solutions through hands‑on coding and data transformation work. The developer works closely with Business Analysts, architects, and other teams to ensure technical solutions support business needs. Key responsibilities also include improving SQL performance, taking part in code reviews, supporting DevOps workflows with Git and Azure DevOps, and helping integrate GenAI features—such as GPT models, embeddings, and agent‑based tools—into enterprise applications.
ROLE RESPONSIBILITIES
- Design and develop Python and Django applications that are scalable, secure, and maintainable.
- Implement UI components using CSS, Bootstrap, jQuery, or similar technologies as needed.
- Develop integrations with internal and external systems using REST, SOAP, and WSDL‑based services.
- Create and optimize SQL queries, database structures, and data access logic to support application features.
- Work with Business Analysts and stakeholders to translate functional requirements into technical specifications and solutions.
- Implement accurate data mappings and transformations in accordance with business and technical requirements.
- Contribute to code reviews, follow established coding standards, and ensure high‑quality deliverables.
- Support the implementation and maintenance of DevOps pipelines using Git and Azure DevOps.
- Contribute to the integration of GenAI capabilities—including GPT models, embeddings, and agent‑based components—into enterprise applications.
- Troubleshoot issues across the application stack and collaborate closely with peers to resolve technical challenges.
TECHNICAL QUALIFICATIONS
- 7+ years of hands‑on experience with Python and Django, including complex application development.
- 5+ years of experience with SQL development, optimization, and database design.
- At least 1-2 years of applied experience with GenAI technologies (GPT models, embeddings, agents, etc.).
- Deep expertise in application architecture, system integration, and service‑oriented design.
- Strong experience with DevOps tools and practices, including Git, Azure DevOps, CI/CD pipelines, and automated deployments.
- Advanced understanding of REST, SOAP, WSDL, and large‑scale service integrations.
GENERAL QUALIFICATIONS
- Exceptional verbal and written communication skills.
- Strong analytical, problem‑solving, and architectural reasoning abilities.
- Demonstrated leadership experience with the ability to guide and mentor technical teams.
- Proven ability to work effectively in fast‑paced, collaborative environments.
EDUCATION REQUIREMENTS
- Bachelor’s degree in Computer Science, MIS, or a related field.
- Advanced certifications in Python, cloud technologies, or GenAI are preferred but not required.
Job Summary:
As a Java Full Stack Developer, you will design, develop, and maintain scalable backend services and frontend applications using Java (Spring Boot) and React. You will work closely with cross-functional teams to deliver high-performance and reliable systems.
Key Responsibilities:
• Develop and maintain applications using Java, Spring Boot, and React
• Design and build RESTful APIs for data-driven applications
• Work on frontend development using ReactJS
• Ensure scalability, performance, and reliability of applications
• Collaborate with QA, DevOps, and Product teams
• Participate in code reviews and technical discussions
• Troubleshoot and resolve production issues
• Mentor and guide junior developers
Required Skills & Qualifications:
• Strong experience in Java and Spring Boot
• Hands-on experience with React.js
• Experience with PostgreSQL or other relational databases
• Good understanding of data modeling and backend architecture
• Strong knowledge of OOP concepts
• Familiarity with Agile/Scrum and Git workflows
• Excellent problem-solving and communication skills
Good to Have:
• Experience with Snowflake / Databricks
• Exposure to data-driven or analytics platforms
About Shopflo
At Shopflo, we're trying to change the way consumers experience brands and businesses. Our first product was a cart and checkout platform for e-commerce that allowed marketers to personalise discounts, rewards, and payments. We are currently also working on a new product that takes it a notch higher by unlocking enterprise-grade personalization for all consumer tech businesses.
Team & Company
Shopflo was founded by three co-founders:
- Ankit Bansal (ex-IIT Kharagpur, Oracle, Gupshup)
- Ishan Rakshit (ex-IIT Bombay, Parthenon, Elevation Capital)
- Priy Ranjan (ex-IIT Madras, McKinsey, Elevation Capital)
We’re a fast-growing team of ~50 people, based in HSR Layout, Bengaluru. We raised a $3.8M seed round from Tiger Global and TQ Ventures.
What you will do
- Design and develop microservices that can work in a large-scale multi-tenant environment.
- Explore design implications and work towards an appropriate balance between functionality, performance, and maintainability.
- Work with cross-discipline teams across Design, Product, Data Science, and Analytics.
- Deploy and maintain the application in a secured AWS environment.
- Take ownership from the ideation phase to deployment and maintenance.
- Actively participate in the hiring process to bring world-class programmers into the team.
You should apply if you have:
- 2-4 years of experience in server-side development
- Strong programming skills in Java, Python, Node or Golang
- Hands-on experience in API development and frameworks such as Spring, Node, or Django.
- Good Understanding of SQL and NoSQL databases.
- Experience in test-driven development (writing unit tests and API tests).
- Understanding of basic cloud computing concepts and experience using any of the major cloud service providers (AWS/GCP/Azure).
- Ability to build and deploy the application in a containerized environment.
- Understanding of application logging and monitoring systems like Prometheus or Kibana.
- B.E./B.Tech/M.E./M.Tech/M.S. from a reputed university with a good academic record.
- Curiosity to explore cutting-edge technologies and bake them into the products.
- Zeal and drive to take end-to-end ownership.
🤖 Data Scientist – Frontier AI for Data Platforms & Distributed Systems (4–8 Years)
Experience: 4–8 Years
Location: Bengaluru (On-site / Hybrid)
Company: Publicly Listed, Global Product Platform
🧠 About the Mission
We are building a Top 1% AI-Native Engineering & Data Organization — from first principles.
This is not incremental improvement.
This is a full-stack transformation of a large-scale enterprise into an AI-native data platform company.
We are re-architecting:
- Legacy systems → AI-native architectures
- Static pipelines → autonomous, self-healing systems
- Data platforms → intelligent, learning systems
- Software workflows → agentic execution layers
This is the kind of shift you would expect from companies like Google or Microsoft, except here you will build it from day zero and scale it globally.
🧠 The Opportunity: This role sits at the intersection of three high-impact domains:
1. Frontier AI Systems: Large Language Models (LLMs), Small Language Models (SLMs), and Agentic AI
2. Data Platforms: Warehouses, Lakehouses, Streaming Systems, Query Engines
3. Distributed Systems: High-throughput, low-latency, multi-region infrastructure
We are building systems where:
- Data platforms optimize themselves using ML/LLMs
- Pipelines are autonomous, self-healing, and adaptive
- Queries are generated, optimized, and executed intelligently
- Infrastructure learns from usage and evolves continuously
This is: AI as the control plane for data infrastructure
🧩 What You’ll Work On
You will design and build AI-native systems deeply embedded inside data infrastructure.
1. AI-Native Data Platforms
- Build LLM-powered interfaces:
- Natural language → SQL / pipelines / transformations
- Design semantic data layers:
- Embeddings, vector search, knowledge graphs
- Develop AI copilots:
- For data engineers, analysts, and platform users
2. Autonomous Data Pipelines
- Build self-healing ETL/ELT systems using AI agents
- Create pipelines that:
- Detect anomalies in real time
- Automatically debug failures
- Dynamically optimize transformations
3. Intelligent Query & Compute Optimization
- Apply ML/LLMs to:
- Query planning and execution
- Cost-based optimization using learned models
- Workload prediction and scheduling
- Build systems that:
- Learn from query patterns
- Continuously improve performance and cost efficiency
4. Distributed Data + AI Infrastructure
- Architect systems operating at:
- Billions of events per day
- Petabyte-scale data
- Work with:
- Distributed compute engines (Spark / Flink / Ray class systems)
- Streaming systems (Kafka-class infra)
- Vector databases and hybrid retrieval systems
5. Learning Systems & Feedback Loops
- Build closed-loop AI systems:
- Execution → feedback → model updates
- Develop:
- Continual learning pipelines
- Online learning systems for infra optimization
- Experimentation frameworks (A/B, bandits, eval pipelines)
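The bandit-style experimentation mentioned above can be illustrated with epsilon-greedy arm selection; the reward function and all parameters here are toy assumptions:

```python
import random

def epsilon_greedy(rewards_fn, n_arms, steps=2000, eps=0.1, seed=0):
    """Epsilon-greedy bandit sketch: explore a random arm with probability
    eps, otherwise exploit the arm with the best running mean reward."""
    rng = random.Random(seed)
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n_arms)                      # explore
        else:
            arm = max(range(n_arms), key=lambda a: means[a])  # exploit
        r = rewards_fn(arm, rng)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]          # incremental mean
    return counts, means

def bernoulli(arm, rng):
    # Arm 1 pays off 80% of the time, arm 0 only 20% (made-up rates).
    return 1.0 if rng.random() < (0.2, 0.8)[arm] else 0.0

counts, means = epsilon_greedy(bernoulli, 2)
```

In an infra-optimization setting the "arms" might be query plans or scheduling policies, with latency or cost as the (negated) reward.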
6. LLM & Agentic Systems (Infra-Aware)
- Build agents that understand data systems
- Enable:
- Autonomous pipeline debugging
- Root cause analysis for infra failures
- Intelligent orchestration of data workflows
🧠 What We’re Looking For
Core Foundations
- Strong grounding in:
- Machine Learning, Deep Learning, NLP
- Statistics, optimization, probabilistic systems
- Distributed systems fundamentals
- Deep understanding of:
- Transformer architectures
- Modern LLM ecosystems
Hands-On Expertise
- Experience building:
- LLM / GenAI systems (RAG, fine-tuning, embeddings)
- Data platforms (warehouse, lake, lakehouse architectures)
- Distributed pipelines and compute systems
- Strong programming skills:
- Python (ML/AI stack)
- SQL (deep understanding — query planning, optimization mindset)
Systems Thinking (Critical)
You think in systems, not components.
- Built or worked on:
- Large-scale data pipelines
- High-throughput distributed systems
- Low-latency, high-concurrency architectures
- Understand:
- Query optimization and execution
- Data partitioning, indexing, caching
- Trade-offs in distributed systems
🔥 What Sets You Apart (Top 1%)
- Built AI-powered data platforms or infra systems in production
- Designed or contributed to:
- Query engines / optimizers
- Data observability / lineage systems
- AI-driven infra or AIOps platforms
- Experience with:
- Multi-modal AI (logs, metrics, traces, text)
- Agentic AI systems
- Autonomous infrastructure
- Worked on systems at scale comparable to:
- Google (BigQuery-like systems)
- Meta (real-time analytics infra)
- Snowflake / Databricks (lakehouse architectures)
🧬 Ideal Background (Not Mandatory)
We often see strong candidates from:
- Data infrastructure or platform engineering teams
- AI-first startups or research-driven environments
- High-scale product companies
Experience building:
- Internal platforms used by 1000s of engineers
- Systems serving millions of users / high throughput workloads
- Multi-region, distributed cloud systems
🧠 The Kind of Problems You’ll Solve
- Can LLMs replace traditional query optimizers?
- How do we build self-healing data pipelines at scale?
- Can data systems learn from every query and improve automatically?
- How do we embed reasoning and planning into infrastructure layers?
- What does a fully autonomous data platform look like?
Background: We Commonly See (But Not Limited To)
Our team often includes engineers from top-tier institutions and strong research or product backgrounds, including:
- Leading engineering schools in India and globally
- Engineers with experience in top product companies, AI startups, or research-driven environments
- That said, we care far more about demonstrated ability, depth, and impact than pedigree alone.
🧭 Tech Lead (Backend / Fullstack | 7–10 Years)
Location: Bangalore (On-Site, Hybrid)
Company Type: Public-Listed Product Company
We’re Building a “Top 1% Engineering Org”
We’re building a high-talent-density, AI-first R&D organization from scratch — inside a publicly listed company undergoing a full-scale transformation.
Think:
→ Rewriting legacy systems into AI-native architectures
→ Embedding LLMs + Agentic AI into core workflows
→ Reimagining platforms, infra, and data systems for the next decade
This is the kind of shift you’d expect from Google, Microsoft, or Meta, except here you get to build it from day 0 and scale it globally.
About the Role / Team
We are building a next-generation AI-first R&D organization in Bengaluru, focused on solving complex problems across LLMs, Agentic AI systems, distributed computing, and enterprise-scale architectures.
This initiative is part of a publicly listed global company investing heavily in AI-driven transformation, re-architecting its platforms into intelligent, autonomous systems powered by large language models, workflows, and decision engines.
You will be working on:
- Agentic AI systems & LLM-powered workflows
- Distributed, scalable backend systems
- Enterprise-grade AI platforms
- Automation-first engineering environments
🚀 The Mandate
Lead execution of mission-critical systems while staying hands-on — bridging architecture and delivery.
🧩 What You’ll Do
- Own end-to-end delivery of complex engineering initiatives (0→1, 1→N)
- Design systems across backend + frontend (if fullstack)
- Translate ambiguous problems into structured technical solutions
- Drive engineering best practices, code quality, and velocity
- Mentor engineers and elevate team performance
- Collaborate with stakeholders on roadmap and execution strategy
🧠 What We’re Looking For
- Strong experience in backend systems + optional frontend frameworks
- Proven ability to lead projects and deliver at scale
- Solid understanding of system design and architecture patterns
- Ability to balance speed vs quality vs scalability trade-offs
- Strong communication and leadership without authority
- Strong coding skills in Python / Java / Go / Node.js
- Solid understanding of data structures, system design basics, and backend architecture
- Experience building scalable APIs and services
- Familiarity or curiosity around AI/LLMs, async systems, or event-driven design
- Strong debugging, problem-solving, and ownership mindset
Nice to Have
- Experience integrating LLMs, vector databases, or AI pipelines
- Contributions to architecture at scale
- Experience with Agentic AI / LLM orchestration frameworks
- Background in product engineering or platform companies
- Exposure to global-scale systems (millions of users / high throughput)
🔥 What Sets You Apart
- Experience leading platform builds or major system rewrites
- Exposure to AI systems, LLM integrations, or intelligent workflows
- Built platforms used by millions of users / high-throughput systems
- Experience with event-driven systems, stream processing, or infra platforms
- Prior work on AI/ML platforms, model serving, or intelligent systems
Background: We Commonly See (But Not Limited To)
- Our team often includes engineers from top-tier institutions and strong research, product, DeepTech, or AI-product backgrounds, including:
- Leading engineering schools in India and globally
- Engineers with experience in top product companies, AI startups, or research-driven environments
- That said, we care far more about demonstrated ability, depth, and impact than pedigree alone.
Job Details
- Job Title: Senior Backend Engineer
- Industry: SAAS
- Function – Information Technology
- Experience Required: 5-8 years
- Working Days: 6 days a week (5 days in office, Saturdays WFH)
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Preferred Skills: AWS, NodeJS, RESTful APIs, NoSQL
Criteria
· Minimum 5+ years in backend engineering with strong system design expertise
· Experience building scalable systems from scratch
· Expert-level proficiency in Node.js
· Deep understanding of distributed systems
· Strong NoSQL design skills
· Hands-on AWS cloud experience
· Proven leadership and mentoring capability
· Preferred candidates from SAAS/Software/IT Services based startups or scaleup companies
Job Description
The Role:
What You’ll Build:
1. System Architecture & Design
● Architect highly scalable backend systems from the ground up
● Define technology choices: frameworks, databases, queues, caching layers
● Evaluate microservices vs monoliths based on product stage
● Design REST, GraphQL, and real-time WebSocket APIs
● Build event-driven systems for asynchronous processing
● Architect multi-tenant systems with strict data isolation
● Maintain architectural documentation and technical specs
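One common way to enforce the strict data isolation described above is to carry a tenant_id on every row and force all reads through a scoping helper. A toy sketch using sqlite3; the table, columns, and tenant names are invented:

```python
import sqlite3

# Every table carries a tenant_id; queries are forced through a helper
# that appends the tenant predicate so callers can never cross tenants.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE projects (tenant_id TEXT NOT NULL, name TEXT);
    INSERT INTO projects VALUES ('acme', 'site-redesign'), ('globex', 'xr-demo');
""")

def tenant_query(tenant_id, sql, params=()):
    """Append the tenant predicate; real systems also enforce this at the
    ORM layer or via Postgres row-level security."""
    return conn.execute(sql + " WHERE tenant_id = ?", (*params, tenant_id)).fetchall()

acme_rows = tenant_query("acme", "SELECT name FROM projects")
# acme_rows -> [('site-redesign',)]
```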
2. Core Backend Services
● Build high-performance APIs for 3D content, XR experiences, analytics, and user interactions
● Create 3D asset processing pipelines for uploads, conversions, and optimization
● Develop distributed job workers for CPU/GPU-intensive tasks
● Build authentication/authorization systems (RBAC)
● Implement billing, subscription, and usage metering
● Build secure webhook systems and third-party integration APIs
● Create real-time collaboration features via WebSockets/SSE
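"Secure webhook systems" above typically means HMAC-signed payloads. A minimal sketch of signing and constant-time verification; the secret and payload are placeholders, and real providers differ in header names and timestamp/replay handling:

```python
import hashlib
import hmac

SECRET = b"whsec_demo"   # hypothetical shared secret

def sign(payload: bytes) -> str:
    """Sign a webhook body with HMAC-SHA256 (the scheme commonly used by
    providers such as GitHub and Stripe)."""
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    # compare_digest avoids timing side channels in the comparison.
    return hmac.compare_digest(sign(payload), signature)

body = b'{"event": "order.paid"}'
signature = sign(body)
```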
3. Data Architecture & Databases
● Design scalable schemas for 3D metadata, XR sessions, and analytics
● Model complex product catalogs with variants and hierarchies
● Implement Redis-based caching strategies
● Build search and indexing systems (Elasticsearch/Algolia)
● Architect ETL pipelines and data warehouses
● Implement sharding, partitioning, and replication strategies
● Design backup, restore, and disaster recovery workflows
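The sharding bullet above boils down to a stable key-to-shard routing function. A hash-partitioning sketch (shard names are hypothetical); note that plain modulo routing reshuffles most keys when the shard count changes, which is why production systems prefer consistent hashing:

```python
import hashlib

SHARDS = ["db-0", "db-1", "db-2", "db-3"]   # hypothetical shard names

def shard_for(key: str) -> str:
    """Stable hash partitioning: the same key always routes to the same shard."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return SHARDS[h % len(SHARDS)]
```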
4. Scalability & Performance
● Build systems designed for 10x–100x traffic growth
● Implement load balancing, autoscaling, and distributed processing
● Optimize API response times and database performance
● Implement global CDN delivery for heavy 3D assets
● Build rate limiting, throttling, and backpressure mechanisms
● Optimize storage and retrieval of large 3D files
● Profile and improve CPU, memory, and network performance
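Rate limiting and throttling, as listed above, are most often implemented as a token bucket. A minimal single-process sketch; a distributed version would keep the bucket state in Redis:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter sketch: refill `rate` tokens/second up to
    `capacity`; each request spends one token or is throttled."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=0, capacity=2)   # rate=0 keeps the demo deterministic
results = [bucket.allow(), bucket.allow(), bucket.allow()]
```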
5. Infrastructure & DevOps
● Architect AWS infrastructure (EC2, S3, Lambda, RDS, ElastiCache)
● Build CI/CD pipelines for automated deployments and rollbacks
● Use IaC tools (Terraform/CloudFormation) for infra provisioning
● Set up monitoring, logging, and alerting systems
● Use Docker + Kubernetes for container orchestration
● Implement security best practices for data, networks, and secrets
● Define disaster recovery and business continuity plans
6. Integration & APIs
● Build integrations with Shopify, WooCommerce, Magento
● Design webhook systems for real-time events
● Build SDKs, client libraries, and developer tools
● Integrate payment gateways (Stripe, Razorpay)
● Implement SSO and OAuth for enterprise customers
● Define API versioning and lifecycle/deprecation strategies
7. Data Processing & Analytics
● Build analytics pipelines for engagement, conversions, and XR performance
● Process high-volume event streams at scale
● Build data warehouses for BI and reporting
● Develop real-time dashboards and insights systems
● Implement analytics export pipelines and platform integrations
● Enable A/B testing and experimentation frameworks
● Build personalization and recommendation systems
Technical Stack:
1. Backend Languages & Frameworks
● Primary: Node.js (Express, NestJS), Python (FastAPI, Django)
● Secondary: Go, Java/Kotlin (Spring)
● APIs: REST, GraphQL, gRPC
2. Databases & Storage
● SQL: PostgreSQL, MySQL
● NoSQL: MongoDB, DynamoDB
● Caching: Redis, Memcached
● Search: Elasticsearch, Algolia
● Storage/CDN: AWS S3, CloudFront
● Queues: Kafka, RabbitMQ, AWS SQS
3. Cloud & Infrastructure:
● Cloud: AWS (primary), GCP/Azure (nice to have)
● Compute: EC2, Lambda, ECS, EKS
● Infrastructure: Terraform, CloudFormation
● CI/CD: GitHub Actions, Jenkins, CircleCI
● Containers: Docker, Kubernetes
4. Monitoring & Operations
● Monitoring: Datadog, New Relic, CloudWatch
● Logging: ELK Stack, CloudWatch Logs
● Error Tracking: Sentry, Rollbar
● APM tools
5. Security & Auth
● Auth: JWT, OAuth 2.0, SAML
● Secrets: AWS Secrets Manager, Vault
● Security: Encryption (at rest/in transit), TLS/SSL, IAM
What We’re Looking For:
1. Must-Haves
● 5+ years in backend engineering with strong system design expertise
● Experience building scalable systems from scratch
● Expert-level proficiency in at least one backend stack (Node, Python, Go, Java)
● Deep understanding of distributed systems and microservices
● Strong SQL/NoSQL design skills with performance optimization
● Hands-on AWS cloud experience
● Ability to write high-quality production code daily
● Experience building and scaling RESTful APIs
● Strong understanding of caching, sharding, horizontal scaling
● Solid security and best-practice implementation experience
● Proven leadership and mentoring capability
2. Highly Desirable
● Experience with large file processing (3D, video, images)
● Background in SaaS, multi-tenancy, or e-commerce
● Experience with real-time systems (WebSockets, streams)
● Knowledge of ML/AI infrastructure
● Experience with HA systems, DR planning
● Familiarity with GraphQL, gRPC, event-driven systems
● DevOps/infrastructure engineering background
● Experience with XR/AR/VR backend systems
● Open-source contributions or technical writing
● Prior senior technical leadership experience
Technical Challenges You’ll Solve:
● Designing large-scale 3D asset processing pipelines
● Serving XR content globally with ultra-low latency
● Scaling from thousands to millions of daily requests
● Efficiently handling CPU/GPU-heavy workloads
● Architecting multi-tenancy with complete data isolation
● Managing billions of analytics events at scale
● Building future-proof APIs with backward compatibility
Why company:
● Architectural Ownership: Build foundational systems from scratch
● Deep Technical Work: Solve distributed systems and scaling challenges
● Hands-On Impact: Design and code mission-critical infrastructure
● Diverse Problems: APIs, infra, data, ML, XR, asset processing
● Massive Scale Opportunity: Build systems for exponential growth
● Modern Stack and best practices
● Product Impact: Your architecture directly powers millions of users
● Leadership Opportunity: Shape engineering culture and direction
● Learning Environment: Stay at the forefront of backend engineering
● Backed by AWS, Microsoft, Google
Location & Work Culture:
● Location: Bengaluru
● Schedule: 6 days a week (5 days in office, Saturdays WFH)
● Culture: Builder mindset, strong ownership, technical excellence
● Team: Small, highly skilled backend and infra team
● Resources: AWS credits, latest tooling, learning budget
Description
Power BI JD
Mandatory:
• 5+ years of Power BI Report development experience.
• Building Analysis Services reporting models.
• Developing visual reports, KPI scorecards, and dashboards using Power BI desktop.
• Connecting data sources, importing data, and transforming data for Business intelligence.
• Analytical thinking for translating data into informative reports and visuals.
• Capable of implementing row-level security on data along with an understanding of application security layer models in Power BI.
• Strong command of writing DAX queries in Power BI Desktop.
• Expert in using advanced-level calculations on the data set.
• Responsible for design methodology and project documentation.
• Should be able to develop tabular and multidimensional models that are compatible with data warehouse standards.
• Very good communication skills; must be able to discuss requirements effectively with client teams and with internal teams.
• Experience working with the Microsoft Business Intelligence stack: Power BI, SSAS, SSRS, and SSIS.
• Mandatory experience with BI tools and systems such as Power BI, Tableau, and SAP.
• Must have 3-4 years of experience in data-specific roles.
• Knowledge of database fundamentals such as multidimensional database design, relational database design, and more.
• Knowledge of all the Power BI products (Power BI Premium, Power BI Report Server, Power BI Service, Power Query, etc.)
• Grip over data analytics
• Interact with customers to understand their business problems and provide best-in-class analytics solutions
• Proficient in SQL and Query performance tuning skills
• Understand data governance, quality and security and integrate analytics with these corporate platforms
• Attention to detail and ability to deliver accurate client outputs
• Experience working with large and multiple datasets/data warehouses
• Ability to derive insights from data and analysis and create presentations for client teams
• Experience with performance optimization of the dashboards
• Interact with UX/UI designers to create best-in-class visualizations for the business, harnessing all product capabilities.
• Resilience under pressure and against deadlines.
• Proactive attitude and an open outlook.
• Strong analytical problem-solving skills
• Skill in identifying data issues and anomalies during the analysis
• Strong business acumen and a demonstrated aptitude for analytics that incites action
• Ability to execute on design requirements defined by business
• Ability to understand required Power BI functionality from wireframes/ requirement documents
• Ability to architect and design reporting solutions based on client needs.
• Able to communicate with internal/external customers; desire to develop communication and client-facing skills.
• Ability to work seamlessly with MS Excel, with working knowledge of pivot tables and related functions.
Good to have:
• Experience working with Azure and connecting Synapse with Tableau.
• Demonstrated strength in data modelling, ETL development, and data warehousing.
• Experience leading large-scale data warehousing and analytics projects using Azure, Synapse, and MS SQL DB.
• Good knowledge of building and operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
• Knowledge of the supply chain domain.
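The SQL and query-performance-tuning expectation above can be illustrated with a minimal, hypothetical sketch: checking whether a query can use an index before and after one is created. SQLite is used here purely for portability, and the table and column names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales (region, amount) VALUES (?, ?)",
                 [("south", 100.0), ("north", 250.0), ("south", 75.0)])

query = "SELECT SUM(amount) FROM sales WHERE region = ?"

# Without an index, the plan typically reports a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, ("south",)).fetchone()[-1]
print(plan_before)

# A covering index on (region, amount) lets the optimizer avoid the scan.
conn.execute("CREATE INDEX idx_sales_region ON sales (region, amount)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, ("south",)).fetchone()[-1]
print(plan_after)
```

The same habit — reading the plan before and after an index or rewrite — carries over to SQL Server, Snowflake, or any engine that exposes execution plans.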
Java Developer (6+ Years Experience)
We are looking for an experienced Java Developer to join our dynamic team for an exciting project with a leading client.
Role Details:
Location: Bangalore
Key Requirements:
6+ years of hands-on experience in Java development
Strong expertise in Core Java, Spring Boot, Microservices
Experience with REST APIs & backend development
Good understanding of databases (SQL/NoSQL)
Familiarity with Agile methodologies
Job Title: QA Tester – FinTech (Manual + Automation Testing)
Location: Bangalore, India
Job Type: Full-Time
Experience Required: 3 Years
Industry: FinTech / Financial Services
Function: Quality Assurance / Software Testing
About the Role:
We are looking for a skilled QA Tester with 3 years of experience in both manual and automation testing, ideally in the FinTech domain. The candidate will work closely with development and product teams to ensure that our financial applications meet the highest standards of quality, performance, and security.
Key Responsibilities:
- Analyze business and functional requirements for financial products and translate them into test scenarios.
- Design, write, and execute manual test cases for new features, enhancements, and bug fixes.
- Develop and maintain automated test scripts using tools such as Selenium, TestNG, or similar frameworks.
- Conduct API testing using Postman, Rest Assured, or similar tools.
- Perform functional, regression, integration, and system testing across web and mobile platforms.
- Work in an Agile/Scrum environment and actively participate in sprint planning, stand-ups, and retrospectives.
- Log and track defects using JIRA or a similar defect management tool.
- Collaborate with developers, BAs, and DevOps teams to improve quality across the SDLC.
- Ensure test coverage for critical fintech workflows like transactions, KYC, lending, payments, and compliance.
- Assist in setting up CI/CD pipelines for automated test execution using tools like Jenkins, GitLab CI, etc.
Required Skills and Experience:
- 3+ years of hands-on experience in manual and automation testing.
- Solid understanding of QA methodologies, STLC, and SDLC.
- Experience in testing FinTech applications such as digital wallets, online banking, investment platforms, etc.
- Strong experience with Selenium WebDriver, TestNG, Postman, and JIRA.
- Knowledge of API testing, including RESTful services.
- Familiarity with SQL to validate data in databases.
- Understanding of CI/CD processes and basic scripting for automation integration.
- Good problem-solving skills and attention to detail.
- Excellent communication and documentation skills.
Preferred Qualifications:
- Exposure to financial compliance and regulatory testing (e.g., PCI DSS, AML/KYC).
- Experience with mobile app testing (iOS/Android).
- Working knowledge of test management tools like TestRail, Zephyr, or Xray.
- Performance testing experience (e.g., JMeter, LoadRunner) is a plus.
- Basic knowledge of version control systems (e.g., Git).
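As a concrete illustration of the SQL-based data validation mentioned above, here is a minimal sketch that asserts basic integrity rules on a transactions table. The schema, table, and column names are invented for illustration (SQLite stands in for the real database); an actual FinTech schema and its rules would differ.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE transactions (
    txn_id TEXT PRIMARY KEY, account_id TEXT, amount REAL, status TEXT)""")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?, ?)",
    [("t1", "a1", 500.0, "SETTLED"),
     ("t2", "a1", 120.5, "PENDING"),
     ("t3", "a2", 75.0, "SETTLED")])

# Rule 1: transaction amounts must be positive.
negative = conn.execute(
    "SELECT COUNT(*) FROM transactions WHERE amount <= 0").fetchone()[0]
assert negative == 0, f"{negative} non-positive amounts found"

# Rule 2: status must be one of the allowed values.
bad_status = conn.execute(
    "SELECT COUNT(*) FROM transactions "
    "WHERE status NOT IN ('PENDING', 'SETTLED', 'FAILED')").fetchone()[0]
assert bad_status == 0, f"{bad_status} rows with unexpected status"

print("all data-validation checks passed")
```

In practice such checks would run against a test environment after each deployment, with the failing rows logged as defects in JIRA.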
Role & Responsibilities
The team drives large-scale data modernization and AI readiness for global enterprises. We are looking for an experienced Data Modeler to design, standardize, and maintain enterprise data models across our modernization initiatives, ensuring consistency, quality, and business alignment across cloud data platforms.
The person will be responsible for translating business requirements and data flows into robust conceptual, logical, and physical data models across multiple domains (Customer, Product, Finance, Supply Chain, etc.). You will work closely with Data Architects, Engineers, and Governance teams to ensure data is structured, traceable, and optimized for analytics and interoperability across platforms like Snowflake, Dremio, and Databricks.
Key Responsibilities-
- Develop conceptual, logical, and physical data models aligned with enterprise architecture standards.
- Engage with Business Stakeholders: Collaborate with business teams, business analysts and SMEs to understand business processes, data lifecycles, and key metrics that drive value and outcomes.
- Value Chain Understanding: Analyze end-to-end customer and product value chains to identify critical data entities, relationships, and dependencies that should be represented in the data model.
- Conceptual and Logical Modeling: Translate business concepts and data requirements into conceptual and logical data models that capture enterprise semantics and support analytical and operational needs.
- Physical Data Modeling: Design and implement physical data models optimized for performance and scalability
- Semantic Layer Design: Create semantic models that enable business access to data via BI tools and data discovery platforms.
- Data Standards and Governance: Ensure models comply with enterprise data standards, naming conventions, lineage tracking, and governance practices.
- Implement naming conventions, data standards, and metadata definitions across all models.
- Collaboration with Data Engineering: Work closely with data engineers to align data pipelines with the logical and physical models, ensuring consistency and accuracy from ingestion to consumption.
- Manage version control, lineage tracking, and change documentation for models.
- Participate in data quality and governance initiatives to ensure trusted and consistent data definitions across domains.
- Create and maintain a business glossary in collaboration with the governance team.
Ideal Candidate
- Strong Enterprise Data Modeller profile (Modern Data Platforms)
- Mandatory (Experience 1) – Must have 7+ years of experience in Data Modeling or Enterprise Data Architecture, with strong hands-on expertise in designing conceptual, logical, and physical data models for enterprise data platforms.
- Mandatory (Experience 2) – Must have strong hands-on experience with enterprise data modeling tools such as Erwin, ER/Studio, PowerDesigner, SQLDBM, or similar.
- Mandatory (Experience 3) – Must have a deep understanding of dimensional modeling (Kimball/Inmon methodologies), normalization techniques, and schema design for modern data warehouse environments.
- Mandatory (Experience 4) – Proven experience designing data models for modern data platforms such as Snowflake, Databricks, Redshift, Dremio, or similar cloud data warehouse/lakehouse systems.
- Mandatory (Experience 5) – Must have strong SQL expertise and schema design skills, with the ability to validate data model implementations and collaborate closely with data engineering teams.
- Mandatory (Education) – Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related technical field.
- Preferred (Experience 1) – Should have familiarity with data governance, metadata management, lineage, and business glossary tools such as Collibra, Alation, or Microsoft Purview.
- Preferred (Experience 2) – Exposure to data integration pipelines and ETL frameworks such as Informatica, DBT, Airflow, or similar tools.
- Preferred (Data Management) – Understanding of master data management (MDM) and reference data management principles.
- Preferred (Domain) – Experience working with high-tech or manufacturing data domains, including customer, product, or supply chain data models
About Us:
REConnect Energy’s GRIDConnect platform helps integrate and manage energy generation and consumption for thousands of renewable energy assets and grid operators. We currently serve customers across India, Bhutan, and the Middle East, with expansion planned into the US and European markets.
We are headquartered in Central Bangalore with a team of 150+ and growing. You will join the Bangalore based Engineering team as a senior member and work at the intersection of Energy, Weather & Climate Sciences and AI.
Responsibilities:
● Engineering - Take complete ownership of engineering stacks including Data Engineering and MLOps. Define and maintain software systems architecture for high availability 24x7 systems.
● Leadership - Lead a team of engineers and analysts managing engineering development as well as round the clock service delivery. Provide mentorship and technical guidance to team members and contribute towards their professional growth. Manage weekly and monthly reviews with team members and senior management.
● Product Development - Contribute towards new product development through engineering solutions to product requirements. Interact with cross-functional teams to bring forward a technology perspective.
● Operations - Manage delivery of critical services to power utilities with expectations of zero downtime. Take ownership for uninterrupted product uptime.
Requirements:
● 4-5 years of experience building highly available systems
● 2-3 years experience leading a team of engineers and analysts
● Bachelors or Master’s degree in Computer Science, Software Engineering, Electrical Engineering or equivalent
● Proficient in Python programming, with expertise in data engineering and machine learning deployment
● Experience with databases, including MySQL and NoSQL
● Experience in developing and maintaining critical and high availability systems will be given strong preference
● Experience in software design using design principles and architectural modeling.
● Experience working with AWS cloud platform.
● Strong analytical and data driven approach to problem solving
Backend – Software Development Engineer II
Experience: 4–7 years
Location: Bangalore
Work Mode - Hybrid
About Wekan Enterprise Solutions
Wekan Enterprise Solutions is a leading technology consulting company and a strategic investment partner of MongoDB. We help companies drive innovation in the cloud by adopting modern technology solutions that help them achieve their performance and availability requirements. With strong capabilities around Mobile, IoT, and Cloud environments, we have an extensive track record of helping Fortune 500 companies modernize their most critical legacy and on-premises applications, migrating them to the cloud and leveraging the most cutting-edge technologies.
Job Description
We are looking for passionate software engineers eager to be part of our growth journey. The right candidate needs to be interested in working in fast-paced, challenging environments; in constantly upskilling, learning new technologies, and expanding their domain knowledge into new industries. This candidate needs to be a team player and should be looking to help build a culture of excellence. Do you have what it takes?
You will be working on complex data migrations, modernizing legacy applications, and building new applications on the cloud for large enterprises and/or growth-stage startups. You will have the opportunity to contribute directly to mission-critical projects, interacting with business stakeholders, customers’ technical teams, and MongoDB Solutions Architects.
What you will do
- Own backend features and migration workstreams end to end, from understanding the existing codebase and data model to delivering production-ready implementations.
- Build and enhance Java/Spring Boot services used in modernization, migration, and cloud transformation projects.
- Design and implement backend flows that are safe under retries and partial failures, with clear thinking around validation, transaction boundaries, idempotency, and downstream side effects.
- Work across application, database, and deployment layers to improve reliability, maintainability, and operational visibility.
- Model data based on access patterns and business workflows, with sound choices around schema design, indexing, and query performance.
- Investigate production issues using logs, request traces, database state, and service behavior; identify root causes and implement durable fixes.
- Collaborate with internal teams, customer engineering teams, architects, and stakeholders to deliver high-quality solutions on mission-critical projects.
- Write clean, modular, testable code and participate actively in sprint ceremonies, code reviews, design discussions, and release activities.
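The bullet above about flows that stay safe under retries and partial failures can be sketched with a toy idempotency-key pattern. Everything here is an illustrative assumption — the `create_payment` name and the in-memory dict are invented; a Java/Spring Boot service would typically back the key store with a database table or cache:

```python
# Toy idempotency-key pattern: a retried request with the same key
# returns the original result instead of repeating the side effect.
processed = {}   # idempotency_key -> stored response
payments = []    # stands in for the downstream side effect (e.g. a charge)

def create_payment(idempotency_key, amount):
    if idempotency_key in processed:       # duplicate/retried request
        return processed[idempotency_key]  # replay the stored response
    payment = {"id": len(payments) + 1, "amount": amount}
    payments.append(payment)               # side effect happens exactly once
    processed[idempotency_key] = payment
    return payment

first = create_payment("key-123", 99.0)
retry = create_payment("key-123", 99.0)    # client retry after a timeout
assert retry == first and len(payments) == 1   # no double charge
```

The same idea generalizes: pick the transaction boundary so that recording the key and performing the side effect commit together, otherwise a crash between the two reintroduces the duplicate.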
What we’re looking for
- 4–7 years of backend engineering experience, with strong hands-on delivery in Java-based systems.
- Solid experience with Java, Spring Boot, and microservice-style backend development.
- Demonstrated ownership of at least one meaningful backend service or feature area in production.
- Strong understanding of backend engineering fundamentals, including service reliability, data consistency, failure handling, and production-grade design considerations.
- Strong database fundamentals, including schema design, query writing, indexing, and performance reasoning.
- Strong depth in MongoDB or a relational database such as Oracle, with working comfort across both styles being a plus.
- Ability to investigate and resolve real production issues across services and databases, including consistency, performance, and reliability problems.
- Hands-on experience with testing frameworks such as JUnit and Mockito.
- Proficiency with Git, including branching, code review workflows, and conflict resolution.
- Strong communication skills and the ability to collaborate effectively with engineers, stakeholders, and customers.
Preferred qualifications
- Experience working on legacy modernization, data/service migrations, or decomposition of existing systems into cleaner service boundaries.
- Exposure to both MongoDB and relational databases, including query tuning, indexing, and production troubleshooting.
- Familiarity with Oracle PL/SQL or migration of logic from database-heavy systems into service-layer Java code.
- Exposure to CI/CD deployment and cloud environments like AWS, Azure, or GCP.
Immediate hiring for Senior Data Engineer
📍 Location: Hyderabad/Bangalore
💼 Experience: 7+ years
🕒 Employment Type: Full-Time
🏢 Work Mode: Hybrid
📅 Notice Period: 0–1 month (serving notice only)
We are seeking a highly skilled and motivated Data Engineer to join our innovative team. As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support our enterprise-wide data-driven initiatives. You will collaborate closely with cross-functional teams to ensure the availability, reliability, and performance of our data systems and solutions.
🔎 Key Responsibilities:
- Data Pipeline Development
- Data Modeling and Architecture
- Data Integration and API Development
- Data Infrastructure Management
- Collaboration and Documentation
🎯 Required Skills:
- Bachelor’s degree in computer science, Engineering, Information Systems, or a related field.
- 7+ years of proven experience in data engineering, software development, or related technical roles.
- 7+ years of experience in programming languages commonly used in data engineering (Python, Java, SQL, Stored Procedures, Scala, etc.).
- 7+ years of experience with database systems, data modeling, and advanced SQL.
- 7+ years of experience with ETL tools and platforms such as SSIS, Snowflake, Databricks, Azure Data Factory, stored procedures, etc.
- Experience with big data technologies such as Hadoop, Spark, Kafka, etc.
- 5+ years of experience working with cloud platforms like Azure, AWS, or Google Cloud.
- Strong analytical, problem-solving, and debugging skills with high attention to detail.
- Excellent communication and collaboration skills in a team-oriented, fast-paced environment.
- Ability to adapt to rapidly evolving technologies and business requirements.
Strong Senior Backend Engineer profiles
Mandatory (Experience 1) – Must have 5+ years of hands-on Backend Engineering experience building scalable, production-grade systems
Mandatory (Experience 2) – Must have strong backend development experience using one or more frameworks (FastAPI/Django (Python), Spring (Java), Express (Node.js)).
Mandatory (Experience 3) – Must have deep understanding of relevant libraries, tools, and best practices within the chosen backend framework
Mandatory (Experience 4) – Must have strong experience with databases, including SQL and NoSQL, along with efficient data modeling and performance optimization
Mandatory (Experience 5) – Must have experience designing, building, and maintaining APIs, services, and backend systems, including system design and clean code practices
Mandatory (Domain) – Experience with financial systems, billing platforms, or fintech applications is highly preferred (fintech background is a strong plus)
Mandatory (Company) – Must have worked in product companies / startups, preferably Series A to Series D
Mandatory (Education) – Candidates from Tier-1 engineering institutes (IITs, BITS) are highly preferred
If you are good at writing complex queries, very good at Python, good at debugging, very good at understanding complex systems, can swim through logs to find the failure point, and have been on the firefighting side addressing bugs in live production systems, send your resume.
FULL STACK DEVELOPER
JOB DESCRIPTION – FULL STACK DEVELOPER
Location: Bangalore
Key Responsibilities:
Establish processes, SLAs, and escalation protocols for the support & maintenance of web applications
Manage stakeholders with effective communication & collaborate with cross functional teams to address issues and maintain business continuity.
Design, implement, unit test, and build business applications using React, React Native, .NET Core, .NET 8, and Azure/AWS, leveraging an agile methodology and the latest tech like Agentic AI and GitHub Copilot.
Facilitate scrum ceremonies including sprint planning, retrospectives, reviews, and daily stand-ups
Facilitate discussion, assessment of alternatives or different approaches, decision making, and conflict resolution within the development team
Develop and administer CI/CD pipelines in cloud-hosted Git repositories, and source control artifacts via Git in alignment with common branching strategies and workflows
Assist Software Designer/Implementers with the creation of detailed software design specifications
Participate in the system specification review process to ensure system requirements can be translated into valid software architecture
Integrate internal and external product designs into a cohesive user experience
Identify and keep track of metrics that indicate how software is performing
Handle technical and non-technical queries from the development team and stakeholders
Ensure that all development practices follow best practices and any relevant policies / procedures
Other Duties
Maintain project reporting including dashboards, status reports, road maps, burn-down, velocity, and resource utilization.
Own the technical solution and ensure all technical aspects are implemented as designed.
Partner with the customer success team and aid in triaging and troubleshooting customer support issues spanning a range of software components, infrastructure, integrations, and services, some of which target 24/7/365 availability
Flexibility to work in rotational shifts
Required Qualification
Previous experience leading full-stack technology projects with scrum teams and stakeholder management
BTech or MTech in computer science or a related field
3-5 years of experience.
Required Knowledge, Skills and Abilities:
Proficiency in .NET Core/.NET 8, React, React Native, Redux, Material UI, Bootstrap, TypeScript, SCSS, microservices, EF, LINQ, SQL, Azure/AWS, CI/CD, Agile, Agentic AI, GitHub Copilot
Azure DevOps, design systems, micro frontends, data science
Stakeholder management & excellent communication skills.
Must have skills
React - 3 years
React Native - 3 years
Redux - 1 year
Material UI - 1 year
Typescript - 1 year
Bootstrap - 1 year
Microservices - 2 years
SQL - 1 year
Azure - 1 year
Nice to have skills
.NET Core - 3 years
.NET 8 - 3 years
AWS - 1 year
LINQ - 1 year
We are looking for an experienced Data Engineer with strong expertise in AWS, DBT, Databricks, and Apache Airflow to join our growing data engineering team.
Immediate joiners preferred
Role Overview
The ideal candidate will design, develop, and maintain scalable data pipelines and data platforms to support analytics and business intelligence initiatives.
Key Responsibilities
- Design and build scalable data pipelines using AWS, Databricks, DBT, and Airflow.
- Develop and optimize ETL/ELT workflows for large-scale data processing.
- Implement data transformation models using DBT.
- Orchestrate workflows using Apache Airflow.
- Work with Databricks for big data processing and analytics.
- Ensure data quality, reliability, and performance optimization.
- Collaborate with data analysts, engineers, and business teams.
Required Skills
- Strong experience with AWS data services
- Hands-on experience with Databricks
- Experience in DBT (Data Build Tool)
- Workflow orchestration using Apache Airflow
- Strong SQL and Python skills
- Experience in data warehousing and ETL pipelines
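To make the orchestration requirement concrete: Airflow models a pipeline as a DAG and runs each task only after all of its upstream dependencies succeed. That dependency-ordered execution can be sketched in plain Python with the standard library's `graphlib` (the task names here are invented; a real pipeline would declare this as an Airflow DAG with operators rather than a dict):

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# extract -> {transform, quality_check} -> load
# Each key maps a task to the set of tasks it depends on.
dag = {
    "transform": {"extract"},
    "quality_check": {"extract"},
    "load": {"transform", "quality_check"},
}

executed = []

def run_task(name):
    executed.append(name)  # a real task would invoke Spark, DBT, or SQL here

# static_order() yields tasks so that dependencies always come first.
for task in TopologicalSorter(dag).static_order():
    run_task(task)

print(executed)  # "extract" is always first, "load" always last
```

Airflow adds scheduling, retries, backfills, and monitoring on top of exactly this ordering guarantee.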

Business Intelligence & Digital Consulting company
Description
JOB DESCRIPTION – SENIOR ANALYST – DATA SCIENTIST
Key Responsibilities
Work with business stakeholders and cross-functional SMEs to deeply understand business context and key business questions
Advanced skills with statistical programming in Python and data querying languages (e.g., SQL, Hadoop/Hive, Scala)
Solid understanding of time-series forecasting techniques
Good hands-on skills in both feature engineering and hyperparameter optimization
Able to write clean and tested code that can be maintained by other software engineers
Able to clearly summarize and communicate data analysis assumptions and results
Able to craft effective data pipelines to transform your analyses from offline to production systems
Self-motivated and a proactive problem solver who can work independently and in teams
Connects both externally and internally to understand industry trends, technology advances, and outstanding processes or solutions
Is collaborative and engages (strategic & tactical); able to influence without authority, handle complex issues, and implement positive change
Work on multiple pillars of AI including cognitive engineering, conversational bots, and data science
Ensure that solutions exhibit high levels of performance, security, scalability, maintainability, repeatability, appropriate reusability, and reliability upon deployment
Provide guidance and leadership to more junior data scientists, managing processes and flow of work, vetting designs, and mentoring team members to realize their full potential
Lead discussions at peer review and use interpersonal skills to positively influence decision making
Provide subject matter expertise in machine learning techniques, tools, and concepts; make impactful contributions to internal discussions on emerging practices
Facilitate cross-geography sharing of new ideas, learnings, and best practices
What We Are Looking For
Required Qualifications
Master's degree in a quantitative field such as Data Science, Statistics, or Applied Mathematics, or Bachelor's degree in engineering, computer science, or a related field.
4–6 years of total work experience in a data scientist or analytical role, with at least 2-3 years of experience in time series forecasting
A combination of business focus, strong analytical and problem-solving skills, and programming knowledge to be able to quickly cycle hypotheses through the discovery phase of a project
Strong experience in Time Series Forecasting and Demand Planning
Advanced skills with statistical/programming software (e.g., R, Python) and data querying languages (e.g., SQL, Hadoop/Hive, Scala)
Good hands-on skills in both feature engineering and hyperparameter optimization
Experience producing high-quality code, tests, and documentation
Understanding of descriptive and exploratory statistics, predictive modelling, evaluation metrics, decision trees, machine learning algorithms, optimization & forecasting techniques, and/or deep learning methodologies
Proficiency in statistical concepts and ML algorithms
Ability to lead, manage, build, and deliver customer business results through data scientists or a professional services team
Ability to share ideas in a compelling manner, and to clearly summarize and communicate data analysis assumptions and results
Self-motivated and a proactive problem solver who can work independently and in teams
Outstanding verbal and written communication skills with the ability to effectively advocate technical solutions to engineering and business teams
Desired Qualifications
Experience working in one or multiple supply chain functions (e.g., procurement, planning, manufacturing, quality, logistics) is strongly preferred
Experience in applying AI/ML within a CPG or Healthcare business environment is strongly preferred
Experience in creating CI/CD pipelines for deployment using Jenkins
Experience implementing an MLOps framework, along with an understanding of data security
Implementation of ML models
Exposure to visualization packages and the Azure tech stack.
Must have skills
Python - 2 years
Data Science - 4 years
SQL - 2 years
Machine Learning - 2 years
Nice to have skills
Data Analysis - 4 years
Time Series Forecasting - 2 years
Demand Planning - 2 years
Hadoop - 2 years
Statistical concepts - 2 years
Supply chain functions - 2 years
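The time-series forecasting and demand planning emphasis above can be illustrated with one of the simplest baseline methods, simple exponential smoothing, in plain Python. The demand series and smoothing factor below are made-up values for illustration; production work would typically reach for statsmodels, Prophet, or similar libraries:

```python
def simple_exponential_smoothing(series, alpha):
    """One-step-ahead forecast: level = alpha*y + (1-alpha)*previous level."""
    level = series[0]                 # initialize with the first observation
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level                      # forecast for the next period

demand = [100, 102, 101, 105, 107, 110]   # made-up monthly demand
forecast = simple_exponential_smoothing(demand, alpha=0.5)
print(round(forecast, 2))  # → 107.5
```

A higher `alpha` weights recent observations more heavily; evaluating several values against held-out periods (e.g. by MAPE) is the usual way to tune it.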
Data Engineer (MS Data Engineer + Snowflake/Databricks)
Required Skills:
- 6 to 8 years as a practitioner in data engineering or a related field.
- Experience in Snowflake or Databricks.
- Experience with data processing frameworks like Apache Spark or Hadoop.
- Experience working on Databricks.
- Familiarity with cloud platforms (AWS, Azure) and their data services.
- Experience with data warehousing concepts and technologies.
- Experience with message queues and streaming platforms (e.g., Kafka).
- Excellent communication and collaboration skills.
- Ability to work independently and as part of a geographically distributed team.
About the role
We are seeking a seasoned Backend Tech Lead with deep expertise in Golang and Python to lead our backend team. The ideal candidate has 6+ years of experience in backend technologies and 2–3 years of proven engineering mentoring experience, having successfully scaled systems and shipped B2C applications in collaboration with product teams.
Responsibilities
Technical & Product Delivery
● Oversee design and development of backend systems operating at 10K+ RPM scale.
● Guide the team in building transactional systems (payments, orders, etc.) and behavioral systems (analytics, personalization, engagement tracking).
● Partner with product managers to scope, prioritize, and release B2C product features and applications.
● Ensure architectural best practices, high-quality code standards, and robust testing practices.
● Own delivery of projects end-to-end with a focus on scalability, reliability, and business impact.
Operational Excellence
● Champion observability, monitoring, and reliability across backend services.
● Continuously improve system performance, scalability, and resilience.
● Streamline development workflows and engineering processes for speed and quality.
Requirements
● Experience:
7+ years of professional experience in backend technologies.
2–3 years as a tech lead driving delivery.
● Technical Skills:
Strong hands-on expertise in Golang and Python.
Proven track record with high-scale systems (≥10K RPM).
Solid understanding of distributed systems, APIs, SQL/NoSQL databases, and cloud platforms.
● Leadership Skills:
Demonstrated success in managing teams through 2–3 appraisal cycles.
Strong experience working with product managers to deliver consumer-facing applications.
● Excellent communication and stakeholder management abilities.
Nice-to-Have
● Familiarity with containerization and orchestration (Docker, Kubernetes).
● Experience with observability tools (Prometheus, Grafana, OpenTelemetry).
● Previous leadership experience in B2C product companies operating at scale.
What We Offer
● Opportunity to lead and shape a backend engineering team building at scale.
● A culture of ownership, innovation, and continuous learning.
● Competitive compensation, benefits, and career growth opportunities.
Job Summary:
We are seeking a highly skilled and self-driven Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.
Key Responsibilities:
- Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
- Implement and maintain RESTful APIs, ensuring high performance and scalability.
- Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
- Develop and manage Docker containers, enabling efficient development and deployment pipelines.
- Integrate messaging services like Apache Kafka into microservice architectures.
- Design and maintain data models using PostgreSQL or other SQL databases.
- Implement unit testing using JUnit and mocking frameworks to ensure code quality.
- Develop and execute API automation tests using Cucumber or similar tools.
- Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
- Work with Kubernetes for orchestrating containerized services.
- Utilize Couchbase or similar NoSQL technologies when necessary.
- Participate in code reviews, design discussions, and contribute to best practices and standards.
Required Skills & Qualifications:
- Strong experience in Java (11 or above) and Spring Boot framework.
- Solid understanding of microservices architecture and deployment on Azure.
- Hands-on experience with Docker, and exposure to Kubernetes.
- Proficiency in Kafka, with real-world project experience.
- Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
- Experience in writing unit tests using JUnit and mocking tools.
- Experience with Cucumber or similar frameworks for API automation testing.
- Exposure to CI/CD tools, DevOps processes, and Git-based workflows.
Nice to Have:
- Azure certifications (e.g., Azure Developer Associate)
- Familiarity with Couchbase or other NoSQL databases.
- Familiarity with other cloud providers (AWS, GCP)
- Knowledge of observability tools (Prometheus, Grafana, ELK)
Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent verbal and written communication.
- Ability to work in an agile environment and contribute to continuous improvement.
Why Join Us:
- Work on cutting-edge microservice architectures
- Strong learning and development culture
- Opportunity to innovate and influence technical decisions
- Collaborative and inclusive work environment
🚀 Hiring: .NET Developer at Deqode
⭐ Experience: 4+ Years
📍 Location: Mumbai and Bangalore
⭐ Work Mode: Hybrid
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
We are looking for a skilled .NET Developer to join our growing team. The ideal candidate should have strong experience in developing, testing, and maintaining applications using the .NET framework.
🎗️ Key Responsibilities:
✅ Develop and maintain web applications using .NET / .NET Core
✅ Write clean, scalable, and efficient code
✅ Troubleshoot, debug, and upgrade existing applications
✅ Work with databases and APIs for application integration
💫 Requirements:
✅ Experience with C#, ASP.NET, .NET Core
✅ Knowledge of SQL Server
✅ Familiarity with REST APIs
✅ Understanding of HTML, CSS, JavaScript
✅ Strong problem-solving and communication skills
Job opportunity for Developer -Python Full Stack with Siemens at Bangalore.
Interview Process:
1st round of interview - F2F (in-Person)-Technical
2nd round of interview – F2F /Virtual Interview - Technical
3rd round of interview – Virtual Interview – Technical + HR
Job Title / Designation: Developer -Python Full Stack
Employment Type: Full Time, Permanent
Location: Bangalore
Experience: 3-5 Years
Job Description: Developer - Python Full Stack
We are looking for a Python full-stack expert with proven experience (5+ years) developing automation solutions in Linux-based environments. You should be capable of developing Python-based web applications or automation solutions, with excellent knowledge of database handling and decent knowledge of Kubernetes-based deployment environments.
Required Skills:
- Solid experience in Python back-end technology
- Sound experience in web application development
- Decent knowledge and experience in UI development using JavaScript, React/Angular or related tech stack.
- Strong understanding of software design patterns and testing principles
- Ability to learn and adapt to working with multiple programming languages.
- Experience with Docker, ArgoCD, Kubernetes, and Terraform
- Understanding of ETL processes to extract data from different data sources is a plus.
- Proven experience in Linux development environments using Python.
- Excellent knowledge in interacting with database systems (SQL, NoSQL) and webservices (REST)
- Experienced in establishing an optimized CI / CD environment relevant to the project.
- Good knowledge of repository management tools like Git, Bitbucket, etc.
- Excellent debugging skills/strategies.
- Excellent communication skills
- Experienced in working in an Agile environment.
Nice to have
- Good knowledge of the Eclipse IDE; experience developing add-ons/plugins on the Eclipse platform.
- Knowledge of 93K Semiconductor test platforms
- Good know-how of agile management tools like Jira, Azure DevOps.
- Good knowledge of RHEL
- Knowledge of JIRA administration
NOW HIRING · WORLD-CLASS TALENT Backend Tech Lead (Senior Level Engineering Leadership)
Placed by Recruiting Bond on behalf of a Confidential Digital Platform Leader
📍Location: Bengaluru, India (Hybrid / On-Site)
🏢Sector: Technology, Information & Media
👥Company Size: 500 – 1,000 Employees
💼Employment: Full-Time, Permanent
🎯Experience: 6 – 9 Years (Backend Engineering)
🚀 Level: Tech Lead
ABOUT THIS MANDATE
Recruiting Bond has been exclusively retained by one of India's most well-established digital platform organisations — a company operating at the intersection of Technology, Information, and Media — to identify and place a world-class Backend Tech Lead who can drive a transformational engineering agenda at scale.
This is not an ordinary role. The organisation is executing a high-stakes, large-scale modernisation of its backend infrastructure — migrating from legacy monolithic systems to resilient, cloud-native, AI-augmented distributed architectures that serve millions of concurrent users. The person in this seat will be a core pillar of that transformation.
We are looking exclusively for the top 1% — engineers who think in systems, own outcomes, and lead by example.
THE OPPORTUNITY AT A GLANCE
🏗️ Architecture Ownership
Drive system design decisions across the entire backend platform. Shape the future of distributed, fault-tolerant architecture.
🤖 AI-Augmented Engineering
Embed GenAI and LLM tooling directly into the SDLC. Champion automation-first development practices across squads.
🎓 Engineering Leadership
Mentor and grow the next generation of backend engineers. Lead hiring, reviews, and cross-functional technical alignment.
KEY RESPONSIBILITIES
1. Architecture & Platform Modernisation
- Lead the full migration of legacy monolithic systems to a scalable, cloud-native microservices architecture
- Design and own distributed, fault-tolerant backend systems with sub-millisecond SLO targets
- Architect API-first and event-driven platforms using async messaging patterns (Kafka, Pub/Sub, SQS)
- Resolve systemic performance bottlenecks, concurrency conflicts, and scalability ceilings
- Establish backend design standards, coding guidelines, and architectural review processes
2. Distributed Systems Engineering (Production-Grade)
- Design and implement Webhook reliability frameworks with intelligent retry and exponential backoff strategies
- Build idempotent, versioned APIs with enterprise-grade rate limiting and throttling controls
- Implement circuit breakers, bulkheads, and resilience patterns using Resilience4j / Hystrix or equivalents
- Engineer Dead-Letter Queue (DLQ) strategies and event reprocessing pipelines with guaranteed delivery semantics
- Apply Saga orchestration and choreography patterns for distributed transaction integrity
- Execute zero-downtime deployments and canary release strategies with rollback capability
- Design and enforce multi-region disaster recovery and business continuity protocols
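The retry-and-backoff bullets above can be sketched minimally. Below is an illustrative Python sketch of the full-jitter exponential-backoff pattern with a dead-letter handoff; the `deliver` callable, parameter defaults, and function names are assumptions for the example, not the platform's actual framework.

```python
import random


def send_with_backoff(deliver, payload, max_attempts=5,
                      base_delay=0.5, cap=30.0, sleep=lambda s: None):
    """Attempt delivery with exponential backoff and full jitter.

    `deliver` returns True on success. A False return from this
    function signals the caller to route the payload to a
    dead-letter queue rather than drop it silently.
    """
    for attempt in range(max_attempts):
        if deliver(payload):
            return True
        # Full jitter: sleep a random duration up to the exponential cap.
        delay = random.uniform(0, min(cap, base_delay * (2 ** attempt)))
        sleep(delay)
    return False  # exhausted retries: hand off to DLQ
```

Injecting `sleep` keeps the helper testable; in production it would be `time.sleep` or an async scheduler.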
3. AI-Driven Engineering Practices
- Champion LLM and GenAI adoption as first-class tooling across the software development lifecycle
- Apply prompt engineering techniques for automated code generation, review, and documentation workflows
- Utilise AI-assisted debugging, root cause analysis, and predictive performance optimisation
- Build automation-first pipelines that reduce toil and accelerate delivery velocity
- Evaluate and integrate emerging AI developer tools into the engineering ecosystem
4. Engineering Leadership & Culture
- Own backend platforms end-to-end with full accountability across development, stability, and performance
- Actively mentor, coach, and elevate engineers at all levels (L3–L6) through structured 1:1s and code reviews
- Drive and lead technical hiring — from designing assessments to final hire decisions
- Partner with Product, Data, DevOps, and Security stakeholders to align engineering with business objectives
- Represent the engineering org in cross-functional roadmap planning and architecture decision reviews
- Foster a culture of technical excellence, psychological safety, and high-velocity delivery
TECHNOLOGY STACK (HANDS-ON PROFICIENCY REQUIRED)
Languages: Java (primary) · Go · Python · Node.js · PHP · Rust
Cloud: AWS · GCP · Azure (Multi-cloud exposure preferred)
Containers: Docker · Kubernetes · Helm · Service Mesh (Istio / Linkerd)
Databases: PostgreSQL · MySQL · MongoDB · Cassandra · Redis · Elasticsearch
Messaging: Apache Kafka · RabbitMQ · AWS SQS/SNS · Google Pub/Sub
Observability: Datadog · Prometheus · Grafana · OpenTelemetry · Jaeger · ELK Stack
CI/CD & IaC: GitHub Actions · Jenkins · ArgoCD · Terraform · Ansible
AI & GenAI: OpenAI / Claude APIs · LangChain · RAG Pipelines · GitHub Copilot · Cursor
QUALIFICATIONS & CANDIDATE PROFILE
Education
- B.E. / B.Tech or M.E. / M.Tech from a Tier-I or Tier-II Institution — CS, IS, ECE, AI/ML streams strongly preferred
- Exceptional real-world engineering track record may be considered in lieu of institution pedigree
Experience
- 6 to 9 years of progressive backend engineering experience with demonstrable ownership and impact
- Proven track record of shipping and scaling production SaaS / Product systems at significant user load
- Exposure to and success within start-up, mid-size, and large-scale product organisations — the full spectrum
- Strong computer science fundamentals: algorithms, data structures, distributed systems theory, OS internals
- Demonstrated career stability — minimum 2 years average tenure per organisation
The Ideal Candidate Exemplifies
- System-level thinking with an ability to hold context across code, architecture, product, and business
- An ownership mindset — no task is 'not my job'; outcomes and quality are personal commitments
- Strong written and verbal communication skills for asynchronous, cross-functional collaboration
- Intellectual curiosity: actively follows engineering trends, contributes to the community (OSS, blogs, talks)
- Bias for automation, observability, and engineering efficiency at every level
- A mentor's instinct — genuine desire to grow others and raise the capability of the team around them
WHY THIS ROLE STANDS APART
✅ Transformational Scope
Lead platform modernisation at scale. Your architectural choices will define systems serving millions of users for years.
✅ AI-Forward Engineering Culture
Be at the forefront of AI-augmented development. This org invests in tools and practices that make great engineers exceptional.
✅ Established, Stable Platform
Join a company with 500–1,000 employees, proven product-market fit, and the resources to execute on a serious technical vision.
✅ Career-Defining Leadership
Operate with strategic influence, direct access to senior leadership, and a clear path toward Principal / Staff / VP Engineering.
HOW TO APPLY
This search is being managed exclusively by Recruiting Bond
Submit your application with an updated resume
Only shortlisted candidates will be contacted. All applications are treated with the strictest confidentiality.
⚡ We move fast — qualified candidates can expect a response within 48–72 business hours.
Recruiting Bond | Bengaluru, Karnataka, India | 2026
An L2 Technical Support Engineer with Python knowledge is responsible for handling escalated, more complex technical issues that the Level 1 (L1) support team cannot resolve. Your primary goal is to perform deep-dive analysis, troubleshooting, and problem resolution to minimize customer downtime and ensure system stability.
Python is a key skill, used for scripting, automation, debugging, and data analysis in this role.
Key Responsibilities
- Advanced Troubleshooting & Incident Management:
- Serve as the escalation point for complex technical issues (often involving software bugs, system integrations, backend services, and APIs) that L1 support cannot resolve.
- Diagnose, analyze, and resolve problems, often requiring in-depth log analysis, code review, and database querying.
- Own the technical resolution of incidents end-to-end, adhering strictly to established Service Level Agreements (SLAs).
- Participate in on-call rotation for critical (P1) incident support outside of regular business hours.
- Python-Specific Tasks:
- Develop and maintain Python scripts for automation of repetitive support tasks, system health checks, and data manipulation.
- Use Python for debugging and troubleshooting by analyzing application code, API responses, or data pipeline issues.
- Write ad-hoc scripts to extract, analyze, or modify data in databases for diagnostic or resolution purposes.
- Potentially apply basic-to-intermediate code fixes in Python applications in collaboration with development teams.
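As one illustration of the scripting work described above, here is a minimal, hypothetical log-triage helper of the kind an L2 engineer might write to automate repetitive log analysis. The log format (`LEVEL service: message`) and the function name are assumptions for the example.

```python
import re
from collections import Counter

# Matches lines like "2024-01-01 ERROR auth: token expired".
ERROR_RE = re.compile(r"\bERROR\b\s+(?P<service>\w+):")


def summarize_errors(log_lines):
    """Count ERROR occurrences per service across raw log lines.

    Turns manual log triage into a repeatable script: the returned
    dict can feed a health-check report or a ticket comment.
    """
    counts = Counter()
    for line in log_lines:
        match = ERROR_RE.search(line)
        if match:
            counts[match.group("service")] += 1
    return dict(counts)
```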
- Collaboration and Escalation:
- Collaborate closely with L3 Support, Software Engineers, DevOps, and Product Teams to report bugs, propose permanent fixes, and provide comprehensive investigation details.
- Escalate issues that require significant product changes or deeper engineering expertise to the L3 team, providing clear, detailed documentation of all steps taken.
- Documentation and Process Improvement:
- Conduct Root Cause Analysis (RCA) for major incidents, documenting the cause, resolution, and preventative actions.
- Create and maintain a Knowledge Base (KB), runbooks, and Standard Operating Procedures (SOPs) for recurring issues to empower L1 and enable customer self-service.
- Proactively identify technical deficiencies in processes and systems and recommend improvements to enhance service quality.
- Customer Communication:
- Maintain professional, clear, and timely communication with customers, explaining complex technical issues and resolutions in an understandable manner.
Required Technical Skills
- Programming/Scripting:
- Strong proficiency in Python (for scripting, automation, debugging, and data manipulation).
- Experience with other scripting languages like Bash or Shell
- Databases:
- Proficiency in SQL for complex querying, debugging data flow issues, and data extraction.
- Application/Web Technologies:
- Understanding of API concepts (RESTful/SOAP) and experience troubleshooting them using tools like Postman or curl.
- Knowledge of application architectures (e.g., microservices, SOA) is a plus.
- Monitoring & Tools:
- Experience with support ticketing systems (e.g., JIRA, ServiceNow).
- Familiarity with log aggregation and monitoring tools (Kibana, Splunk, ELK Stack, Grafana)
We are seeking a skilled Java Developer with hands-on experience in Java and Spark to build scalable data processing solutions. You'll contribute to high-performance data pipelines and analytics platforms in a dynamic Agile environment.
Key Responsibilities
- Design and develop Java applications integrated with Apache Spark for ETL processes, data transformations, and analytics.
- Build and optimize Spark jobs (Spark SQL, DataFrames, Streaming) for large-scale data processing.
- Collaborate with data engineers and analysts to implement robust data workflows.
- Write clean, maintainable Java code following best practices (Spring Boot, Microservices preferred).
- Perform code reviews, unit testing, and contribute to CI/CD pipelines.
- Troubleshoot and optimize Spark performance for production workloads.
- Document technical solutions and mentor junior developers.
Required Skills & Qualifications
- 4-7 years of hands-on Java development experience.
- Strong expertise in Apache Spark (Spark Core, Spark SQL, PySpark basics).
- Proficiency in Java 8/11+ with multithreading and collections frameworks.
- Experience with data processing (ETL, data pipelines, big data).
- Familiarity with build tools (Maven/Gradle) and version control (Git).
- Strong problem-solving skills and availability to work from Bangalore.
- Excellent communication skills for cross-team collaboration.
Good to Have
- Experience with Snowflake for cloud data warehousing.
- Knowledge of DBT (Data Build Tool) for analytics engineering.
- Python scripting for data manipulation and automation.
- Exposure to AWS/GCP/Azure cloud platforms.
- Familiarity with Kafka, Airflow, or containerization (Docker/Kubernetes).
We are looking for an integration engineer to assist our rapidly growing customer base. As part of our integration team, you will be the primary point of contact for all integrations. You would be responsible for helping our clients integrate with OneFin APIs, configuring our system for clients and providing ongoing help to them to resolve any issues.
Responsibilities
- Understand and explain APIs to clients. Help clients integrate OneFin APIs. Research and identify solutions to issues during integration.
- Escalate unresolved issues to appropriate internal teams (e.g., software developers).
- Become a product expert for clients.
- Configure the OneFin system for customized usage by clients. Identify and write internal and external technical articles or knowledge-base entries, such as typical troubleshooting steps, workarounds, best practices, and how-to guides.
- Automate the resolution of common issues using Python.
- Help live clients resolve issues and coordinate with the development team for issue resolution.
Requirements and Qualifications:
- Strong verbal and written communication skills.
- Experience in writing code in Python.
- Understanding of web-based systems.
- Proficient in understanding and writing JSON.
- Experience in SQL databases.
- Experience working with REST APIs.
- Excellent analytical skills, passion for pinning down technical issues, and solving problems.
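Much of the JSON and REST work listed above comes down to checking client payloads against an expected shape. Below is a minimal, hypothetical validator sketch; the field names are illustrative placeholders, not actual OneFin API fields.

```python
import json

# Hypothetical required fields for an API request body and their types.
REQUIRED_FIELDS = {"customer_id": str, "amount": (int, float), "currency": str}


def validate_payload(raw):
    """Parse a JSON request body and report missing or mistyped fields.

    Returns a list of human-readable problems; an empty list means
    the payload passed these basic checks.
    """
    data = json.loads(raw)
    problems = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in data:
            problems.append(f"missing: {field}")
        elif not isinstance(data[field], expected):
            problems.append(f"wrong type: {field}")
    return problems
```

A helper like this turns vague "integration failed" reports into specific, actionable feedback for the client.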
Job Title: Senior Full-stack Developer (Python, React)
Location: Hyderabad, India (On-site Only)
Employment Type: Full-Time
Work Mode: Office-Based; Remote or Hybrid Not Allowed
Role Summary
We are looking for a skilled Senior Fullstack Developer with expertise in Django (Python), React, RESTful APIs, GraphQL, microservices architecture, Redis, and AWS services (SNS, SQS, etc.). The ideal candidate will be responsible for designing, developing, and maintaining scalable backend systems and APIs to support dynamic frontend applications and services.
Required Skillset:
- 9+ years of professional experience writing production-grade software, including experience leading the design of complex systems.
- Strong expertise in Python (Django or equivalent frameworks) and REST API development.
- Solid experience with frontend frameworks such as React and TypeScript.
- Strong understanding of relational databases (MySQL or PostgreSQL preferred).
- Experience with CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes).
- Hands-on experience with cloud infrastructure (AWS preferred).
- Proven experience debugging complex production issues and improving observability.
Preferred Skillset:
- Experience in enterprise SaaS or B2B systems with multi-tenancy, authentication (OAuth, SSO, SAML), and data partitioning. Exposure to Kafka or RabbitMQ and microservices.
- Knowledge of event-driven architecture, A/B testing frameworks, and analytics pipelines.
- Familiarity with accessibility standards, best practices, and Agile/Scrum methodologies.
- Exposure to the Open edX ecosystem or open-source contributions in education tech.
- Demonstrated history of technical mentorship, team leadership, or cross-team collaboration.
Tech Stack:
- Backend: Python (Django), Celery and Redis for asynchronous workflows, REST APIs
- Frontend: React, TypeScript, SCSS
- Data: MySQL, Snowflake, Elasticsearch
- DevOps/Cloud: Docker, Kubernetes, GitHub Actions, AWS
- Monitoring: Datadog
- Collaboration Tools: GitHub, Jira, Slack, Segment
Primary Responsibilities:
- Lead, guide, and mentor a team of Python/Django engineers, offering hands-on technical support and direction.
- Architect, design, and deliver secure, scalable, and high-performing web applications.
- Manage the complete software development lifecycle including requirements gathering, system design, development, testing, deployment, and post-launch maintenance.
- Ensure compliance with coding standards, architectural patterns, and established development best practices.
- Collaborate with product teams, QA, UI/UX, and other stakeholders to ensure timely and high-quality product releases.
- Perform detailed code reviews, optimize system performance, and resolve production-level issues.
- Drive engineering improvements such as automation, CI/CD implementation, and modernization of outdated systems.
- Create and maintain technical documentation while providing regular updates to leadership and stakeholders.

A real-time Customer Data Platform and cross-channel marketing automation product that delivers superior experiences, resulting in increased revenue for some of the largest enterprises in the world.
Key Responsibilities:
- Design and develop backend components and sub-systems for large-scale platforms under guidance from senior engineers.
- Contribute to building and evolving the next-generation customer data platform.
- Write clean, efficient, and well-tested code with a focus on scalability and performance.
- Explore and experiment with modern technologies, especially open-source frameworks, and build small prototypes or proofs of concept.
- Use AI-assisted development tools to accelerate coding, testing, debugging, and learning while adhering to engineering best practices.
- Participate in code reviews, design discussions, and continuous improvement of the platform.
Qualifications:
- 0–2 years of experience (or strong academic/project background) in backend development with Java.
- Good fundamentals in algorithms, data structures, and basic performance optimizations.
- Bachelor’s or Master’s degree in Computer Science or IT (B.E / B.Tech / M.Tech / M.S) from premier institutes.
Technical Skill Set:
- Strong aptitude and analytical skills with emphasis on problem solving and clean coding.
- Working knowledge of SQL and NoSQL databases.
- Familiarity with unit testing frameworks and writing testable code is a plus.
- Basic understanding of distributed systems, messaging, or streaming platforms is a bonus.
AI-Assisted Engineering (LLM-Era Skills):
- Familiarity with modern AI coding tools such as Cursor, Claude Code, Codex, Windsurf, Opencode, or similar.
- Ability to use AI tools for code generation, refactoring, test creation, and learning new systems responsibly.
- Willingness to learn how to combine human judgment with AI assistance for high-quality engineering outcomes.
Soft Skills & Nice to Have
- Appreciation for technology and its ability to create real business value, especially in data and marketing platforms.
- Clear written and verbal communication skills.
- Strong ownership mindset and ability to execute in fast-paced environments.
- Prior internship or startup experience is a plus.
Role & Responsibilities:
Develop and deliver defect-free, web-based applications using C#, ASP.NET, and Oracle as per the specifications provided by the Business Analysts.
- Read and understand the functional and technical specifications, and gain a complete understanding of the work before commencing it
- Design, develop, and unit test applications in accordance with established standards.
- Adhere to high-quality development principles while delivering solutions on time.
- Adhere to the quality management standards established in the organization.
- Understand the SDLC process defined in the organization and follow it without deviation
- Provide third-level support for tickets raised by business users
- Analyze and resolve technical and application-logic-related problems
- Ensure high performance of the application by developing efficient code
Ideal Candidate:
- Strong .NET Senior Software Engineer Profile
- Must have 5+ years of hands-on development experience with C#.NET, ASP.NET, ADO.NET.
- Must have 3+ years of experience in web application development using HTML, CSS, JavaScript/jQuery
- Must have strong experience in Writing Complex SQL Queries, Stored Procedures, Functions using Oracle / SQL Server.
- Must have experience in designing, developing, and unit testing applications with SDLC compliance
- Experience with AJAX, Crystal Reports, and front-end validations using JavaScript/jQuery.
- ME/MTech (CS) or BE/BTech (CS).
Role & Responsibilities:
- Design, develop, and unit test applications in accordance with established standards.
- Prepare reports, manuals, and other documentation on the status, operation, and maintenance of software.
- Analyze and resolve technical and application problems
- Adhere to high-quality development principles while delivering solutions on time
- Provide third-level support to business users.
- Comply with process and quality management standards
- Understand and implement the SDLC process
Ideal Candidate:
- Strong Senior Angular Developer profile.
- Must have 6+ years of experience in frontend development, with at least 4+ years in Angular 8+.
- Must have strong proficiency in JavaScript, TypeScript, HTML5, and CSS3.
- Must have strong test-driven development experience and proficiency in unit testing frameworks such as Jasmine, Karma, NUnit, Selenium.
- Must have strong experience in database technologies (MySQL / SQL Server / Oracle)
- Considering candidates from South India only.
- Must have 2+ years of experience with Web APIs, Entity Framework, and LINQ queries.
- Experience in .NET Core framework, OOP, and C# APIs.
- Candidates from product companies preferred.
- B.Tech./M.Tech in Computer Science (or related field).
Power BI Analyst – EdTech (UAE Market)
📍 Location: Bangalore (Onsite)
🕔 Working Days: 5 Days
🏢 Industry: EdTech – Professional Training & Certification Programs
🌍 Market Focus: UAE
About Us – Learners Point
Learners Point Academy is a leading professional training institute in the UAE, empowering working professionals and organizations through globally recognised certification programs such as CMA, PMP, ACCA, CIA, and other corporate training solutions.
With a strong presence in the UAE market, we specialise in career-focused education, enterprise workforce development, and high-impact learning solutions designed to drive measurable professional growth.
As we expand our analytics capabilities, we are looking for a skilled Power BI Analyst to support business intelligence and data-driven decision-making across our Professional Training Programs.
Role Overview
The Power BI Analyst will be responsible for transforming business, learner, and sales data into actionable dashboards and reports that enhance performance tracking, learner engagement, and revenue optimisation.
Key Responsibilities
- Design, develop, and maintain interactive dashboards using Microsoft Power BI
- Develop advanced reports using DAX, data modelling, and Power Query
- Analyze training program performance (enrollments, retention, completion rates, revenue)
- Build KPI dashboards for:
- Sales & Reactivation Team
- Academic & Training Team
- Leadership & Management
- Extract and manage data using SQL from databases and CRM systems
- Automate reporting processes and ensure data accuracy
- Translate business requirements into technical BI solutions
- Present insights through clear and compelling data storytelling
Required Technical Skills
- Strong experience in Power BI (Desktop & Service)
- Proficiency in:
- DAX (Measures, Time Intelligence)
- Data Modeling (Star & Snowflake Schema)
- Power Query (ETL)
- Good knowledge of SQL
- Advanced Excel (Pivot Tables, Power Pivot, Lookup Functions)
- Experience integrating data from CRM, LMS, or ERP systems
Industry-Specific Requirements (EdTech Focus)
- Understanding of:
- Learner engagement metrics
- Course completion & drop-off analysis
- Revenue per program
- Student retention analytics
- Experience working with Professional Certification Programs is an added advantage
- Familiarity with UAE market reporting standards preferred
Preferred Skills
- Exposure to Azure Data Services
- Dashboard design best practices
- Ability to manage large datasets
- Strong analytical mindset with business understanding
Soft Skills
- Strong communication & stakeholder management skills
- Business-oriented thinking
- Problem-solving mindset
- Attention to detail
Experience & Qualification
- Bachelor’s Degree in Computer Science, Data Analytics, Statistics, or related field
- 2–8 years of experience as a Power BI / Data Analyst (EdTech preferred)
- UAE or GCC market exposure is a plus
JOB DESCRIPTION:
Location: Pune, Mumbai, Bangalore
Mode of Work: 3 days from office
* Python: Strong expertise in data workflows and automation
* Pandas: For detailed data analysis and validation
* SQL: Querying and performing operations on Delta tables
* AWS Cloud: Compute and storage services
* OOP concepts
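The Pandas-for-validation skill listed above might look like the following minimal sketch: basic data-quality checks of the kind run before loading rows into Delta tables. The column names and thresholds are illustrative assumptions, not a real schema.

```python
import pandas as pd


def validate_orders(df):
    """Run basic data-quality checks on an orders DataFrame.

    Returns a dict of issue categories mapped to offending row
    indices or key values; an empty dict means the checks passed.
    Columns `order_id` and `amount` are hypothetical.
    """
    issues = {}
    required = ["order_id", "amount"]
    # Rows with missing values in required columns.
    nulls = df[required].isna().any(axis=1)
    if nulls.any():
        issues["null_rows"] = df.index[nulls].tolist()
    # Duplicate primary keys (both copies flagged).
    dups = df["order_id"].duplicated(keep=False)
    if dups.any():
        issues["duplicate_ids"] = sorted(df.loc[dups, "order_id"].unique().tolist())
    # Amounts that should never be negative.
    neg = df["amount"] < 0
    if neg.any():
        issues["negative_amounts"] = df.index[neg].tolist()
    return issues
```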
Strong Senior Backend Engineer profiles
Mandatory (Experience 1) – Must have 5+ years of hands-on Backend Engineering experience building scalable, production-grade systems
Mandatory (Experience 2) – Must have strong backend development experience using one or more frameworks: FastAPI/Django (Python), Spring (Java), or Express (Node.js).
Mandatory (Experience 3) – Must have deep understanding of relevant libraries, tools, and best practices within the chosen backend framework
Mandatory (Experience 4) – Must have strong experience with databases, including SQL and NoSQL, along with efficient data modeling and performance optimization
Mandatory (Experience 5) – Must have experience designing, building, and maintaining APIs, services, and backend systems, including system design and clean code practices
Mandatory (Domain) – Experience with financial systems, billing platforms, or fintech applications is highly preferred (fintech background is a strong plus)
Mandatory (Company) – Must have worked in product companies / startups, preferably Series A to Series D
Mandatory (Education) – Candidates from Tier-1 engineering institutes (IITs, BITS, etc.) are highly preferred
Job Description: Data Analyst
About Miror
Miror is India’s leading FemTech platform transforming how women experience peri-menopause and menopause. In just a year, we’ve built India’s largest menopause-focused WhatsApp community, partnered with the National Health Mission and the Indian Menopause Society, and launched category-defining nutraceutical products and digital health services. Our app blends science and technology—offering personalized care pathways, symptom tracking, diagnostic links, games, AI-powered chat, expert consultations, and more. We're proud recipients of the Innovation in Menopause Care award at the Global Women’s Health Innovation Conference 2024 and are rapidly scaling toward our $1B+ vision. Learn more: miror.in
Role Overview
We’re looking for a Data Analyst who is excited to work at the intersection of data, technology, and women’s wellness. You'll be instrumental in helping us understand user behaviour, community engagement, campaign performance, and product usage across platforms — including app, web, and WhatsApp.
You’ll also have opportunities to collaborate on AI-powered features such as chatbots and personalized recommendations. Experience with GenAI or NLP is a plus but not a requirement.
Key Responsibilities
· Clean, transform, and analyse data from multiple sources (SQL databases, CSVs, APIs).
· Build dashboards and reports to track KPIs, user behaviour, and marketing performance.
· Collaborate with product, marketing, and customer teams to uncover actionable insights.
· Support experiments, A/B testing, and cohort analysis to drive growth and retention.
· Assist in documentation and communication of findings to technical and non-technical teams.
· Work with the data team to enhance personalization and AI features (optional).
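The cohort-analysis responsibility above can be sketched with a small standard-library helper; in practice the events would be pulled from a SQL warehouse, and the tuple format here is an illustrative assumption.

```python
from collections import defaultdict


def cohort_retention(events):
    """Compute month-over-month retention per signup cohort.

    `events` is a list of (user_id, signup_month, active_month)
    tuples. Returns {cohort: {months_since_signup: retained_users}},
    counting each user at most once per offset.
    """
    cohorts = defaultdict(lambda: defaultdict(set))
    for user, signup, active in events:
        offset = active - signup
        if offset >= 0:
            cohorts[signup][offset].add(user)
    return {
        cohort: {offset: len(users) for offset, users in sorted(months.items())}
        for cohort, months in cohorts.items()
    }
```

Sets deduplicate repeated activity within a month, so the counts reflect distinct retained users rather than raw events.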
Required Qualifications
· Bachelor’s degree in Data Science, Statistics, Computer Science, or a related field.
· 2 – 4 years of experience in data analysis or business intelligence.
· Strong hands-on experience with SQL and Python (pandas, NumPy, matplotlib).
· Familiarity with data visualization tools (Streamlit, Tableau, Metabase, Power BI, etc.)
· Ability to translate complex data into simple visual stories and clear recommendations.
· Strong attention to detail and a mindset for experimentation.
Preferred (Not Mandatory)
· Exposure to GenAI, LLMs (e.g., OpenAI, HuggingFace), or NLP concepts.
· Experience working with healthcare, wellness, or e-commerce datasets.
· Familiarity with REST APIs, JSON structures, or chatbot systems.
· Interest in building tools that impact women’s health and wellness.
Why Join Us?
· Be part of a high-growth startup tackling a real need in women’s healthcare.
· Work with a passionate, purpose-driven team.
· Opportunity to grow into GenAI/ML-focused roles as we scale.
· Competitive salary and career progression.
Best Regards,
Indrani Dutta
MIROR THERAPEUTICS PRIVATE LIMITED
Connect with me here or on my LinkedIn page.
Job Description: Data Analyst Intern
Location: On-site, Bangalore
Duration: 6 months (Full-time)
About us:
- Optimo Capital is a newly established NBFC founded by Prashant Pitti, who is also a co-founder of EaseMyTrip (a billion-dollar listed startup that grew profitably without any funding).
- Our mission is to serve the underserved MSME businesses with their credit needs in India. With less than 15% of MSMEs having access to formal credit, we aim to bridge this credit gap through a phygital model (physical branches + digital decision-making). As a technology and data-first company, tech lovers and data enthusiasts play a crucial role in building the analytics & tech at Optimo that helps the company thrive.
What we offer:
- Join our dynamic startup team and play a crucial role in core data analytics projects involving credit risk, lending strategy, credit features analytics, collections, and portfolio management.
- The analytics team at Optimo works closely with the Credit & Risk departments, helping them make data-backed decisions.
- This is an exceptional opportunity to learn, grow, and make a significant impact in a fast-paced startup environment.
- We believe that the freedom and accountability to make decisions in analytics and technology brings out the best in you and helps us build the best for the company.
- This environment offers you a steep learning curve and an opportunity to experience the direct impact of your analytics contributions. Along with this, we offer industry-standard compensation.
What we look for:
- We are looking for individuals with a strong analytical mindset, high levels of initiative / ownership, ability to drive tasks independently, clear communication and comfort working across teams.
- We value not only your skills but also your attitude and hunger to learn, grow, lead, and thrive, both individually and as part of a team.
- We encourage you to take on challenges, bring in new ideas, implement them, and build the best analytics systems.
Key Responsibilities:
- Conduct analytical deep-dives such as funnel analysis, cohort tracking, branch-wise performance reviews, TAT analysis, portfolio diagnostics, and credit risk analytics that lead to clear actions.
- Work closely with stakeholders to convert business questions into measurable analyses and clearly communicated outputs.
- Support digital underwriting initiatives, including assisting in the development and analysis of underwriting APIs that enable decisioning on borrower eligibility (“whom to lend”) and exposure sizing (“how much to lend”).
- Develop and maintain periodic MIS and KPI reporting for key business functions (e.g., pipeline, disbursals, TAT, conversion, collections performance, portfolio trends).
- Use Python (pandas, numpy) to clean, transform, and analyse datasets; automate recurring reports and data workflows.
- Perform basic scripting to support data validation, extraction, and lightweight automation.
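To make the underwriting-API idea above concrete, here is a deliberately simplified, hypothetical eligibility check of the "whom to lend / how much to lend" kind; every rule, threshold, and name below is invented for illustration and is not Optimo's actual decisioning logic:

```python
def underwriting_decision(monthly_turnover, bureau_score, existing_emi):
    """Toy decisioning: an eligibility gate plus a crude exposure cap."""
    # "Whom to lend": basic eligibility gates (thresholds are illustrative)
    if bureau_score < 650 or monthly_turnover <= 0:
        return {"eligible": False, "limit": 0}
    # "How much to lend": cap total EMI burden at 50% of monthly turnover
    headroom = 0.5 * monthly_turnover - existing_emi
    if headroom <= 0:
        return {"eligible": False, "limit": 0}
    # Assume a flat 24-month tenor when converting EMI headroom to a limit
    return {"eligible": True, "limit": round(headroom * 24)}

print(underwriting_decision(100_000, 720, 20_000))
```

A real underwriting service would wrap logic like this behind an API, pull inputs from bureau and banking data, and log every decision for audit.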
Required Skills and Qualifications:
- Strong proficiency in Excel, including pivots, lookup functions, data cleaning, and structured analysis.
- Strong working knowledge of SQL, including joins, aggregations, CTEs, and window functions.
- Proficiency in Python for data analysis (pandas, numpy); ability to write clean, maintainable scripts/notebooks.
- Strong logical reasoning and attention to detail, including the ability to identify errors and validate results rigorously.
- Ability to work with ambiguous requirements and imperfect datasets while maintaining output quality.
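As a quick illustration of the SQL constructs listed above (aggregations, CTEs, window functions), here is a self-contained sketch run against an in-memory SQLite database; the table and the disbursal figures are made up for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE disbursals (branch TEXT, month TEXT, amount INTEGER);
INSERT INTO disbursals VALUES
  ('BLR', '2024-01', 100), ('BLR', '2024-02', 150),
  ('HYD', '2024-01', 80),  ('HYD', '2024-02', 60);
""")

# CTE for monthly totals, then a window function for the per-branch running total
rows = conn.execute("""
WITH monthly AS (
  SELECT branch, month, SUM(amount) AS total
  FROM disbursals
  GROUP BY branch, month
)
SELECT branch, month, total,
       SUM(total) OVER (PARTITION BY branch ORDER BY month) AS running_total
FROM monthly
ORDER BY branch, month
""").fetchall()

for row in rows:
    print(row)
```

The same CTE-plus-window pattern carries over directly to production warehouses, which is why interviewers for roles like this tend to probe it.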
Preferred (Good to Have):
- REST APIs: A fundamental understanding of APIs and previous experience or projects related to API development/integrations.
- Familiarity with basic AWS tools/services (S3, Lambda, EC2, Glue jobs).
- Experience with Git and basic engineering practices.
- Any experience with the lending/finance industry.
Role & Responsibilities
As a Founding Engineer, you'll join the engineering team during an exciting growth phase, contributing to a platform that handles complex financial operations for B2B companies. You'll work on building scalable systems that automate billing, usage metering, revenue recognition, and financial reporting—directly impacting how businesses manage their revenue operations.
This role is ideal for someone who thrives in a dynamic startup environment where requirements evolve quickly and problems require creative solutions. You'll work on diverse technical challenges, from API development to external integrations, while collaborating with senior engineers, product managers, and customer success teams.
Key Responsibilities
- Build core platform features: Develop robust APIs, services, and integrations that power billing automation and revenue recognition capabilities.
- Work across the full stack: Contribute to backend services and frontend interfaces to ensure seamless user experiences.
- Implement critical integrations: Connect the platform with external systems including CRMs, data warehouses, ERPs, and payment processors.
- Optimize for scale: Design systems that handle complex pricing models, high-volume usage data, and real-time financial calculations.
- Drive quality and best practices: Write clean, maintainable code and participate in code reviews and architectural discussions.
- Solve complex problems: Debug issues across the stack and collaborate with cross-functional teams to address evolving client needs.
The Impact You'll Make
- Power business growth: Enable fast-growing B2B companies to scale billing and revenue operations efficiently.
- Build critical financial infrastructure: Contribute to systems handling high-value transactions with accuracy and compliance.
- Shape product direction: Join during a scaling phase where your contributions directly impact product evolution and customer success.
- Accelerate your expertise: Gain deep exposure to financial systems, B2B SaaS operations, and enterprise-grade software development.
- Drive the future of B2B commerce: Help build infrastructure supporting next-generation pricing models, from usage-based to value-based billing.
Ideal Candidate Profile
Experience
- 5+ years of hands-on Backend Engineering experience building scalable, production-grade systems.
- Strong backend development experience using one or more frameworks: FastAPI / Django (Python), Spring (Java), or Express (Node.js).
- Deep understanding of relevant libraries, tools, and best practices within the chosen backend framework.
- Strong experience with databases (SQL & NoSQL), including efficient data modeling and performance optimization.
- Proven experience designing, building, and maintaining APIs, services, and backend systems with solid system design and clean code practices.
Domain
- Experience with financial systems, billing platforms, or fintech applications is highly preferred.
Company Background
- Experience working in product companies or startups (preferably Series A to Series D).
Education
- Candidates from Tier 1 engineering institutes (IITs, BITS, etc.) are highly preferred.
About CloudThat:-
At CloudThat, we are driven by our mission to empower professionals and businesses to harness the full potential of cloud technologies. As a leader in cloud training and consulting services in India, our core values guide every decision we make and every customer interaction we have.
Role Overview:-
We are looking for a passionate and experienced Technical Trainer to join our expert team and help drive knowledge adoption across our customers, partners, and internal teams.
Key Responsibilities:
• Deliver high-quality, engaging technical training sessions both in-person and virtually to customers, partners, and internal teams.
• Design and develop training content, labs, and assessments based on business and technology requirements.
• Collaborate with internal and external SMEs to draft course proposals aligned with customer needs and current market trends.
• Assist in training and onboarding of other trainers and subject matter experts to ensure quality delivery of training programs.
• Create immersive lab-based sessions using diagrams, real-world scenarios, videos, and interactive exercises.
• Develop instructor guides, certification frameworks, learner assessments, and delivery aids to support end-to-end training delivery.
• Integrate hands-on project-based learning into courses to simulate practical environments and deepen understanding.
• Support the interpersonal and facilitation aspects of training, fostering an inclusive, engaging, and productive learning environment.
Skills & Qualifications:
• Experience developing content for professional certifications or enterprise skilling programs.
• Familiarity with emerging technology areas such as cloud computing, AI/ML, DevOps, or data engineering.
Technical Competencies:
- Expertise in languages like C, C++, Python, Java
- Understanding of algorithms and data structures
- Expertise in SQL
Or apply directly: https://cloudthat.keka.com/careers/jobdetails/95441
Review Criteria:
- Strong Dremio / Lakehouse Data Architect profile
- 5+ years of experience in Data Architecture / Data Engineering, with minimum 3+ years hands-on in Dremio
- Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
- Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts
- Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
- Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
- Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
- Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
- Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies
Preferred:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments
Role & Responsibilities:
You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
Ideal Candidate:
- Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
Preferred:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
- Exposure to Snowflake, Databricks, or BigQuery environments.
- Experience in high-tech, manufacturing, or enterprise data modernization programs.
Key Responsibilities :
- Develop backend services using Node.js, including API orchestration and integration with AI/ML services.
- Implement frontend redaction features using Redact.js, integrated into React.js dashboards.
- Collaborate with AI/ML engineers to embed intelligent feedback and behavioral analysis.
- Build secure, multi-tenant systems with role-based access control (RBAC).
- Optimize performance for real-time audio analysis and transcript synchronization.
- Participate in agile grooming sessions and contribute to architectural decisions.
Required Skills :
- Experience with Redact.js or similar annotation/redaction libraries.
- Strong understanding of RESTful APIs, React.js, and Material-UI.
- Familiarity with Azure services, SQL, and authentication protocols (SSO, JWT).
- Experience with secure session management and data protection standards.
Preferred Qualifications :
- Exposure to AI/ML workflows and Python-based services.
- Experience with Livekit or similar real-time communication platforms.
- Familiarity with Power BI and accessibility standards (WCAG).
Soft Skills :
- Problem-solving mindset and adaptability.
- Ability to work independently and meet tight deadlines.
JOB DETAILS:
* Job Title: Engineering Manager
* Industry: Technology
* Salary: Best in Industry
* Experience: 9-12 years
* Location: Bengaluru
* Education: B.Tech in computer science or related field from Tier 1, Tier 2 colleges
Role & Responsibilities
We are seeking a visionary and decisive Engineering Manager to join our dynamic team. In this role, you will lead and inspire a talented team of software engineers, driving innovation and excellence in product development efforts. This is an exciting opportunity to influence and shape the future of our engineering organization.
Key Responsibilities-
As an Engineering Manager, you will be responsible for managing the overall software development life cycle of one product. You will work and manage a cross-functional team consisting of Backend Engineers, Frontend Engineers, QA, SDET, Product Managers, Product Designers, Technical Project Managers, Data Scientists, etc.
- Responsible for mapping business objectives to an optimum engineering structure, including correct estimation of resource allocation.
- Responsible for key technical and product decisions. Provide direction and mentorship to the team. Set up best practices for engineering.
- Work closely with the Product Manager and help them in getting relevant inputs from the engineering team.
- Plan and track the development and release schedules, proactively assess and mitigate risks. Prepare for contingencies and provide visible leadership in crisis.
- Conduct regular 1:1s for performance feedback and lead their appraisals.
- Responsible for driving good coding practices in the team like good quality code, documentation, timely bug fixing, etc.
- Report on the status of development, quality, operations, and system performance to management.
- Create and maintain an open and transparent environment that values speed and innovation and motivates engineers to build innovative and effective systems rapidly.
Ideal Candidate
- Strong Engineering Manager / Technical Leadership Profile
- Must have 9+ years of experience in software engineering with experience building complex, large-scale products
- Must have 2+ years of experience as an Engineering Manager / Tech Lead with people management responsibilities
- Strong technical foundation with hands-on experience in Java (or equivalent compiled language), scripting languages, web technologies, and databases (SQL/NoSQL)
- Proven ability to solve large-scale technical problems and guide teams on architecture, design, quality, and best practices
- Experience in leading cross-functional teams, planning and tracking delivery, mentoring engineers, conducting performance reviews, and driving engineering excellence
- Must have strong experience working with Product Managers, UX designers, QA, and other cross-functional partners
- Excellent communication and interpersonal skills to influence technical direction and stakeholder decisions
- (Company): Product companies
- Must have stayed for at least 2 years with each of the previous companies
- (Education): B.Tech in computer science or related field from Tier 1, Tier 2 colleges
Job Description -
Profile: .Net Full Stack Lead
Experience Required: 7–12 Years
Location: Pune, Bangalore, Chennai, Coimbatore, Delhi, Hosur, Hyderabad, Kochi, Kolkata, Trivandrum
Work Mode: Hybrid
Shift: Normal Shift
Key Responsibilities:
- Design, develop, and deploy scalable microservices using .NET Core and C#
- Build and maintain serverless applications using AWS services (Lambda, SQS, SNS)
- Develop RESTful APIs and integrate them with front-end applications
- Work with both SQL and NoSQL databases to optimize data storage and retrieval
- Implement Entity Framework for efficient database operations and ORM
- Lead technical discussions and provide architectural guidance to the team
- Write clean, maintainable, and testable code following best practices
- Collaborate with cross-functional teams to deliver high-quality solutions
- Participate in code reviews and mentor junior developers
- Troubleshoot and resolve production issues in a timely manner
Required Skills & Qualifications:
- 7–12 years of hands-on experience in .NET development
- Strong proficiency in .NET Framework, .NET Core, and C#
- Proven expertise with AWS services (Lambda, SQS, SNS)
- Solid understanding of SQL and NoSQL databases (SQL Server, MongoDB, DynamoDB, etc.)
- Experience building and deploying Microservices architecture
- Proficiency in Entity Framework or EF Core
- Strong knowledge of RESTful API design and development
- Experience with React or Angular is good to have
- Understanding of CI/CD pipelines and DevOps practices
- Strong debugging, performance optimization, and problem-solving skills
- Experience with design patterns, SOLID principles, and best coding practices
- Excellent communication and team leadership skills
• Minimum 4+ years of experience
• Experience in designing, developing, and maintaining backend services using C# 12 and .NET 8 or .NET 9
• Experience in building and operating cloud-native and serverless applications on AWS
• Experience in developing and integrating services using AWS Lambda, API Gateway, DynamoDB, EventBridge, CloudWatch, SQS, SNS, Kinesis, Secrets Manager, S3 storage, serverless architectural models, etc.
• Experience in integrating services using the AWS SDK
• Should be cognizant of the OMS paradigms, including Inventory Management, inventory publish, supply feed processing, control mechanisms, ATP publish, Order Orchestration, workflow setup and customizations, integrations with tax, AVS, payment engines, and sourcing algorithms, and managing reservations with back orders, schedule mechanisms, flash sales management, etc.
• Should have decent end-to-end knowledge of the various Commerce subsystems, including Storefront, Core Commerce back end, Post-Purchase processing, OMS, Store/Warehouse Management processes, and Supply Chain and Logistics processes. This is to ascertain the candidate's know-how of the overall Retail landscape of any customer.
• Strong knowledge of querying in Oracle DB and SQL Server
• Able to read, write, and manage PL/SQL procedures in Oracle
• Strong debugging, performance tuning, and problem-solving skills
• Experience with event-driven and microservices architectures
Job Details
- Job Title: Staff Engineer
- Industry: Technology
- Domain - Information technology (IT)
- Experience Required: 9-12 years
- Employment Type: Full Time
- Job Location: Bengaluru
- CTC Range: Best in Industry
Role & Responsibilities
As a Staff Engineer at company, you will play a critical role in defining and driving our backend architecture as we scale globally. You’ll own key systems that handle high volumes of data and transactions, ensuring performance, reliability, and maintainability across distributed environments.
Key Responsibilities-
- Own one or more core applications end-to-end, ensuring reliability, performance, and scalability.
- Lead the design, architecture, and development of complex, distributed systems, frameworks, and libraries aligned with company’s technical strategy.
- Drive engineering operational excellence by defining robust roadmaps for system reliability, observability, and performance improvements.
- Analyze and optimize existing systems for latency, throughput, and efficiency, ensuring they perform at scale.
- Collaborate cross-functionally with Product, Data, and Infrastructure teams to translate business requirements into technical deliverables.
- Mentor and guide engineers, fostering a culture of technical excellence, ownership, and continuous learning.
- Establish and uphold coding standards, conduct design and code reviews, and promote best practices across teams.
- Stay ahead of the curve on emerging technologies, frameworks, and patterns to strengthen company’s technology foundation.
- Contribute to hiring by identifying and attracting top-tier engineering talent.
Ideal Candidate
- Strong staff engineer profile
- Must have 9+ years in backend engineering with Java, Spring/Spring Boot, and microservices, building large and scalable systems
- Must have been SDE-3 / Tech Lead / Lead SE for at least 2.5 years
- Strong in DSA, system design, design patterns, and problem-solving
- Proven experience building scalable, reliable, high-performance distributed systems
- Hands-on with SQL/NoSQL databases, REST/gRPC APIs, concurrency & async processing
- Experience in AWS/GCP, CI/CD pipelines, and observability/monitoring
- Excellent ability to explain complex technical concepts to varied stakeholders
- Product companies (B2B SaaS preferred)
- Must have stayed for at least 2 years with each of the previous companies
- (Education): B.Tech in computer science from Tier 1, Tier 2 colleges